diff --git a/404.html b/404.html deleted file mode 100644 index 3c0218c4a..000000000 --- a/404.html +++ /dev/null @@ -1,218 +0,0 @@ - - - - - - - - 404 Page not found - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - -
-
-

404.

-

Hey! You look a little lost. This page doesn't exist (or may be private).

- ↳ Let's get you home. -
-
- - - diff --git a/CNAME b/CNAME deleted file mode 100644 index 37d312365..000000000 --- a/CNAME +++ /dev/null @@ -1 +0,0 @@ -roadmap.logos.co diff --git a/authoring-content.html b/authoring-content.html new file mode 100644 index 000000000..f103e4063 --- /dev/null +++ b/authoring-content.html @@ -0,0 +1,100 @@ + +Authoring Content

All of the content in your Quartz should go in the /content folder. The content for the home page of your Quartz lives in content/index.md. If you’ve set up Quartz already, this folder should already be initialized. Any Markdown in this folder will get processed by Quartz.

+

It is recommended that you use Obsidian as a way to edit and maintain your Quartz. It comes with a nice editor and graphical interface to preview, edit, and link your local files and attachments.

+

Got everything set up? Let’s build and preview your Quartz locally!

+

Syntax

+

As Quartz uses Markdown files as the main way of writing content, it fully supports Markdown syntax. By default, Quartz also ships with a few syntax extensions like GitHub Flavored Markdown (footnotes, strikethrough, tables, tasklists) and Obsidian Flavored Markdown (callouts, wikilinks).
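For example, a note that uses a few of these extensions might look like the following (an illustrative snippet; the callout text and link target are made up):

> [!note]
> Callouts like this one render as styled boxes.

You can link to another note with a wikilink like [[another-note]], add a footnote[^1], or cross things out with ~~strikethrough~~.

[^1]: Footnotes come from GitHub Flavored Markdown.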

+

Additionally, Quartz allows you to specify extra metadata in your notes called frontmatter.

+
content/note.md
---
+title: Example Title
+draft: false
+tags:
+  - example-tag
+---
+ 
+The rest of your content lives here. You can use **Markdown** here :)
+

Some common frontmatter fields that are natively supported by Quartz:

+
    +
  • title: Title of the page. If it isn’t provided, Quartz will use the name of the file as the title.
  • +
  • aliases: Other names for this note. This is a list of strings.
  • +
  • draft: Whether to publish the page or not. This is one way to make pages private in Quartz.
  • +
  • date: A string representing the day the note was published. Normally uses YYYY-MM-DD format.
  • +
+
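For instance, extending the earlier example with the aliases and date fields (the values are illustrative):

---
title: Example Title
aliases:
  - another-name-for-this-note
draft: false
date: 2023-08-22
tags:
  - example-tag
---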

Syncing your Content

+

When your Quartz is at a point you’re happy with, you can save your changes to GitHub by doing npx quartz sync.

+
+
+
+

Flags and options

+ +
+

For full help options, you can run npx quartz sync --help.

+

Most of these have sensible defaults but you can override them if you have a custom setup:

+
    +
  • -d or --directory: the content folder. This is normally just content
  • +
  • -v or --verbose: print out extra logging information
  • +
  • --commit or --no-commit: whether to make a git commit for your changes
  • +
  • --push or --no-push: whether to push updates to your GitHub fork of Quartz
  • +
  • --pull or --no-pull: whether to try and pull in any updates from your GitHub fork (i.e. from other devices) before pushing
  • +
+
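For example, to push your changes with extra logging but without pulling updates from your fork first (an illustrative combination of the flags above):

npx quartz sync --no-pull --verbose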
\ No newline at end of file diff --git a/build.html b/build.html new file mode 100644 index 000000000..4f04f3118 --- /dev/null +++ b/build.html @@ -0,0 +1,81 @@ + +Building your Quartz

Once you’ve initialized Quartz, let’s see what it looks like locally:

+
npx quartz build --serve
+

This will start a local web server to run your Quartz on your computer. Open a web browser and visit http://localhost:8080/ to view it.

+
+
+
+

Flags and options

+ +
+

For full help options, you can run npx quartz build --help.

+

Most of these have sensible defaults but you can override them if you have a custom setup:

+
    +
  • -d or --directory: the content folder. This is normally just content
  • +
  • -v or --verbose: print out extra logging information
  • +
  • -o or --output: the output folder. This is normally just public
  • +
  • --serve: run a local hot-reloading server to preview your Quartz
  • +
  • --port: what port to run the local preview server on
  • +
  • --concurrency: how many threads to use to parse notes
  • +
+
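For example, to preview the site on a different port with extra logging (an illustrative combination of the flags above):

npx quartz build --serve --port 8081 --verbose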
\ No newline at end of file diff --git a/categories/index.html b/categories/index.html deleted file mode 100644 index 83b91122a..000000000 --- a/categories/index.html +++ /dev/null @@ -1,820 +0,0 @@ - - - - - - - - Categories - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

All Categories

- -
- -
-

Vac updates

-

7 notes with this tag

-
- - - - - -
-

Milestones

-

4 notes with this tag

-
- - - - - -
-

Nomos updates

-

4 notes with this tag

-
- - - - - -
-

Waku updates

-

4 notes with this tag

-
- - - - - -
-

Codex updates

-

3 notes with this tag

-
- - - - - -
-

Acid updates

-

2 notes with this tag

-
- - - - - -
-

Ilab updates

-

2 notes with this tag

-
- - - - - -
-

Milestones overview

-

1 notes with this tag

-
- - - - - -
-

Team updates

-

1 notes with this tag

-
- - -
-
- -
- -
- -
- - - diff --git a/categories/index.xml b/categories/index.xml deleted file mode 100644 index 690fd83ed..000000000 --- a/categories/index.xml +++ /dev/null @@ -1,10 +0,0 @@ - - - - Categories on - https://roadmap.logos.co/categories/ - Recent content in Categories on - Hugo -- gohugo.io - en-us - - diff --git a/configuration.html b/configuration.html new file mode 100644 index 000000000..d21ee680f --- /dev/null +++ b/configuration.html @@ -0,0 +1,147 @@ + +Configuration

Quartz is meant to be extremely configurable, even if you don’t know any coding. Most of the configuration you should need can be done by just editing quartz.config.ts or changing the layout in quartz.layout.ts.

+
+
+
+

Tip

+ +
+

If you edit the Quartz configuration using a text editor that has TypeScript language support, like VSCode, it will warn you when you’ve made an error in your configuration, helping you avoid mistakes!

+
+

The configuration of Quartz can be broken down into two main parts:

+
quartz.config.ts
const config: QuartzConfig = {
+  configuration: { ... },
+  plugins: { ... },
+}
+

General Configuration

+

This part of the configuration concerns anything that can affect the whole site. The following is a list breaking down all the things you can configure:

+
    +
  • pageTitle: title of the site. This is also used when generating the RSS Feed for your site.
  • +
  • enableSPA: whether to enable SPA Routing on your site.
  • +
  • enablePopovers: whether to enable popover previews on your site.
  • +
  • analytics: what to use for analytics on your site. Values can be +
      +
    • null: don’t use analytics;
    • +
    • { provider: 'plausible' }: use Plausible, a privacy-friendly alternative to Google Analytics; or
    • +
    • { provider: 'google', tagId: <your-google-tag> }: use Google Analytics
    • +
    +
  • +
  • baseUrl: this is used for sitemaps and RSS feeds that require an absolute URL to know where the canonical ‘home’ of your site lives. This is normally the deployed URL of your site (e.g. quartz.jzhao.xyz for this site). Do not include the protocol (i.e. https://) or any leading or trailing slashes. +
      +
    • This should also include the subpath if you are hosting on GitHub pages without a custom domain. For example, if my repository is jackyzha0/quartz, GitHub pages would deploy to https://jackyzha0.github.io/quartz and the baseUrl would be jackyzha0.github.io/quartz
    • +
    • Note that Quartz 4 will avoid using this as much as possible and use relative URLs whenever it can to make sure your site works no matter where you end up actually deploying it.
    • +
    +
  • +
  • ignorePatterns: a list of glob patterns that Quartz should ignore and not search through when looking for files inside the content folder. See private pages for more details.
  • +
  • theme: configure how the site looks. +
      +
    • typography: what fonts to use. Any font available on Google Fonts works here. +
        +
      • header: Font to use for headers
      • +
      • code: Font for inline and block quotes.
      • +
      • body: Font for everything
      • +
      +
    • +
    • colors: controls the theming of the site. +
        +
      • light: page background
      • +
      • lightgray: borders
      • +
      • gray: graph links, heavier borders
      • +
      • darkgray: body text
      • +
      • dark: header text and icons
      • +
      • secondary: link colour, current graph node
      • +
      • tertiary: hover states and visited graph nodes
      • +
      • highlight: internal link background, highlighted text, highlighted lines of code
      • +
      +
    • +
    +
  • +
+
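Putting a few of these fields together, a trimmed-down configuration block might look something like this (a sketch only; the values are illustrative, not defaults):

configuration: {
  pageTitle: "Logos Collective Project Roadmaps",
  enableSPA: true,
  enablePopovers: true,
  analytics: null, // or { provider: 'plausible' }
  baseUrl: "roadmap.logos.co", // no protocol, no leading or trailing slashes
  ignorePatterns: ["private", "templates"], // placeholder globs
  theme: {
    typography: {
      header: "Schibsted Grotesk",
      code: "IBM Plex Mono",
      body: "Source Sans Pro",
    },
    colors: { ... }, // light, lightgray, gray, darkgray, dark, secondary, tertiary, highlight
  },
},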

Plugins

+

You can think of Quartz plugins as a series of transformations over content.

+

+
plugins: {
+  transformers: [...],
+  filters: [...],
+  emitters: [...],
+}
+
    +
  • Transformers map over content (e.g. parsing frontmatter, generating a description)
  • +
  • Filters filter content (e.g. filtering out drafts)
  • +
  • Emitters reduce over content (e.g. creating an RSS feed or pages that list all files with a specific tag)
  • +
+

By adding, removing, and reordering plugins from the transformers, filters, and emitters fields, you can customize the behaviour of Quartz.
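For instance, a single plugins block tying the three fields together might look like this (a sketch only; the filter and emitter entries are placeholders rather than a complete, recommended set):

plugins: {
  transformers: [
    Plugin.FrontMatter(),                    // parse frontmatter first
    Plugin.Latex({ renderEngine: "katex" }), // then render any math it contains
  ],
  filters: [
    // e.g. a filter that drops pages marked draft: true
  ],
  emitters: [
    // e.g. emitters that create content pages, tag pages, and the RSS feed
  ],
}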

+
+
+
+

Note

+ +
+

Each node is modified by every transformer in order. Some transformers are position-sensitive, so you may need to take special note of whether a plugin needs to come before or after any other particular plugins.

+
+

Additionally, plugins may also have their own configuration settings that you can pass in. For example, the Latex plugin allows you to pass in a field specifying the renderEngine to choose between Katex and MathJax.

+
transformers: [
+  Plugin.FrontMatter(), // uses default options
+  Plugin.Latex({ renderEngine: "katex" }), // specify some options
+]
+

If you’d like to make your own plugins, read the guide on making plugins for more information.

\ No newline at end of file diff --git a/hosting.html b/hosting.html new file mode 100644 index 000000000..38dc12f46 --- /dev/null +++ b/hosting.html @@ -0,0 +1,198 @@ + +Hosting

Quartz effectively turns your Markdown files and other resources into a bundle of HTML, JS, and CSS files (a website!).

+

However, if you’d like to publish your site to the world, you need a way to host it online. This guide will detail how to deploy with either GitHub Pages or Cloudflare Pages, but any service that allows you to deploy static HTML should work as well (e.g. Netlify, Replit, etc.).

+
+
+
+

Tip

+ +
+

Some Quartz features (like RSS feed and sitemap generation) require baseUrl to be set correctly in your configuration to work properly. Make sure you set this before deploying!

+
+

Cloudflare Pages

+
    +
  1. Log in to the Cloudflare dashboard and select your account.
  2. +
  3. In Account Home, select Workers & Pages > Create application > Pages > Connect to Git.
  4. +
  5. Select the new GitHub repository that you created and, in the Set up builds and deployments section, provide the following information:
  6. +
+ + + + + + + + + + + + + + + + + + + + + + + + + +
Configuration option | Value
Production branch | v4
Framework preset | None
Build command | npx quartz build
Build output directory | public
+

Press “Save and deploy” and Cloudflare should have a deployed version of your site in about a minute. Then, every time you sync your Quartz changes to GitHub, your site should be updated.

+

To add a custom domain, check out Cloudflare’s documentation.

+

GitHub Pages

+

As with Quartz 3, you can deploy the site generated by Quartz 4 via GitHub Pages.

+

In your local Quartz, create a new file quartz/.github/workflows/deploy.yml.

+
quartz/.github/workflows/deploy.yml
name: Deploy Quartz site to GitHub Pages
+ 
+on:
+  push:
+    branches:
+      - v4
+ 
+permissions:
+  contents: read
+  pages: write
+  id-token: write
+ 
+concurrency:
+  group: "pages"
+  cancel-in-progress: false
+ 
+jobs:
+  build:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0 # Fetch all history for git info
+      - uses: actions/setup-node@v3
+        with:
+          node-version: 18.14
+      - name: Install Dependencies
+        run: npm ci
+      - name: Build Quartz
+        run: npx quartz build
+      - name: Upload artifact
+        uses: actions/upload-pages-artifact@v2
+        with:
+          path: public
+ 
+  deploy:
+    needs: build
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+    runs-on: ubuntu-latest
+    steps:
+      - name: Deploy to GitHub Pages
+        id: deployment
+        uses: actions/deploy-pages@v2
+

Then:

+
    +
1. Head to the “Settings” tab of your forked repository and, in the sidebar, click “Pages”. Under “Source”, select “GitHub Actions”.
  2. +
  3. Commit these changes by doing npx quartz sync. This should deploy your site to <github-username>.github.io/<repository-name>.
  4. +
+
+
+
+

Tip

+ +
+

If you get an error about not being allowed to deploy to github-pages due to environment protection rules, make sure you remove any existing GitHub pages environments.

+

You can do this by going to the Settings page of your GitHub fork, opening the Environments tab, and pressing the trash icon. The GitHub Action will recreate the environment for you correctly the next time you sync your Quartz.

+
+

Custom Domain

+

Here’s how to add a custom domain to your GitHub pages deployment.

+
    +
  1. Head to the “Settings” tab of your forked repository.
  2. +
  3. In the “Code and automation” section of the sidebar, click “Pages”.
  4. +
  5. Under “Custom Domain”, type your custom domain and click “Save”.
  6. +
  7. This next step depends on whether you are using an apex domain (example.com) or a subdomain (subdomain.example.com). +
      +
• If you are using an apex domain, navigate to your DNS provider and create an A record that points your apex domain to GitHub’s servers, which have the following IP addresses: +
        +
      • 185.199.108.153
      • +
      • 185.199.109.153
      • +
      • 185.199.110.153
      • +
      • 185.199.111.153
      • +
      +
    • +
    • If you are using a subdomain, navigate to your DNS provider and create a CNAME record that points your subdomain to the default domain for your site. For example, if you want to use the subdomain quartz.example.com for your user site, create a CNAME record that points quartz.example.com to <github-username>.github.io.
    • +
    +
  8. +
+

The above shows a screenshot of Google Domains configured for both jzhao.xyz (an apex domain) and quartz.jzhao.xyz (a subdomain).
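In zone-file terms, the two options described above map to DNS records roughly like this (illustrative values; substitute your own domain and GitHub username, and repeat the A record for each of the four IP addresses listed):

example.com.           A       185.199.108.153
quartz.example.com.    CNAME   <github-username>.github.io.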

+

See the GitHub documentation for more detail about how to set up your own custom domain with GitHub Pages.

+
+
+
+

Why aren't my changes showing up?

+ +
+

There could be many different reasons why your changes aren’t showing up but the most likely reason is that you forgot to push your changes to GitHub.

+

Make sure you save your changes to Git and sync them to GitHub by doing npx quartz sync. This will also make sure to pull any updates you may have made from other devices so you have them locally.

+
\ No newline at end of file diff --git a/icon.png b/icon.png deleted file mode 100644 index aa994a73b..000000000 Binary files a/icon.png and /dev/null differ diff --git a/images/Overlay-Communications-Brainstorm.png b/images/Overlay-Communications-Brainstorm.png deleted file mode 100644 index 5634ce7c5..000000000 Binary files a/images/Overlay-Communications-Brainstorm.png and /dev/null differ diff --git a/images/dns-records.png b/images/dns-records.png new file mode 100644 index 000000000..bf9f854bd Binary files /dev/null and b/images/dns-records.png differ diff --git a/images/quartz-layout.png b/images/quartz-layout.png new file mode 100644 index 000000000..03435f7d5 Binary files /dev/null and b/images/quartz-layout.png differ diff --git a/images/quartz-transform-pipeline.png b/images/quartz-transform-pipeline.png new file mode 100644 index 000000000..657f0a3ab Binary files /dev/null and b/images/quartz-transform-pipeline.png differ diff --git a/index.css b/index.css new file mode 100644 index 000000000..f0180249f --- /dev/null +++ b/index.css @@ -0,0 +1 @@ +:root{--shiki-color-text:#24292e;--shiki-color-background:#f8f8f8;--shiki-token-constant:#005cc5;--shiki-token-string:#032f62;--shiki-token-comment:#6a737d;--shiki-token-keyword:#d73a49;--shiki-token-parameter:#24292e;--shiki-token-function:#24292e;--shiki-token-string-expression:#22863a;--shiki-token-punctuation:#24292e;--shiki-token-link:#24292e}[saved-theme=dark]{--shiki-color-text:#e1e4e8!important;--shiki-color-background:#24292e!important;--shiki-token-constant:#79b8ff!important;--shiki-token-string:#9ecbff!important;--shiki-token-comment:#6a737d!important;--shiki-token-keyword:#f97583!important;--shiki-token-parameter:#e1e4e8!important;--shiki-token-function:#e1e4e8!important;--shiki-token-string-expression:#85e89d!important;--shiki-token-punctuation:#e1e4e8!important;--shiki-token-link:#e1e4e8!important}.callout{border:1px solid var(--border);background-color:var(--bg);box-sizing:border-box;border-radius:5px;padding:0 1rem;transition:max-height .3s;overflow-y:hidden}.callout>:nth-child(2){margin-top:0}.callout[data-callout=note]{--color:#448aff;--border:#448aff44;--bg:#448aff10}.callout[data-callout=abstract]{--color:#00b0ff;--border:#00b0ff44;--bg:#00b0ff10}.callout[data-callout=info],.callout[data-callout=todo]{--color:#00b8d4;--border:#00b8d444;--bg:#00b8d410}.callout[data-callout=tip]{--color:#00bfa5;--border:#00bfa544;--bg:#00bfa510}.callout[data-callout=success]{--color:#09ad7a;--border:#09ad7144;--bg:#09ad7110}.callout[data-callout=question]{--color:#dba642;--border:#dba64244;--bg:#dba64210}.callout[data-callout=warning]{--color:#db8942;--border:#db894244;--bg:#db894210}.callout[data-callout=failure],.callout[data-callout=danger],.callout[data-callout=bug]{--color:#db4242;--border:#db424244;--bg:#db424210}.callout[data-callout=example]{--color:#7a43b5;--border:#7a43b544;--bg:#7a43b510}.callout[data-callout=quote]{--color:var(--secondary);--border:var(--lightgray)}.callout.is-collapsed>.callout-title>.fold{transform:rotate(-90deg)}.callout-title{color:var(--color);align-items:center;gap:5px;padding:1rem 0;display:flex}.callout-title .fold{opacity:.8;cursor:pointer;margin-left:.5rem;transition:transform 
.3s}.callout-title>.callout-title-inner>p{color:var(--color);margin:0}.callout-icon{width:18px;height:18px}.callout-title-inner{font-weight:700}html{scroll-behavior:smooth;-webkit-text-size-adjust:none;-moz-text-size-adjust:none;text-size-adjust:none;width:100vw;overflow-x:hidden}body,section{box-sizing:border-box;background-color:var(--light);font-family:var(--bodyFont);color:var(--darkgray);max-width:100%;margin:0}.text-highlight{background-color:#fff23688;border-radius:5px;padding:0 .1rem}p,ul,text,a,tr,td,li,ol,ul,.katex,.math{color:var(--darkgray);fill:var(--darkgray);overflow-wrap:anywhere;-webkit-hyphens:auto;hyphens:auto}.math.math-display{text-align:center}a{color:var(--secondary);font-weight:600;text-decoration:none;transition:color .2s}a:hover{color:var(--tertiary)!important}a.internal{background-color:var(--highlight);border-radius:5px;padding:0 .1rem;text-decoration:none}.desktop-only{display:initial}@media (max-width:1510px){.desktop-only{display:none}}.mobile-only{display:none}@media (max-width:1510px){.mobile-only{display:initial}.page{max-width:750px;margin:0 auto;padding:0 1rem}}.page article>h1{font-size:2rem}.page article li:has(>input[type=checkbox]){padding-left:0;list-style-type:none}.page article li:has(>input[type=checkbox]:checked){text-decoration:line-through;-webkit-text-decoration-color:var(--gray);text-decoration-color:var(--gray);color:var(--gray)}.page article li>*{margin-top:0;margin-bottom:0}.page article p>strong{color:var(--dark)}.page>#quartz-body{width:100%;display:flex}@media (max-width:1510px){.page>#quartz-body{flex-direction:column}}.page>#quartz-body .sidebar{box-sizing:border-box;flex-direction:column;flex:1;gap:2rem;width:380px;margin-top:6rem;padding:0 4rem;display:flex;position:fixed;top:0}@media (max-width:1510px){.page>#quartz-body .sidebar{position:initial;width:initial;flex-direction:row;margin-top:2rem;padding:0}}.page>#quartz-body .sidebar.left{left:calc(50vw - 755px)}@media (max-width:1510px){.page>#quartz-body .sidebar.left{align-items:center;gap:0}}.page>#quartz-body .sidebar.right{right:calc(50vw - 755px)}@media (max-width:1510px){.page>#quartz-body .sidebar.right>*{flex:1}}.page .page-header{width:750px;margin:6rem auto 0}@media (max-width:1510px){.page .page-header{width:initial;margin-top:2rem}}.page .center,.page footer{width:750px;margin-left:auto;margin-right:auto}@media (max-width:1510px){.page .center,.page footer{width:initial;margin-left:0;margin-right:0}}.footnotes{border-top:1px solid var(--lightgray);margin-top:2rem}input[type=checkbox]{color:var(--secondary);border:1px solid var(--lightgray);background-color:var(--light);appearance:none;border-radius:3px;width:16px;height:16px;margin-inline:-1.4rem .2rem;position:relative;transform:translateY(2px)}input[type=checkbox]:checked{border-color:var(--secondary);background-color:var(--secondary)}input[type=checkbox]:checked:after{content:"";border:solid var(--light);border-width:0 2px 2px 0;width:4px;height:8px;display:block;position:absolute;top:1px;left:4px;transform:rotate(45deg)}blockquote{border-left:3px solid var(--secondary);margin:1rem 0;padding-left:1rem;transition:border-color 
.2s}h1,h2,h3,h4,h5,h6,thead{font-family:var(--headerFont);color:var(--dark);font-weight:revert;margin-bottom:0}article>h1>a,article>h2>a,article>h3>a,article>h4>a,article>h5>a,article>h6>a,article>thead>a{color:var(--dark)}article>h1>a.internal,article>h2>a.internal,article>h3>a.internal,article>h4>a.internal,article>h5>a.internal,article>h6>a.internal,article>thead>a.internal{background-color:#0000}h1[id]>a[href^=\#],h2[id]>a[href^=\#],h3[id]>a[href^=\#],h4[id]>a[href^=\#],h5[id]>a[href^=\#],h6[id]>a[href^=\#]{opacity:0;font-family:var(--codeFont);-webkit-user-select:none;user-select:none;margin:0 .5rem;transition:opacity .2s;display:inline-block;transform:translateY(-.1rem)}h1[id]:hover>a,h2[id]:hover>a,h3[id]:hover>a,h4[id]:hover>a,h5[id]:hover>a,h6[id]:hover>a{opacity:1}h1{margin-top:2.25rem;margin-bottom:1rem;font-size:1.75rem}h2{margin-top:1.9rem;margin-bottom:1rem;font-size:1.4rem}h3{margin-top:1.62rem;margin-bottom:1rem;font-size:1.12rem}h4,h5,h6{margin-top:1.5rem;margin-bottom:1rem;font-size:1rem}div[data-rehype-pretty-code-fragment]{line-height:1.6rem;position:relative}div[data-rehype-pretty-code-fragment]>div[data-rehype-pretty-code-title]{font-family:var(--codeFont);border:1px solid var(--lightgray);color:var(--darkgray);border-radius:5px;width:max-content;margin-bottom:-.5rem;padding:.1rem .5rem;font-size:.9rem}div[data-rehype-pretty-code-fragment]>pre{padding:.5rem 0}pre{font-family:var(--codeFont);border:1px solid var(--lightgray);border-radius:5px;padding:.5rem;overflow-x:auto}pre:has(>code.mermaid){border:none}pre>code{counter-reset:line;counter-increment:line 0;background:0 0;padding:0;font-size:.85rem;display:grid}pre>code [data-highlighted-chars]{background-color:var(--highlight);border-radius:5px}pre>code>[data-line]{box-sizing:border-box;border-left:3px solid #0000;padding:0 .25rem}pre>code>[data-line][data-highlighted-line]{background-color:var(--highlight);border-left:3px solid var(--secondary)}pre>code>[data-line]:before{content:counter(line);counter-increment:line;text-align:right;color:#738a9499;width:1rem;margin-right:1rem;display:inline-block}pre>code[data-line-numbers-max-digits="2"]>[data-line]:before{width:2rem}pre>code[data-line-numbers-max-digits="3"]>[data-line]:before{width:3rem}code{color:var(--dark);font-size:.9em;font-family:var(--codeFont);background:var(--lightgray);border-radius:5px;padding:.1rem .2rem}tbody,li,p{line-height:1.6rem}table{border-collapse:collapse;margin:1rem;padding:1.5rem}table>*{line-height:2rem}th{text-align:left;border-bottom:2px solid var(--gray);padding:.4rem 1rem}td{padding:.2rem 1rem}tr{border-bottom:1px solid var(--lightgray)}tr:last-child{border-bottom:none}img{border-radius:5px;max-width:100%;margin:1rem 0}p>img+em{display:block;transform:translateY(-1rem)}hr{background-color:var(--lightgray);border:none;width:100%;height:1px;margin:2rem auto}audio,video{border-radius:5px;width:100%}.spacer{flex:auto}ul.overflow,ol.overflow{content:"";clear:both;height:300px;overflow-y:auto}ul.overflow>li:last-of-type,ol.overflow>li:last-of-type{margin-bottom:50px}ul.overflow:after,ol.overflow:after{pointer-events:none;content:"";opacity:1;background:linear-gradient(transparent 0px,var(--light));width:100%;height:50px;transition:opacity .3s;position:absolute;bottom:0;left:0}header{flex-direction:row;align-items:center;gap:1.5rem;margin:2rem 0;display:flex}header h1{flex:auto;margin:0}.clipboard-button{float:right;color:var(--gray);border-color:var(--dark);background-color:var(--light);z-index:1;opacity:0;border:1px 
solid;border-radius:5px;margin:-.2rem .3rem;padding:.4rem;transition:all .2s;display:flex;position:absolute;right:0}.clipboard-button>svg{fill:var(--light);filter:contrast(.3)}.clipboard-button:hover{cursor:pointer;border-color:var(--secondary)}.clipboard-button:focus{outline:0}pre:hover>.clipboard-button{opacity:1;transition:all .2s}.article-title{margin:2rem 0 0}.content-meta{color:var(--gray);margin-top:0}.tags{gap:.4rem;margin:1rem 0;padding-left:0;list-style:none;display:flex}.tags>li{white-space:nowrap;overflow-wrap:normal;margin:0;display:inline-block}a.tag-link{background-color:var(--highlight);border-radius:8px;padding:.2rem .5rem}.page-title{margin:0}.search{flex-grow:.3;min-width:-moz-fit-content;min-width:fit-content;max-width:14rem}.search>#search-icon{background-color:var(--lightgray);cursor:pointer;white-space:nowrap;border-radius:4px;align-items:center;height:2rem;display:flex}.search>#search-icon>div{flex-grow:1}.search>#search-icon>p{padding:0 1rem;display:inline}.search>#search-icon svg{cursor:pointer;width:18px;min-width:18px;margin:0 .5rem}.search>#search-icon svg .search-path{stroke:var(--darkgray);stroke-width:2px;transition:stroke .5s}.search>#search-container{contain:layout;z-index:999;-webkit-backdrop-filter:blur(4px);backdrop-filter:blur(4px);width:100vw;height:100vh;display:none;position:fixed;top:0;left:0;overflow-y:auto}.search>#search-container.active{display:inline-block}.search>#search-container>#search-space{width:50%;margin-top:15vh;margin-left:auto;margin-right:auto}@media (max-width:1510px){.search>#search-container>#search-space{width:90%}}.search>#search-container>#search-space>*{background:var(--light);border-radius:5px;width:100%;margin-bottom:2em;box-shadow:0 14px 50px #1b21301f,0 10px 30px #1b213029}.search>#search-container>#search-space>input{box-sizing:border-box;font-family:var(--bodyFont);color:var(--dark);border:1px solid var(--lightgray);padding:.5em 1em;font-size:1.1em}.search>#search-container>#search-space>input:focus{outline:none}.search>#search-container>#search-space>#results-container .result-card{cursor:pointer;border:1px solid var(--lightgray);text-transform:none;text-align:left;background:var(--light);border-bottom:none;outline:none;width:100%;margin:0;padding:1em;font-family:inherit;font-size:100%;line-height:1.15;transition:background .2s}.search>#search-container>#search-space>#results-container .result-card .highlight{color:var(--secondary);font-weight:700}.search>#search-container>#search-space>#results-container .result-card:hover,.search>#search-container>#search-space>#results-container .result-card:focus{background:var(--lightgray)}.search>#search-container>#search-space>#results-container .result-card:first-of-type{border-top-left-radius:5px;border-top-right-radius:5px}.search>#search-container>#search-space>#results-container .result-card:last-of-type{border-bottom:1px solid var(--lightgray);border-bottom-right-radius:5px;border-bottom-left-radius:5px}.search>#search-container>#search-space>#results-container .result-card>h3{margin:0}.search>#search-container>#search-space>#results-container .result-card>p{margin-bottom:0}.darkmode{width:20px;height:20px;margin:0 10px;position:relative}.darkmode>.toggle{box-sizing:border-box;display:none}.darkmode svg{cursor:pointer;opacity:0;fill:var(--darkgray);width:20px;height:20px;transition:opacity .1s;position:absolute;top:calc(50% - 10px)}:root[saved-theme=dark] .toggle~label>#dayIcon{opacity:0}:root[saved-theme=dark] .toggle~label>#nightIcon,:root 
.toggle~label>#dayIcon{opacity:1}:root .toggle~label>#nightIcon{opacity:0}button#toc{text-align:left;cursor:pointer;color:var(--dark);background-color:#0000;border:none;align-items:center;padding:0;display:flex}button#toc h3{margin:0;font-size:1rem;display:inline-block}button#toc .fold{opacity:.8;margin-left:.5rem;transition:transform .3s}button#toc.collapsed .fold{transform:rotate(-90deg)}#toc-content{max-height:none;list-style:none;transition:max-height .5s;overflow:hidden}#toc-content.collapsed>.overflow:after{opacity:0}#toc-content ul{margin:.5rem 0;padding:0;list-style:none}#toc-content ul>li>a{color:var(--dark);opacity:.35;transition:opacity .5s,color .3s}#toc-content ul>li>a.in-view{opacity:.75}#toc-content .depth-0{padding-left:0}#toc-content .depth-1{padding-left:1rem}#toc-content .depth-2{padding-left:2rem}#toc-content .depth-3{padding-left:3rem}#toc-content .depth-4{padding-left:4rem}#toc-content .depth-5{padding-left:5rem}#toc-content .depth-6{padding-left:6rem}.graph>h3{margin:0;font-size:1rem}.graph>.graph-outer{border:1px solid var(--lightgray);box-sizing:border-box;border-radius:5px;height:250px;margin:.5em 0;position:relative;overflow:hidden}.graph>.graph-outer>#global-graph-icon{color:var(--dark);opacity:.5;cursor:pointer;background-color:#0000;border-radius:4px;width:18px;height:18px;margin:.3rem;padding:.2rem;transition:background-color .5s;position:absolute;top:0;right:0}.graph>.graph-outer>#global-graph-icon:hover{background-color:var(--lightgray)}.graph>#global-graph-outer{z-index:9999;-webkit-backdrop-filter:blur(4px);backdrop-filter:blur(4px);width:100vw;height:100%;display:none;position:fixed;top:0;left:0;overflow:hidden}.graph>#global-graph-outer.active{display:inline-block}.graph>#global-graph-outer>#global-graph-container{border:1px solid var(--lightgray);background-color:var(--light);box-sizing:border-box;border-radius:5px;width:50vw;height:60vh;position:fixed;top:50%;left:50%;transform:translate(-50%,-50%)}@media (max-width:1510px){.graph>#global-graph-outer>#global-graph-container{width:90%}}.backlinks{position:relative}.backlinks>h3{margin:0;font-size:1rem}.backlinks>ul{margin:.5rem 0;padding:0;list-style:none}.backlinks>ul>li>a{background-color:#0000}footer{text-align:left;opacity:.7;margin-bottom:4rem}footer ul{flex-direction:row;gap:1rem;margin:-1rem 0 0;padding:0;list-style:none;display:flex}ul.section-ul{margin-top:2em;padding-left:0;list-style:none}li.section-li{margin-bottom:1em}li.section-li>.section{grid-template-columns:6em 3fr 1fr;display:grid}@media (max-width:600px){li.section-li>.section>.tags{display:none}}li.section-li>.section>.tags{justify-self:end;margin-left:1rem}li.section-li>.section>.desc>h3>a{background-color:#0000}li.section-li>.section>.meta{opacity:.6;flex-basis:6em;margin:0}.popover .section{grid-template-columns:6em 1fr!important}.popover .section>.tags{display:none}.section h3,.section>.tags{margin:0}@keyframes dropin{0%{opacity:0;visibility:hidden}1%{opacity:0}to{opacity:1;visibility:visible}}.popover{z-index:999;visibility:hidden;opacity:0;padding:1rem;transition:opacity .3s,visibility .3s;position:absolute;overflow:visible}.popover>.popover-inner{font-weight:initial;line-height:normal;font-size:initial;font-family:var(--bodyFont);border:1px solid var(--lightgray);background-color:var(--light);border-radius:5px;width:30rem;max-height:20rem;padding:0 1rem 1rem;position:relative;overflow:auto;box-shadow:6px 6px 36px #00000040}.popover h1{font-size:1.5rem}@media (max-width:600px){.popover{display:none!important}}a:hover 
.popover,.popover:hover{animation:.3s .2s forwards dropin}:root{--light:#faf8f8;--lightgray:#e5e5e5;--gray:#b8b8b8;--darkgray:#4e4e4e;--dark:#2b2b2b;--secondary:#284b63;--tertiary:#84a59d;--highlight:#8f9fa926;--headerFont:"Schibsted Grotesk",-apple-system,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif;--bodyFont:"Source Sans Pro",-apple-system,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif;--codeFont:"IBM Plex Mono",ui-monospace,SFMono-Regular,SF Mono,Menlo,monospace}:root[saved-theme=dark]{--light:#161618;--lightgray:#393639;--gray:#646464;--darkgray:#d4d4d4;--dark:#ebebec;--secondary:#7b97aa;--tertiary:#84a59d;--highlight:#8f9fa926} \ No newline at end of file diff --git a/index.html b/index.html index 8c7a62334..9de983442 100644 --- a/index.html +++ b/index.html @@ -1,476 +1,94 @@ - - - - - - - - Logos Technical Roadmap and Activity - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. To learn more about the motivation, please visit the Logos Collective Site.

+ +

Waku

+ +

Codex

+ +

Nomos

+ +

Vac

+ +

Innovation Lab

+ +

Comms (Acid Info)

+
- - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. To learn more about the motivation, please visit the - -Logos Collective Site.

- -

# Waku

- -

# Codex

- -

# Nomos

- -

# Vac

- -

# Innovation Lab

- -

# Comms (Acid Info)

- - - - -
- - -
- - - - - -
- -
- - -
- - +} +function setupCallout() { + const collapsible = document.getElementsByClassName( + `callout is-collapsible` + ); + for (const div of collapsible) { + const title = div.firstElementChild; + if (title) { + title.removeEventListener(`click`, toggleCallout); + title.addEventListener(`click`, toggleCallout); + const collapsed = div.classList.contains(`is-collapsed`); + const height = collapsed ? title.scrollHeight : div.scrollHeight; + div.style.maxHeight = height + `px`; + } + } +} +document.addEventListener(`nav`, setupCallout); +window.addEventListener(`resize`, setupCallout); + \ No newline at end of file diff --git a/index.xml b/index.xml index 77ddb312d..2d986287d 100644 --- a/index.xml +++ b/index.xml @@ -1,296 +1,274 @@ - - - - Logos Technical Roadmap and Activity on + + + Logos Collective Project Roadmaps + https://roadmap.logos.co + Recent content on Logos Collective Project Roadmaps + Quartz -- quartz.jzhao.xyz + + + + Authoring Content + https://roadmap.logos.co/authoring-content + https://roadmap.logos.co/authoring-content + All of the content in your Quartz should go in the /content folder. The content for the home page of your Quartz lives in content/index.md. If you’ve setup Quartz already, this folder should already be initailized. + Tue, 22 Aug 2023 08:20:28 GMT + + Building your Quartz + https://roadmap.logos.co/build + https://roadmap.logos.co/build + Once you’ve initialized Quartz, let’s see what it looks like locally: npx quartz build --serve This will start a local web server to run your Quartz on your computer. + Tue, 22 Aug 2023 08:20:28 GMT + + Configuration + https://roadmap.logos.co/configuration + https://roadmap.logos.co/configuration + Quartz is meant to be extremely configurable, even if you don’t know any coding. Most of the configuration you should need can be done by just editing quartz. + Tue, 22 Aug 2023 08:20:28 GMT + + Hosting + https://roadmap.logos.co/hosting + https://roadmap.logos.co/hosting + Quartz effectively turns your Markdown files and other resources into a bundle of HTML, JS, and CSS files (a website!). However, if you’d like to publish your site to the world, you need a way to host it online. + Tue, 22 Aug 2023 08:20:28 GMT + + https://roadmap.logos.co/ - Recent content in Logos Technical Roadmap and Activity on - Hugo -- gohugo.io - en-us - - 2023-08-21 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Mon, 21 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac Github Repos: https://www.notion.so/Vac-Repositories-75f7feb3861048f897f0fe95ead08b06 -Vac week 34 August 21th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Test-plans for the perf protocol (99%: need to find why the executable doesn&rsquo;t work) https://github. 
- - - - Comms Milestones Overview - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Comms Roadmap Comms Projects Comms planner deadlines - - - - Innovation Lab Milestones Overview - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - iLab Milestones can be found on the Notion Page - - - - Nomos Milestones Overview - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Milestones Overview Notion Page - - - - 2023-08-14 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - 2023-08-14 Waku weekly Epics Waku Network Can Support 10K Users {E:2023-10k-users} -All software has been delivered. Pending items are: -Running stress testing on PostgreSQL to confirm performance gain https://github. - - - - 2023-08-17 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Nomos weekly report 14th August Network Privacy and Mixnet Research Mixnet architecture discussions. Potential agreement on architecture not very different from PoC Mixnet preliminary design [https://www. - - - - 2023-08-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 -Vac week 33 August 14th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920 delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925 delivered: Test-plans for the perf protocol https://github. - - - - 2023-08-11 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Codex update August 11 Client Milestone: Merkelizing block data Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504 Work on persisting/serializing Merkle Tree is underway, PR upcoming Milestone: Block discovery and retrieval Continued analysis of block discovery and retrieval - https://hackmd. - - - - 2023-08-17 <TEAM> weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Logos Lab 11th of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-08-09 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Wed, 09 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Top level priorities: Logos Growth Plan Status Relaunch Launch of LPE Podcasts (Target: Every week one podcast out) Hiring: TD studio and DC studio roles - - - - 2023-08-06 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Tue, 08 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Milestones for current works are created and used. 
Next steps are: -Refine scope of research work for rest of the year and create matching milestones for research and waku clients Review work not coming from research and setting dates Note that format matches the Notion page but can be changed easily as it&rsquo;s scripted nwaku Release Process Improvements {E:2023-qa} - - - - 2023-08-07 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Nomos weekly report Network implementation and Mixnet: Research Researched the Nym mixnet architecture in depth in order to design our prototype architecture. - - - - 2023-08-07 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - More info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week): https://www. - - - - Codex Milestones Overview - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Milestones Zenhub Tracker Miro Tracker - - - - Milestone: Waku Network supports 10k Users - https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users/ - %%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#BB2528', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7C0000', 'lineColor': '#F8B229', 'secondaryColor': '#006100', 'tertiaryColor': '#fff' } } }%% gantt dateFormat YYYY-MM-DD section Scaling 10k Users :done, 2023-01-20, 2023-07-31 Completion Deliverable TBD - - - - Waku Milestones Overview - https://roadmap.logos.co/roadmap/waku/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/milestones-overview/ - 90% - Waku Network support for 10k users 80% - Waku Network support for 1MM users 65% - Restricted-run (light node) protocols are production ready 60% - Peer management strategy for relay and light nodes are defined and implemented 10% - Quality processes are implemented for nwaku and go-waku 80% - Define and track network and community metrics for continuous monitoring improvement 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties) 15% - Dogfooding of RLN by platforms has started 06% - First protocol to incentivize operators has been defined - - - - 2023-08-02 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Leads roundup - acid Al / Comms -Status app relaunch comms campaign plan in the works. Approx. date for launch 31. - - - - 2023-08-03 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - NOTE: This is a first experimental version moving towards the new reporting structure: -Last week -vc vc::Deep Research milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission related work section milestone (15%, 2023/08/31) Nimbus Tor-push PoC basic torpush encode/decode ( https://github. 
- - - - 2023-08-02 Innovation Lab weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Wed, 02 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Logos Lab 2nd of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-08-01 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Tue, 01 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Codex update Aug 1st Client Milestone: Merkelizing block data Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md Work break down and review for Ben and Tomasz (epic coming up) This is required to integrate the proving system Milestone: Block discovery and retrieval Some initial work break down and milestones here - https://docs. - - - - 2023-07-31 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Nomos 31st July -[Network implementation and Mixnet]: -Research -Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder. - - - - 2023-07-31 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - vc::Deep Research milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission proposed solution section milestone (15%, 2023/08/31) Nimbus Tor-push PoC establishing torswitch and testing code milestone (15%, 2023/11/30) paper on Tor push validator privacy addressed feedback on current version of paper vsu::P2P nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH&rsquo;s EIP-4844 Merged IDontWant ( https://github. - - - - 2023-07-31 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Docs Milestone: Docs general improvement/incorporating feedback (continuous) next: rewrite docs in British English Milestone: Running nwaku in the cloud next: publish guides for Digital Ocean, Oracle, Fly. - - - - 2023-07-24 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Research -Milestone 1: Understanding Data Availability (DA) Problem High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris. - - - - 2023-07-24 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones. - - - - 2023-07-21 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Fri, 21 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Codex update 07/12/2023 to 07/21/2023 Overall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc&hellip; -Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. 
- - - - 2023-07-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Mon, 17 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Last week -vc Vac day in Paris (13th) vc::Deep Research working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Paris offsite Paris (all CCs) vsu::Tokenomics Bugs found and solved in the SNT staking contract attend events in Paris vsu::Distributed Systems Testing Events in Paris QoS on all four infras Continue work on theoretical gossipsub analysis (varying regular graph sizes) Peer extraction using WLS (almost finished) Discv5 testing Wakurtosis CI improvements Provide offline data vip::zkVM onboarding new researcher Prepared and presented ZKVM work during VAC offsite Deep research on Nova vs Stark in terms of performance and related open questions researching Sangria Worked on NEscience document ( https://www. - - - - 2023-07-12 Innovation Lab Weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Wed, 12 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Logos Lab 12th of July Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-07-10 Vac Weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - Mon, 10 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - vc::Deep Research refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192 working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Prepared Paris talks Implemented perf protocol to compare the performances with other libp2ps https://github. - - - - Vac Milestones Overview - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Mon, 01 Jan 0001 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Overview Notion Page - Information copied here for now -Info Structure of milestone names: vac:&lt;unit&gt;:&lt;tag&gt;:&lt;for_project&gt;:&lt;title&gt;_&lt;counter&gt; -vac indicates it is a vac milestone unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc tag tags a specific area / project / epic within the respective vac unit, e. - - - - + https://roadmap.logos.co/ + This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. + Tue, 22 Aug 2023 08:20:28 GMT + + Welcome to Quartz 4 + https://roadmap.logos.co/index_default + https://roadmap.logos.co/index_default + Quartz is a fast, batteries-included static-site generator that transforms Markdown content into fully functional websites. Thousands of students, developers, and teachers are already using Quartz to publish personal notes, wikis, and digital gardens to the web. + Tue, 22 Aug 2023 08:20:28 GMT + + Layout + https://roadmap.logos.co/layout + https://roadmap.logos.co/layout + Certain emitters may also output HTML files. To enable easy customization, these emitters allow you to fully rearrange the layout of the page. 
The default page layouts can be found in quartz. + Tue, 22 Aug 2023 08:20:28 GMT + + Migrating from Quartz 3 + https://roadmap.logos.co/migrating-from-Quartz-3 + https://roadmap.logos.co/migrating-from-Quartz-3 + As you already have Quartz locally, you don’t need to fork or clone it again. Simply just checkout the alpha branch, install the dependencies, and import your old vault. + Tue, 22 Aug 2023 08:20:28 GMT + + Philosophy of Quartz + https://roadmap.logos.co/philosophy + https://roadmap.logos.co/philosophy + A garden should be a true hypertext § The garden is the web as topology. Every walk through the garden creates new paths, new meanings, and when we add things to the garden we add them in a way that allows many future, unpredicted relationships. + Tue, 22 Aug 2023 08:20:28 GMT + + Quartz Showcase + https://roadmap.logos.co/showcase + https://roadmap.logos.co/showcase + Want to see what Quartz can do? Here are some cool community gardens: Quartz Documentation (this site!) Jacky Zhao’s Garden Brandon Boswell’s Garden Scaling Synthesis - A hypertext research notebook AWAGMI Intern Notes Course notes for Information Technology Advanced Theory Data Dictionary 🧠 sspaeti. + Tue, 22 Aug 2023 08:20:28 GMT + + Upgrading Quartz + https://roadmap.logos.co/upgrading + https://roadmap.logos.co/upgrading + Note This is specifically a guide for upgrading Quartz 4 version to a more recent update. If you are coming from Quartz 3, check out the migration guide for more info. + Tue, 22 Aug 2023 08:20:28 GMT + + Components + https://roadmap.logos.co/tags/component + https://roadmap.logos.co/tags/component + Want to create your own custom component? Check out the advanced guide on creating components for more information. + Tue, 22 Aug 2023 08:20:28 GMT + + Comms Milestones Overview + https://roadmap.logos.co/roadmap/acid/milestones-overview + https://roadmap.logos.co/roadmap/acid/milestones-overview + Comms Roadmap Comms Projects Comms planner deadlines . + Thu, 17 Aug 2023 00:00:00 GMT + + Codex Milestones Overview + https://roadmap.logos.co/roadmap/codex/milestones-overview + https://roadmap.logos.co/roadmap/codex/milestones-overview + Milestones § Zenhub Tracker Miro Tracker . + Mon, 07 Aug 2023 00:00:00 GMT + + Innovation Lab Milestones Overview + https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview + https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview + iLab Milestones can be found on the Notion Page. + Thu, 17 Aug 2023 00:00:00 GMT + + Nomos Milestones Overview + https://roadmap.logos.co/roadmap/nomos/milestones-overview + https://roadmap.logos.co/roadmap/nomos/milestones-overview + Milestones Overview Notion Page. + Thu, 17 Aug 2023 00:00:00 GMT + + Vac Roadmap + https://roadmap.logos.co/roadmap/vac/ + https://roadmap.logos.co/roadmap/vac/ + Welcome to the Vac Roadmap Overview. + Tue, 22 Aug 2023 08:20:28 GMT + + Vac Milestones Overview + https://roadmap.logos.co/roadmap/vac/milestones-overview + https://roadmap.logos.co/roadmap/vac/milestones-overview + Overview Notion Page - Information copied here for now Info § Structure of milestone names: § vac:<unit>:<tag>:<for_project>:<title>_<counter> vac indicates it is a vac milestone unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc tag tags a specific area / project / epic within the respective vac unit, e. + Thu, 17 Aug 2023 20:15:32 GMT + + Waku Roadmap + https://roadmap.logos.co/roadmap/waku/ + https://roadmap.logos.co/roadmap/waku/ + Welcome to the Waku Roadmap Overview. 
+ Tue, 22 Aug 2023 08:20:28 GMT + + Milestone: Waku Network supports 10k Users + https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users + https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users + %%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#BB2528', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7C0000', 'lineColor': '#F8B229', 'secondaryColor': '#006100', 'tertiaryColor': '#fff' } } }%% gantt dateFormat YYYY-MM-DD section Scaling 10k Users :done, 2023-01-20, 2023-07-31 Completion Deliverable § TBD Epics § Github Issue Tracker . + Mon, 07 Aug 2023 00:00:00 GMT + + Waku Milestones Overview + https://roadmap.logos.co/roadmap/waku/milestones-overview + https://roadmap.logos.co/roadmap/waku/milestones-overview + 90% - Waku Network support for 10k users 80% - Waku Network support for 1MM users 65% - Restricted-run (light node) protocols are production ready 60% - Peer management strategy for relay and light nodes are defined and implemented 10% - Quality processes are implemented for nwaku and go-waku 80% - Define and track network and community metrics for continuous monitoring improvement 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties) 15% - Dogfooding of RLN by platforms has started 06% - First protocol to incentivize operators has been defined . + Mon, 07 Aug 2023 00:00:00 GMT + + 2023-08-02 Acid weekly + https://roadmap.logos.co/roadmap/acid/updates/2023-08-02 + https://roadmap.logos.co/roadmap/acid/updates/2023-08-02 + Leads roundup - acid § Al / Comms Status app relaunch comms campaign plan in the works. Approx. date for launch 31.08. Logos comms + growth plan post launch is next up TBD. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-09 Acid weekly + https://roadmap.logos.co/roadmap/acid/updates/2023-08-09 + https://roadmap.logos.co/roadmap/acid/updates/2023-08-09 + Top level priorities: § Logos Growth Plan Status Relaunch Launch of LPE Podcasts (Target: Every week one podcast out) Hiring: TD studio and DC studio roles Movement Building: § Logos collective comms plan skeleton ready - will be applied for all BUs as next step Goal is to have plan + overview to set realistic KPIs and expectations Discord Server update on various views Status relaunch comms plan is ready for input from John et al. + Wed, 09 Aug 2023 00:00:00 GMT + + 2023-07-21 Codex weekly + https://roadmap.logos.co/roadmap/codex/updates/2023-07-21 + https://roadmap.logos.co/roadmap/codex/updates/2023-07-21 + Codex update 07/12/2023 to 07/21/2023 § Overall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc… Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. 
+ Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-01 Codex weekly + https://roadmap.logos.co/roadmap/codex/updates/2023-08-01 + https://roadmap.logos.co/roadmap/codex/updates/2023-08-01 + Codex update Aug 1st § Client § Milestone: Merkelizing block data § Initial design writeup metadata-overhead.md Work break down and review for Ben and Tomasz (epic coming up) This is required to integrate the proving system Milestone: Block discovery and retrieval § Some initial work break down and milestones here - edit Initial analysis of block discovery - 1067876 Initial block discovery simulator - block-discovery-sim Milestone: Distributed Client Testing § Lots of work around log collection/analysis and monitoring Details here 41 Marketplace § Milestone: L2 § Taiko L2 integration This is a first try of running against an L2 Mostly done, waiting on related fixes to land before merge - 483 Milestone: Reservations and slot management § Lots of work around slot reservation and queuing 455 Remote auditing § Milestone: Implement Poseidon2 § First pass at an implementation by Balazs private repo, but can give access if anyone is interested Milestone: Refine proving system § Lost of thinking around storage proofs and proving systems private repo, but can give access if anyone is interested DAS § Milestone: DHT simulations § Implementing a DHT in Python for the DAS simulator. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-11 Codex weekly + https://roadmap.logos.co/roadmap/codex/updates/2023-08-11 + https://roadmap.logos.co/roadmap/codex/updates/2023-08-11 + Codex update August 11 § Client § Milestone: Merkelizing block data § Initial Merkle Tree implementation - 504 Work on persisting/serializing Merkle Tree is underway, PR upcoming Milestone: Block discovery and retrieval § Continued analysis of block discovery and retrieval - _KOAm8kNQamMx-lkQvw-Iw?both=#fn5 Reviewing papers on peers sampling and related topics Wormhole Peer Sampling paper Smoothcache Starting work on simulations based on the above work Milestone: Distributed Client Testing § Continuing working on log collection/analysis and monitoring Details here 41 More related issues/PRs: 20 20 Testing and debugging Condex in continuous testing environment Debugging continuous tests 44 pod labeling 39 Infra § Milestone: Kubernetes Configuration and Management § Move Dist-Tests cluster to OVH and define naming conventions Configure Ingress Controller for Kibana/Grafana Create documentation for Kubernetes management Configure Dist/Continuous-Tests Pods logs shipping Milestone: Continuous Testing and Labeling § Watch the Continuous tests demo Implement and configure Dist-Tests labeling Set up logs shipping based on labels Improve Docker workflows and add ‘latest’ tag Milestone: CI/CD and Synchronization § Set up synchronization by codex-storage Configure Codex Storage and Demo CI/CD environments Marketplace § Milestone: L2 § Taiko L2 integration Done but merge is blocked by a few issues - 483 Milestone: Marketplace Sales § Lots of cleanup and refactoring Finished refactoring state machine PR link Added support for loading node’s slots during Sale’s module start link DAS § Milestone: DHT simulations § Implementing a DHT in Python for the DAS simulator - py-dht. 
+ Thu, 17 Aug 2023 00:00:00 GMT + + 2023-07-12 Innovation Lab Weekly + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12 + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12 + Logos Lab 12th of July Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. Milestone: deliver the first transactional Waku Object called Payggy (attached some design screenshots). + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-02 Innovation Lab weekly + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02 + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02 + Logos Lab 2nd of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. The last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-17 <TEAM> weekly + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11 + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11 + Logos Lab 11th of August § Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. We merged the group chat but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from product perspective as well. + Thu, 17 Aug 2023 00:00:00 GMT + + 2023-07-24 Nomos weekly + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24 + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24 + Research Milestone 1: Understanding Data Availability (DA) Problem High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-07-31 Nomos weekly + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31 + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31 + Nomos 31st July [Network implementation and Mixnet]: Research Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-07 Nomos weekly + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07 + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07 + Nomos weekly report § Network implementation and Mixnet: § Research § Researched the Nym mixnet architecture in depth in order to design our prototype architecture. + Mon, 07 Aug 2023 00:00:00 GMT + + 2023-08-17 Nomos weekly + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14 + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14 + Nomos weekly report 14th August § Network Privacy and Mixnet § Research § Mixnet architecture discussions. Potential agreement on architecture not very different from PoC Mixnet preliminary design [Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2] Development § Mixnet PoC implementation starting [302] Implementation of mixnode: a core module for implementing a mixnode binary Implementation of mixnet-client: a client library for mixnet users, such as nomos-node Private PoS § No progress this week. 
+ Thu, 17 Aug 2023 00:00:00 GMT + + 2023-07-10 Vac Weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-07-10 + https://roadmap.logos.co/roadmap/vac/updates/2023-07-10 + vc::Deep Research refined deep research roadmaps 190, 192 working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Prepared Paris talks Implemented perf protocol to compare the performances with other libp2ps 925 vsu::Tokenomics Fixing bugs on the SNT staking contract; Definition of the first formal verification tests for the SNT staking contract; Slides for the Paris off-site vsu::Distributed Systems Testing Replicated message rate issue (still on it) First mockup of offline data Nomos consensus test working vip::zkVM hiring onboarding new researcher presentation on ECC during Logos Research Call (incl. + Sun, 16 Jul 2023 00:00:00 GMT + + 2023-07-17 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-07-17 + https://roadmap.logos.co/roadmap/vac/updates/2023-07-17 + Last week vc Vac day in Paris (13th) vc::Deep Research working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Paris offsite Paris (all CCs) vsu::Tokenomics Bugs found and solved in the SNT staking contract attend events in Paris vsu::Distributed Systems Testing Events in Paris QoS on all four infras Continue work on theoretical gossipsub analysis (varying regular graph sizes) Peer extraction using WLS (almost finished) Discv5 testing Wakurtosis CI improvements Provide offline data vip::zkVM onboarding new researcher Prepared and presented ZKVM work during VAC offsite Deep research on Nova vs Stark in terms of performance and related open questions researching Sangria Worked on NEscience document (Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e) zerokit: worked on PR for arc-circom vip::RLNP2P offsite Paris This week vc vc::Deep Research working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus working towards comprehensive current/related work study on gossipsub scaling vsu::P2P EthCC & Logos event Paris (all CCs) vsu::Tokenomics Attend EthCC and side events in Paris Integrate staking contracts with radCAD model Work on a new approach for Codex collateral problem vsu::Distributed Systems Testing Events in Paris Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report Restructure the Analysis script and start modelling Status control messages Split Wakurtosis analysis module into separate repository (delayed) Deliver simulation results (incl fixing discv5 error with new Kurtosis version) Second iteration Nomos CI vip::zkVM Continue researching on Nova open questions and Sangria Draft the benchmark document (by the end of the week) research hardware for benchmarks research Halo2 cont’ zerokit: merge a PR for deployment of arc-circom deal with arc-circom master fail vip::RLNP2P offsite paris blockers vip::zkVM:zerokit: ark-circom deployment to crates io; contact to ark-circom team . 
+ Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-03 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-07-24 + https://roadmap.logos.co/roadmap/vac/updates/2023-07-24 + NOTE: This is a first experimental version moving towards the new reporting structure: Last week vc vc::Deep Research milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission related work section milestone (15%, 2023/08/31) Nimbus Tor-push PoC basic torpush encode/decode ( 1 ) milestone (15%, 2023/11/30) paper on Tor push validator privacy (focus on Tor-push PoC) vsu::P2P admin/misc EthCC (all CCs) vsu::Tokenomics admin/misc Attended EthCC and side events in Paris milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management Kicked off a new approach for Codex collateral problem milestone (50%, 2023/08/30) SNT staking smart contract Integrated SNT staking contracts with Python milestone (50%, 2023/07/14) SNT litepaper (delayed) milestone(30%, 2023/09/29) Nomos Token: requirements and constraints vsu::Distributed Systems Testing milestone (95%, 2023/07/31) Wakurtosis Waku Report Add timout to injection async call in WLS to avoid further issues (PR #139 139) Plotting & analyse 100 msg/s off line Prometehus data milestone (90%, 2023/07/31) Nomos CI testing fixed errors in Nomos consensus simulation milestone (30%, …) gossipsub model analysis add config options to script, allowing to load configs that can be directly compared to Wakurtosis results added support for small world networks admin/misc Interviews & reports for SE and STA positions EthCC (1 CC) vip::zkVM milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) (write ups will be available here: zkVM-cd358fe429b14fa2ab38ca42835a8451) Solved the open questions on Nova adn completed the document (will update the page) Reviewed Nescience and working on a document Reviewed partly the write up on FHE writeup for Nova and Sangria; research on super nova reading a new paper revisiting Nova (969) milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations zkvm Researching Nova to understand the folding technique for ZKVM adaptation zerokit Rostyslav became circom-compat maintainer vip::RLNP2P milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro completed milestone (95%, 2023/07/31) RLN-Relay Waku production readiness admin/misc EthCC + offsite This week vc vc::Deep Research milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission working on contributions section, based on X1DoBHtYTtuGqYg0qK4zJw milestone (15%, 2023/08/31) Nimbus Tor-push PoC working on establishing a connection via nim-libp2p tor-transport setting up goerli test node (cont’) milestone (15%, 2023/11/30) paper on Tor push validator privacy continue working on paper vsu::P2P milestone (…) Implement ChokeMessage for GossipSub Continue “limited flood publishing” (911) vsu::Tokenomics admin/misc: (3 CC days off) Catch up with EthCC talks that we couldn’t attend (schedule conflicts) milestone (50%, 2023/07/14) SNT litepaper Start building the SNT agent-based simulation vsu::Distributed Systems Testing milestone (100%, 2023/07/31) Wakurtosis Waku Report finalize simulations finalize report milestone (100%, 2023/07/31) Nomos CI testing finalize milestone milestone (30%, …) gossipsub model analysis Incorporate Status control messages admin/misc Interviews & reports for SE and STA positions EthCC (1 CC) vip::zkVM milestone(50%, 2023/08/31) background/research on 
existing proof systems (nova, sangria…) Refine the Nescience WIP and FHE documents research HyperNova milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks zkvm zerokit circom: reach an agreement with other maintainers on master branch situation vip::RLNP2P maintenance investigate why docker builds of nwaku are failing [zerokit dependency related] documentation on how to use rln for projects interested (console) milestone (95%, 2023/07/31) RLN-Relay Waku production readiness revert rln bandwidth reduction based on offsite discussion, move to different validator blockers . + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-07-31 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-07-31 + https://roadmap.logos.co/roadmap/vac/updates/2023-07-31 + vc::Deep Research milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission proposed solution section milestone (15%, 2023/08/31) Nimbus Tor-push PoC establishing torswitch and testing code milestone (15%, 2023/11/30) paper on Tor push validator privacy addressed feedback on current version of paper vsu::P2P nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH’s EIP-4844 Merged IDontWant (934) & Limit flood publishing (911) 𝕏 This wraps up the “mandatory” optimizations for 4844. + Thu, 03 Aug 2023 00:00:00 GMT + + 2023-08-07 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-08-07 + https://roadmap.logos.co/roadmap/vac/updates/2023-08-07 + More info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week): Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac week 32 August 7th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Improve gossipsub DDoS resistance 920 vac:p2p:nim-chronos:vac:maintenance Remove hard-coded ports from test 429 Investigate flaky test using REUSE_PORT vsu::Tokenomics (…) vsu::Distributed Systems Testing vac:dst:wakurtosis:waku:techreport delivered: Wakurtosis Tech Report v2 (edit?usp=sharing) vac:dst:wakurtosis:vac:rlog working on research log post on Waku Wakurtosis simulations vac:dst:gsub-model:status:control-messages delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption) vac:dst:gsub-model:vac:refactoring Refactoring and bug fixes introduced and tested 2 new analytical models vac:dst:wakurtosis:waku:topology-analysis delivered: extracted into separate module, independent of wls message vac:dst:wakurtosis:nomos:ci-integration_02 planning vac:dst:10ksim:vac:10ksim-bandwidth-test planning; check usage of new codex simulator tool (cs-codex-dist-tests) vip::zkVM vac:zkvm::vac:research-existing-proof-systems 90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me 50% FHE review - needs to be refined and summarized finished SuperNova writeup ( SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 ) researched starky 80% Halo2 notes ( halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 ) vac:zkvm::vac:proof-system-benchmarks More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level Viewed some circuits on Nova and Poseidon Read through Halo2 code (and Poseidon code) from Axiom vip::RLNP2P vac:acz:rlnp2p:waku:production-readiness Waku rln contract registry - 3 mark duplicated messages as spam - 
1867 use waku-org/waku-rln-contract as a submodule in nwaku - 1884 vac:acz:zerokit:vac:maintenance Fixed atomic_operation ffi edge case error - 195 docs cleanup - 196 fixed version tags - 194 released zerokit v0. + Mon, 07 Aug 2023 00:00:00 GMT + + 2023-08-17 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-08-14 + https://roadmap.logos.co/roadmap/vac/updates/2023-08-14 + Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac week 33 August 14th § vsu::P2P § vac:p2p:nim-libp2p:vac:maintenance § Improve gossipsub DDoS resistance 920 delivered: Perf protocol 925 delivered: Test-plans for the perf protocol perf-nim Bandwidth estimate as a parameter (waiting for final review) 941 vac:p2p:nim-chronos:vac:maintenance § delivered: Remove hard-coded ports from test 429 delivered: fixed flaky test using REUSE_PORT 438 vsu::Tokenomics § admin/misc: (5 CC days off) vac:tke::codex:economic-analysis § Filecoin economic structure and Codex token requirements vac:tke::status:SNT-staking § tests with the contracts vac:tke::nomos:economic-analysis § resume discussions with Nomos team vsu::Distributed Systems Testing (DST) § vac:dst:wakurtosis:waku:techreport § 1st Draft of Wakurtosis Research Blog (123) Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2. + Thu, 17 Aug 2023 00:00:00 GMT + + 2023-08-21 Vac weekly + https://roadmap.logos.co/roadmap/vac/updates/2023-08-21 + https://roadmap.logos.co/roadmap/vac/updates/2023-08-21 + Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac Github Repos: Vac-Repositories-75f7feb3861048f897f0fe95ead08b06 Vac week 34 August 21th § vsu::P2P § vac:p2p:nim-libp2p:vac:maintenance Test-plans for the perf protocol (99%: need to find why the executable doesn’t work) 262 WebRTC: Merge all protocols (60%: slowed down by some complications and bad planning with Mbed-TLS) 3 WebRTC: DataChannel (25%) vsu::Tokenomics § admin/misc: (3 CC days off) vac:tke::codex:economic-analysis Call w/ Codex on token incentives, business analysis of Filecoin vac:tke::status:SNT-staking Bug fixes for tests for the contracts vac:tke::nomos:economic-analysis Narrowed focus to: 1) quantifying bribery attacks, 2) assessing how to min risks and max privacy of delegated staking vac:tke::waku:economic-analysis Caught up w/ Waku team on RLN, adopting a proactive effort to pitch them solutions vsu::Distributed Systems Testing (DST) § vac:dst:wakurtosis:vac:rlog Pushed second draft and figures (DST-Wakurtosis) vac:dst:shadow:vac:basic-shadow-simulation Run 10K simulation of basic gossipsub node vac:dst:gsub-model:status:control-messages Got access to status superset vac:dst:analysis:nomos:nomos-simulation-analysis Basic CLI done, json to csv, can handle 10k nodes vac:dst:wakurtosis:waku:topology-analysis Collection + analysis: now supports all waku protocols, along with relay Cannot get gossip-sub peerage from waku or prometheus (working on getting info from gossipsub layer) vac:dst:wakurtosis:waku:techreport_02 Merged 4 pending PRs; master now supports regular graphs vac:dst:eng:vac:bundle-simulation-data Run 1 and 10 rate simulations. + Mon, 21 Aug 2023 00:00:00 GMT + + 2023-07-24 Waku weekly + https://roadmap.logos.co/roadmap/waku/updates/2023-07-24 + https://roadmap.logos.co/roadmap/waku/updates/2023-07-24 + Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones. 
Docs § Milestone: Foundation for Waku docs (done) § achieved: § overall layout concept docs community/showcase pages Milestone: Foundation for node operator docs (done) § achieved: § nodes overview page guide for running nwaku (binaries, source, docker) peer discovery config guide reference docs for config methods and options Milestone: Foundation for js-waku docs § achieved: § js-waku overview + installation guide lightpush + filter guide store guide @waku/create-app guide next: § improve @waku/react guide blocker: § polyfills issue with js-waku Milestone: Docs general improvement/incorporating feedback (continuous) § Milestone: Running nwaku in the cloud § Milestone: Add Waku guide to learnweb3. + Fri, 04 Aug 2023 00:00:00 GMT + + 2023-07-31 Waku weekly + https://roadmap.logos.co/roadmap/waku/updates/2023-07-31 + https://roadmap.logos.co/roadmap/waku/updates/2023-07-31 + Docs § Milestone: Docs general improvement/incorporating feedback (continuous) § next: § rewrite docs in British English Milestone: Running nwaku in the cloud § next: § publish guides for Digital Ocean, Oracle, Fly. + Fri, 04 Aug 2023 00:00:00 GMT + + 2023-08-06 Waku weekly + https://roadmap.logos.co/roadmap/waku/updates/2023-08-06 + https://roadmap.logos.co/roadmap/waku/updates/2023-08-06 + Milestones for current works are created and used. Next steps are: Refine scope of research work for rest of the year and create matching milestones for research and waku clients Review work not coming from research and setting dates Note that format matches the Notion page but can be changed easily as it’s scripted nwaku § Release Process Improvements {E:2023-qa} achieved: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible next: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images blocker: PostgreSQL {E:2023-10k-users} achieved: Docker compose with nwaku + postgres + prometheus + grafana + postgres_exporter 3 next: Carry on with stress testing Autosharding v1 {E:2023-1mil-users} achieved: feedback/update cycles for FILTER & LIGHTPUSH next: New fleet, updating ENR from live subscriptions and merging blocker: Architecturally it seams difficult to send the info to Discv5 from JSONRPC for the Waku app. + Tue, 08 Aug 2023 00:00:00 GMT + + 2023-08-14 Waku weekly + https://roadmap.logos.co/roadmap/waku/updates/2023-08-14 + https://roadmap.logos.co/roadmap/waku/updates/2023-08-14 + 2023-08-14 Waku weekly § Epics § Waku Network Can Support 10K Users {E:2023-10k-users} All software has been delivered. 
Pending items are: Running stress testing on PostgreSQL to confirm performance gain 1894 Setting up a staging fleet for Status to try static sharding Running simulations for Store protocol: commitment and probably move this to 1mil epic Eco Dev § Aug 2023 {E:2023-eco-growth} achieved: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub next: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning blocker: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel Docs § Advanced docs for js-waku next: document notes/recommendations for NodeJS, begin docs on js-waku encryption nwaku § Release Process Improvements {E:2023-qa} achieved: minor CI fixes and improvements next: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images PostgreSQL {E:2023-10k-users} achieved: Learned that the insertion rate is constrained by the relay protocol. + Thu, 17 Aug 2023 00:00:00 GMT + + \ No newline at end of file diff --git a/index_default.html b/index_default.html new file mode 100644 index 000000000..3d72f5821 --- /dev/null +++ b/index_default.html @@ -0,0 +1,97 @@ + +Welcome to Quartz 4

Quartz is a fast, batteries-included static-site generator that transforms Markdown content into fully functional websites. Thousands of students, developers, and teachers are already using Quartz to publish personal notes, wikis, and digital gardens to the web.

+

🪴 Get Started

+

Quartz requires at least Node v18.14 to function correctly. Ensure you have this installed on your machine before continuing.
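As a quick sanity check before continuing, you can confirm the installed version from your terminal (a minimal sketch; how you install or manage Node, e.g. via nvm or a system package manager, is up to you):

# Should print v18.14.0 or newer
node --version
# npm ships with Node and is needed for the commands below
npm --version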

+

Then, in your terminal of choice, enter the following commands line by line:

+
git clone https://github.com/jackyzha0/quartz.git
+cd quartz
+npm i
+npx quartz create
+

This will guide you through initializing your Quartz with content. Once you’ve done so, see how to:

+
  1. Author content in Quartz
  2. Configure Quartz’s behaviour
  3. Change Quartz’s layout
  4. Build and preview Quartz
  5. Host Quartz online
+
+
+
+

Info

+ +
+

Coming from Quartz 3? See the migration guide for the differences between Quartz 3 and Quartz 4 and how to migrate.
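As the migration guide notes, you don’t need to fork or clone Quartz again: check out the alpha branch in your existing local copy, install the dependencies, and import your old vault. A rough sketch of the first two steps (assuming your existing checkout still points at the upstream repository as origin):

# Switch your existing Quartz 3 checkout to the Quartz 4 (alpha) branch
git fetch origin
git checkout alpha
# Reinstall dependencies for the new version
npm i

The vault import itself is covered in the migration guide.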

+
+

🔧 Features

+ +

For a comprehensive list of features, visit the features page. You can read more about the why behind these features on the philosophy page and a technical overview on the architecture page.

+

🚧 Troubleshooting + Updating

+

Having trouble with Quartz? Try searching for your issue using the search feature. If you haven’t already, upgrade to the newest version of Quartz to see if this fixes your issue.
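If you’re unsure how to upgrade, the Quartz CLI can typically pull in upstream changes for you; see the Upgrading Quartz page for the full procedure. A minimal sketch, assuming the standard Quartz 4 CLI:

# Fetch and apply the latest upstream Quartz changes
npx quartz update

Commit or stash your local changes first so the update doesn’t conflict with uncommitted work.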

+

If you’re still having trouble, feel free to submit an issue if you think you’ve found a bug, or ask for help in our Discord Community.

\ No newline at end of file diff --git a/indices/contentIndex.027ede60fcd9605cf267214a85541ca0.min.json b/indices/contentIndex.027ede60fcd9605cf267214a85541ca0.min.json deleted file mode 100644 index 3d956ed72..000000000 --- a/indices/contentIndex.027ede60fcd9605cf267214a85541ca0.min.json +++ /dev/null @@ -1 +0,0 @@ -{"/":{"title":"Logos Technical Roadmap and Activity","content":"This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. To learn more about the motivation, please visit the [Logos Collective Site](https://logos.co).\n\n## Navigation\n\n### Waku\n- [Milestones](roadmap/waku/milestones-overview.md)\n- [weekly updates](tags/waku-updates)\n\n### Codex\n- [Milestones](roadmap/codex/milestones-overview.md)\n- [weekly updates](tags/codex-updates)\n\n### Nomos\n- [Milestones](roadmap/nomos/milestones-overview.md)\n- [weekly updates](tags/nomos-updates)\n\n### Vac\n- [Milestones](roadmap/vac/milestones-overview.md)\n- [weekly updates](tags/vac-updates)\n\n### Innovation Lab\n- [Milestones](roadmap/innovation_lab/milestones_overview.md)\n- [weekly updates](tags/ilab-updates)\n### Comms (Acid Info)\n- [Milestones](roadmap/acid/milestones-overview.md)\n- [weekly updates](tags/acid-updates)\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":{"title":"CJK + Latex Support (测试)","content":"\n## Chinese, Japanese, Korean Support\n几乎在我们意识到之前,我们已经离开了地面。\n\n우리가 그것을 알기도 전에 우리는 땅을 떠났습니다.\n\n私たちがそれを知るほぼ前に、私たちは地面を離れていました。\n\n## Latex\n\nBlock math works with two dollar signs `$$...$$`\n\n$$f(x) = \\int_{-\\infty}^\\infty\n f\\hat(\\xi),e^{2 \\pi i \\xi x}\n \\,d\\xi$$\n\t\nInline math also works with single dollar signs `$...$`. 
For example, Euler's identity but inline: $e^{i\\pi} = 0$\n\nAligned equations work quite well:\n\n$$\n\\begin{aligned}\na \u0026= b + c \\\\ \u0026= e + f \\\\\n\\end{aligned}\n$$\n\nAnd matrices\n\n$$\n\\begin{bmatrix}\n1 \u0026 2 \u0026 3 \\\\\na \u0026 b \u0026 c\n\\end{bmatrix}\n$$\n\n## RTL\nMore information on configuring RTL languages like Arabic in the [config](config.md) page.\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/callouts":{"title":"Callouts","content":"\n## Callout support\n\nQuartz supports the same Admonition-callout syntax as Obsidian.\n\nThis includes\n- 12 Distinct callout types (each with several aliases)\n- Collapsable callouts\n\nSee [documentation on supported types and syntax here](https://help.obsidian.md/How+to/Use+callouts#Types).\n\n## Showcase\n\n\u003e [!EXAMPLE] Examples\n\u003e\n\u003e Aliases: example\n\n\u003e [!note] Notes\n\u003e\n\u003e Aliases: note\n\n\u003e [!abstract] Summaries \n\u003e\n\u003e Aliases: abstract, summary, tldr\n\n\u003e [!info] Info \n\u003e\n\u003e Aliases: info, todo\n\n\u003e [!tip] Hint \n\u003e\n\u003e Aliases: tip, hint, important\n\n\u003e [!success] Success \n\u003e\n\u003e Aliases: success, check, done\n\n\u003e [!question] Question \n\u003e\n\u003e Aliases: question, help, faq\n\n\u003e [!warning] Warning \n\u003e\n\u003e Aliases: warning, caution, attention\n\n\u003e [!failure] Failure \n\u003e\n\u003e Aliases: failure, fail, missing\n\n\u003e [!danger] Error\n\u003e\n\u003e Aliases: danger, error\n\n\u003e [!bug] Bug\n\u003e\n\u003e Aliases: bug\n\n\u003e [!quote] Quote\n\u003e\n\u003e Aliases: quote, cite\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/config":{"title":"Configuration","content":"\n## Configuration\nQuartz is designed to be extremely configurable. You can find the bulk of the configuration scattered throughout the repository depending on how in-depth you'd like to get.\n\nThe majority of configuration can be found under `data/config.yaml`. An annotated example configuration is shown below.\n\n```yaml {title=\"data/config.yaml\"}\n# The name to display in the footer\nname: Jacky Zhao\n\n# whether to globally show the table of contents on each page\n# this can be turned off on a per-page basis by adding this to the\n# front-matter of that note\nenableToc: true\n\n# whether to by-default open or close the table of contents on each page\nopenToc: false\n\n# whether to display on-hover link preview cards\nenableLinkPreview: true\n\n# whether to render titles for code blocks\nenableCodeBlockTitle: true \n\n# whether to render copy buttons for code blocks\nenableCodeBlockCopy: true \n\n# whether to render callouts\nenableCallouts: true\n\n# whether to try to process Latex\nenableLatex: true\n\n# whether to enable single-page-app style rendering\n# this prevents flashes of unstyled content and improves\n# smoothness of Quartz. 
More info in issue #109 on GitHub\nenableSPA: true\n\n# whether to render a footer\nenableFooter: true\n\n# whether backlinks of pages should show the context in which\n# they were mentioned\nenableContextualBacklinks: true\n\n# whether to show a section of recent notes on the home page\nenableRecentNotes: false\n\n# whether to display an 'edit' button next to the last edited field\n# that links to github\nenableGitHubEdit: true\nGitHubLink: https://github.com/jackyzha0/quartz/tree/hugo/content\n\n# whether to use Operand to power semantic search\n# IMPORTANT: replace this API key with your own if you plan on using\n# Operand search!\nenableSemanticSearch: false\noperandApiKey: \"REPLACE-WITH-YOUR-OPERAND-API-KEY\"\n\n# page description used for SEO\ndescription:\n Host your second brain and digital garden for free. Quartz features extremely fast full-text search,\n Wikilink support, backlinks, local graph, tags, and link previews.\n\n# title of the home page (also for SEO)\npage_title:\n \"🪴 Quartz 3.2\"\n\n# links to show in the footer\nlinks:\n - link_name: Twitter\n link: https://twitter.com/_jzhao\n - link_name: Github\n link: https://github.com/jackyzha0\n```\n\n### Code Block Titles\nTo add code block titles with Quartz:\n\n1. Ensure that code block titles are enabled in Quartz's configuration:\n\n ```yaml {title=\"data/config.yaml\", linenos=false}\n enableCodeBlockTitle: true\n ```\n\n2. Add the `title` attribute to the desired [code block\n fence](https://gohugo.io/content-management/syntax-highlighting/#highlighting-in-code-fences):\n\n ```markdown {linenos=false}\n ```yaml {title=\"data/config.yaml\"}\n enableCodeBlockTitle: true # example from step 1\n ```\n ```\n\n**Note** that if `{title=\u003cmy-title\u003e}` is included, and code block titles are not\nenabled, no errors will occur, and the title attribute will be ignored.\n\n### HTML Favicons\nIf you would like to customize the favicons of your Quartz-based website, you \ncan add them to the `data/config.yaml` file. The **default** without any set \n`favicon` key is:\n\n```html {title=\"layouts/partials/head.html\", linenostart=15}\n\u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n```\n\nThe default can be overridden by defining a value to the `favicon` key in your \n`data/config.yaml` file. For example, here is a `List[Dictionary]` example format, which is\nequivalent to the default:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon:\n - { rel: \"shortcut icon\", href: \"icon.png\", type: \"image/png\" }\n# - { ... } # Repeat for each additional favicon you want to add\n```\n\nIn this format, the keys are identical to their HTML representations.\n\nIf you plan to add multiple favicons generated by a website (see list below), it\nmay be easier to define it as HTML. Here is an example which appends the \n**Apple touch icon** to Quartz's default favicon:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon: |\n \u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n \u003clink rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\"\u003e\n```\n\nThis second favicon will now be used as a web page icon when someone adds your \nwebpage to the home screen of their Apple device. 
If you are interested in more \ninformation about the current and past standards of favicons, you can read \n[this article](https://www.emergeinteractive.com/insights/detail/the-essentials-of-favicons/).\n\n**Note** that all generated favicon paths, defined by the `href` \nattribute, are relative to the `static/` directory.\n\n### Graph View\nTo customize the Interactive Graph view, you can poke around `data/graphConfig.yaml`.\n\n```yaml {title=\"data/graphConfig.yaml\"}\n# if true, a Global Graph will be shown on home page with full width, no backlink.\n# A different set of Local Graphs will be shown on sub pages.\n# if false, Local Graph will be default on every page as usual\nenableGlobalGraph: false\n\n### Local Graph ###\nlocalGraph:\n # whether automatically generate a legend\n enableLegend: false\n \n # whether to allow dragging nodes in the graph\n enableDrag: true\n \n # whether to allow zooming and panning the graph\n enableZoom: true\n \n # how many neighbours of the current node to show (-1 is all nodes)\n depth: 1\n \n # initial zoom factor of the graph\n scale: 1.2\n \n # how strongly nodes should repel each other\n repelForce: 2\n\n # how strongly should nodes be attracted to the center of gravity\n centerForce: 1\n\n # what the default link length should be\n linkDistance: 1\n \n # how big the node labels should be\n fontSize: 0.6\n \n # scale at which to start fading the labes on nodes\n opacityScale: 3\n\n### Global Graph ###\nglobalGraph:\n\t# same settings as above\n\n### For all graphs ###\n# colour specific nodes path off of their path\npaths:\n - /moc: \"#4388cc\"\n```\n\n\n## Styling\nWant to go even more in-depth? You can add custom CSS styling and change existing colours through editing `assets/styles/custom.scss`. If you'd like to target specific parts of the site, you can add ids and classes to the HTML partials in `/layouts/partials`. \n\n### Partials\nPartials are what dictate what gets rendered to the page. Want to change how pages are styled and structured? You can edit the appropriate layout in `/layouts`.\n\nFor example, the structure of the home page can be edited through `/layouts/index.html`. To customize the footer, you can edit `/layouts/partials/footer.html`\n\nMore info about partials on [Hugo's website.](https://gohugo.io/templates/partials/)\n\nStill having problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n\n## Language Support\n[CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) comes out of the box with Quartz.\n\nWant to support languages that read from right-to-left (like Arabic)? Hugo (and by proxy, Quartz) supports this natively.\n\nFollow the steps [Hugo provides here](https://gohugo.io/content-management/multilingual/#configure-languages) and modify your `config.toml`\n\nFor example:\n\n```toml\ndefaultContentLanguage = 'ar'\n[languages]\n [languages.ar]\n languagedirection = 'rtl'\n title = 'مدونتي'\n weight = 1\n```\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/custom-Domain":{"title":"Custom Domain","content":"\n### Registrar\nThis step is only applicable if you are using a **custom domain**! If you are using a `\u003cYOUR-USERNAME\u003e.github.io` domain, you can skip this step.\n\nFor this last bit to take effect, you also need to create a CNAME record with the DNS provider you register your domain with (i.e. 
NameCheap, Google Domains).\n\nGitHub has some [documentation on this](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site), but the tldr; is to\n\n1. Go to your forked repository (`github.com/\u003cYOUR-GITHUB-USERNAME\u003e/quartz`) settings page and go to the Pages tab. Under \"Custom domain\", type your custom domain, then click **Save**.\n2. Go to your DNS Provider and create a CNAME record that points from your domain to `\u003cYOUR-GITHUB-USERNAME.github.io.` (yes, with the trailing period).\n\n\t![Example Configuration for Quartz](google-domains.png)*Example Configuration for Quartz*\n3. Wait 30 minutes to an hour for the network changes to kick in.\n4. Done!","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/editing":{"title":"Editing Content in Quartz","content":"\n## Editing \nQuartz runs on top of [Hugo](https://gohugo.io/) so all notes are written in [Markdown](https://www.markdownguide.org/getting-started/).\n\n### Folder Structure\nHere's a rough overview of what's what.\n\n**All content in your garden can found in the `/content` folder.** To make edits, you can open any of the files and make changes directly and save it. You can organize content into any folder you'd like.\n\n**To edit the main home page, open `/content/_index.md`.**\n\nTo create a link between notes in your garden, just create a normal link using Markdown pointing to the document in question. Please note that **all links should be relative to the root `/content` path**. \n\n```markdown\nFor example, I want to link this current document to `notes/config.md`.\n[A link to the config page](notes/config.md)\n```\n\nSimilarly, you can put local images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\nYou can also use wikilinks if that is what you are more comfortable with!\n\n### Front Matter\nHugo is picky when it comes to metadata for files. Make sure that your title is double-quoted and that you have a title defined at the top of your file like so. You can also add tags here as well.\n\n```yaml\n---\ntitle: \"Example Title\"\ntags:\n- example-tag\n---\n\nRest of your content here...\n```\n\n### Obsidian\nI recommend using [Obsidian](http://obsidian.md/) as a way to edit and grow your digital garden. It comes with a really nice editor and graphical interface to preview all of your local files.\n\nThis step is **highly recommended**.\n\n\u003e 🔗 Step 3: [How to setup your Obsidian Vault to work with Quartz](obsidian.md)\n\n## Previewing Changes\nThis step is purely optional and mostly for those who want to see the published version of their digital garden locally before opening it up to the internet. This is *highly recommended* but not required.\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)\n\nFor those who like to live life more on the edge, viewing the garden through Obsidian gets you pretty close to the real thing.\n\n## Publishing Changes\nNow that you know the basics of managing your digital garden using Quartz, you can publish it to the internet!\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)\n\nHaving problems? 
Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/hosting":{"title":"Deploying Quartz to the Web","content":"\n## Hosting on GitHub Pages\nQuartz is designed to be effortless to deploy. If you forked and cloned Quartz directly from the repository, everything should already be good to go! Follow the steps below.\n\n### Enable GitHub Actions\nBy default, GitHub disables workflows from running automatically on Forked Repostories. Head to the 'Actions' tab of your forked repository and Enable Workflows to setup deploying your Quartz site!\n\n![Enable GitHub Actions](github-actions.png)*Enable GitHub Actions*\n\n### Enable GitHub Pages\n\nHead to the 'Settings' tab of your forked repository and go to the 'Pages' tab.\n\n1. (IMPORTANT) Set the source to deploy from `master` (and not `hugo`) using `/ (root)`\n2. Set a custom domain here if you have one!\n\n![Enable GitHub Pages](github-pages.png)*Enable GitHub Pages*\n\n### Pushing Changes\nTo see your changes on the internet, we need to push it them to GitHub. Quartz is a `git` repository so updating it is the same workflow as you would follow as if it were just a regular software project.\n\n```shell\n# Navigate to Quartz folder\ncd \u003cpath-to-quartz\u003e\n\n# Commit all changes\ngit add .\ngit commit -m \"message describing changes\"\n\n# Push to GitHub to update site\ngit push origin hugo\n```\n\nNote: we specifically push to the `hugo` branch here. Our GitHub action automatically runs everytime a push to is detected to that branch and then updates the `master` branch for redeployment.\n\n### Setting up the Site\nNow let's get this site up and running. Never hosted a site before? No problem. Have a fancy custom domain you already own or want to subdomain your Quartz? That's easy too.\n\nHere, we take advantage of GitHub's free page hosting to deploy our site. Change `baseURL` in `/config.toml`. \n\nMake sure that your `baseURL` has a trailing `/`!\n\n[Reference `config.toml` here](https://github.com/jackyzha0/quartz/blob/hugo/config.toml)\n\n```toml\nbaseURL = \"https://\u003cYOUR-DOMAIN\u003e/\"\n```\n\nIf you are using this under a subdomain (e.g. `\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz`), include the trailing `/`. **You need to do this especially if you are using GitHub!**\n\n```toml\nbaseURL = \"https://\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz/\"\n```\n\nChange `cname` in `/.github/workflows/deploy.yaml`. Again, if you don't have a custom domain to use, you can use `\u003cYOUR-USERNAME\u003e.github.io`.\n\nPlease note that the `cname` field should *not* have any path `e.g. end with /quartz` or have a trailing `/`.\n\n[Reference `deploy.yaml` here](https://github.com/jackyzha0/quartz/blob/hugo/.github/workflows/deploy.yaml)\n\n```yaml {title=\".github/workflows/deploy.yaml\"}\n- name: Deploy \n uses: peaceiris/actions-gh-pages@v3 \n with: \n\tgithub_token: ${{ secrets.GITHUB_TOKEN }} # this can stay as is, GitHub fills this in for us!\n\tpublish_dir: ./public \n\tpublish_branch: master\n\tcname: \u003cYOUR-DOMAIN\u003e\n```\n\nHave a custom domain? [Learn how to set it up with Quartz ](custom%20Domain.md).\n\n### Ignoring Files\nOnly want to publish a subset of all of your notes? 
Don't worry, Quartz makes this a simple two-step process.\n\n❌ [Excluding pages from being published](ignore%20notes.md)\n\n---\n\nNow that your Quartz is live, let's figure out how to make Quartz really *yours*!\n\n\u003e Step 6: 🎨 [Customizing Quartz](config.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/ignore-notes":{"title":"Ignoring Notes","content":"\n### Quartz Ignore\nEdit `ignoreFiles` in `config.toml` to include paths you'd like to exclude from being rendered.\n\n```toml\n...\nignoreFiles = [ \n \"/content/templates/*\", \n \"/content/private/*\", \n \"\u003cyour path here\u003e\"\n]\n```\n\n`ignoreFiles` supports the use of Regular Expressions (RegEx) so you can ignore patterns as well (e.g. ignoring all `.png`s by doing `\\\\.png$`).\nTo ignore a specific file, you can also add the tag `draft: true` to the frontmatter of a note.\n\n```markdown\n---\ntitle: Some Private Note\ndraft: true\n---\n...\n```\n\nMore details in [Hugo's documentation](https://gohugo.io/getting-started/configuration/#ignore-content-and-data-files-when-rendering).\n\n### Global Ignore\nHowever, just adding to the `ignoreFiles` will only prevent the page from being access through Quartz. If you want to prevent the file from being pushed to GitHub (for example if you have a public repository), you need to also add the path to the `.gitignore` file at the root of the repository.","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/obsidian":{"title":"Obsidian Vault Integration","content":"\n## Setup\nObsidian is the preferred way to use Quartz. You can either create a new Obsidian Vault or link one that your already have.\n\n### New Vault\nIf you don't have an existing Vault, [download Obsidian](https://obsidian.md/) and create a new Vault in the `/content` folder that you created and cloned during the [setup](setup.md) step.\n\n### Linking an existing Vault\nThe easiest way to use an existing Vault is to copy all of your files (directory and hierarchies intact) into the `/content` folder.\n\n## Settings\nGreat, now that you have your Obsidian linked to your Quartz, let's fix some settings so that they play well.\n\n1. Under Options \u003e Files and Links, set the New link format to always use Absolute Path in Vault.\n2. Go to Settings \u003e Files \u0026 Links \u003e Turn \"on\" automatically update internal links.\n\n![Obsidian Settings](obsidian-settings.png)*Obsidian Settings*\n\n## Templates\nInserting front matter everytime you want to create a new Note gets annoying really quickly. Luckily, Obsidian supports templates which makes inserting new content really easily.\n\n**If you decide to overwrite the `/content` folder completely, don't remove the `/content/templates` folder!**\n\nHead over to Options \u003e Core Plugins and enable the Templates plugin. Then go to Options \u003e Hotkeys and set a hotkey for 'Insert Template' (I recommend `[cmd]+T`). 
That way, when you create a new note, you can just press the hotkey for a new template and be ready to go!\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/philosophy":{"title":"Quartz Philosophy","content":"\n\u003e “[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” — Richard Hamming\n\n## Why Quartz?\nHosting a public digital garden isn't easy. There are an overwhelming number of tutorials, resources, and guides for tools like [Notion](https://www.notion.so/), [Roam](https://roamresearch.com/), and [Obsidian](https://obsidian.md/), yet none of them have super easy to use *free* tools to publish that garden to the world.\n\nI've personally found that\n1. It's nice to access notes from anywhere\n2. Having a public digital garden invites open conversations\n3. It makes keeping personal notes and knowledge *playful and fun*\n\nI was really inspired by [Bianca](https://garden.bianca.digital/) and [Joel](https://joelhooks.com/digital-garden)'s digital gardens and wanted to try making my own.\n\n**The goal of Quartz is to make hosting your own public digital garden free and simple.** You don't even need your own website. Quartz does all of that for you and gives your own little corner of the internet.\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/preview-changes":{"title":"Preview Changes","content":"\nIf you'd like to preview what your Quartz site looks like before deploying it to the internet, here's exactly how to do that!\n\nNote that both of these steps need to be completed.\n\n## Install `hugo-obsidian`\nThis step will generate the list of backlinks for Hugo to parse. Ensure you have [Go](https://golang.org/doc/install) (\u003e= 1.16) installed.\n\n```bash\n# Install and link `hugo-obsidian` locally\ngo install github.com/jackyzha0/hugo-obsidian@latest\n```\n\nIf you are running into an error saying that `command not found: hugo-obsidian`, make sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize hugo-obsidian as an executable.\n\nAfterwards, start the Hugo server as shown above and your local backlinks and interactive graph should be populated!\n\n## Installing Hugo\nHugo is the static site generator that powers Quartz. [Install Hugo with \"extended\" Sass/SCSS version](https://gohugo.io/getting-started/installing/) first. Then,\n\n```bash\n# Navigate to your local Quartz folder\ncd \u003clocation-of-your-local-quartz\u003e\n\n# Start local server\nmake serve\n\n# View your site in a browser at http://localhost:1313/\n```\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/search":{"title":"Search","content":"\nQuartz supports two modes of searching through content.\n\n## Full-text\nFull-text search is the default in Quartz. It produces results that *exactly* match the search query. This is easier to setup but usually produces lower quality matches.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: false\n```\n\n## Natural Language\nNatural language search is powered by [Operand](https://operand.ai/). It understands language like a person does and finds results that best match user intent. 
In this sense, it is closer to how Google Search works.\n\nNatural language search tends to produce higher quality results than full-text search.\n\nHere's how to set it up.\n\n1. Create an Operand Account on [their website](https://operand.ai/).\n2. Go to Dashboard \u003e Settings \u003e Integrations.\n3. Follow the steps to setup the GitHub integration. Operand needs access to GitHub in order to index your digital garden properly!\n4. Head over to Dashboard \u003e Objects and press `(Cmd + K)` to open the omnibar and select 'Create Collection'.\n\t1. Set the 'Collection Label' to something that will help you remember it.\n\t2. You can leave the 'Parent Collection' field empty.\n5. Click into your newly made Collection.\n\t1. Press the 'share' button that looks like three dots connected by lines.\n\t2. Set the 'Interface Type' to `object-search` and click 'Create'.\n\t3. This will bring you to a new page with a search bar. Ignore this for now.\n6. Go back to Dashboard \u003e Settings \u003e API Keys and find your Quartz-specific Operand API key under 'Other keys'.\n\t1. Copy the key (which looks something like `0e733a7f-9b9c-48c6-9691-b54fa1c8b910`).\n\t2. Open `data/config.yaml`. Set `enableSemanticSearch` to `true` and `operandApiKey` to your copied key.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: true\noperandApiKey: \"0e733a7f-9b9c-48c6-9691-b54fa1c8b910\"\n```\n7. Make a commit and push your changes to GitHub. See the [[hosting|hosting]] page if you haven't done this already.\n\t1. This step is *required* for Operand to be able to properly index your content. \n\t2. Head over to Dashboard \u003e Objects and select the collection that you made earlier\n8. Press `(Cmd + K)` to open the omnibar again and select 'Create GitHub Repo'\n\t1. Set the 'Repository Label' to `Quartz`\n\t2. Set the 'Repository Owner' to your GitHub username\n\t3. Set the 'Repository Ref' to `master`\n\t4. Set the 'Repository Name' to the name of your repository (usually just `quartz` if you forked the repository without changing the name)\n\t5. Leave 'Root Path' and 'Root URL' empty\n9. Wait for your repository to index and enjoy natural language search in Quartz! Operand refreshes the index every 2h so all you need to do is just push to GitHub to update the contents in the search.","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/setup":{"title":"Setup","content":"\n## Making your own Quartz\nSetting up Quartz requires a basic understanding of `git`. If you are unfamiliar, [this resource](https://resources.nwplus.io/2-beginner/how-to-git-github.html) is a great place to start!\n\n### Forking\n\u003e A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.\n\nNavigate to the GitHub repository for the Quartz project:\n\n📁 [Quartz Repository](https://github.com/jackyzha0/quartz)\n\nThen, Fork the repository into your own GitHub account. If you don't have an account, you can make on for free [here](https://github.com/join). More details about forking a repo can be found on [GitHub's documentation](https://docs.github.com/en/get-started/quickstart/fork-a-repo).\n\n### Cloning\nAfter you've made a fork of the repository, you need to download the files locally onto your machine. 
Ensure you have `git`, then type the following command replacing `YOUR-USERNAME` with your GitHub username.\n\n```shell\ngit clone https://github.com/YOUR-USERNAME/quartz\n```\n\n## Editing\nGreat! Now you have everything you need to start editing and growing your digital garden. If you're ready to start writing content already, check out the recommended flow for editing notes in Quartz.\n\n\u003e ✏️ Step 2: [Editing Notes in Quartz](editing.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/showcase":{"title":"Showcase","content":"\nWant to see what Quartz can do? Here are some cool community gardens :)\n\n- [Quartz Documentation (this site!)](https://quartz.jzhao.xyz/)\n- [Jacky Zhao's Garden](https://jzhao.xyz/)\n- [Scaling Synthesis - A hypertext research notebook](https://scalingsynthesis.com/)\n- [AWAGMI Intern Notes](https://notes.awagmi.xyz/)\n- [Shihyu's PKM](https://shihyuho.github.io/pkm/)\n- [Chloe's Garden](https://garden.chloeabrasada.online/)\n- [SlRvb's Site](https://slrvb.github.io/Site/)\n- [Course notes for Information Technology Advanced Theory](https://a2itnotes.github.io/quartz/)\n- [Brandon Boswell's Garden](https://brandonkboswell.com)\n- [Siyang's Courtyard](https://siyangsun.github.io/courtyard/)\n\nIf you want to see your own on here, submit a [Pull Request adding yourself to this file](https://github.com/jackyzha0/quartz/blob/hugo/content/notes/showcase.md)!\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/troubleshooting":{"title":"Troubleshooting and FAQ","content":"\nStill having trouble? Here are a list of common questions and problems people encounter when installing Quartz.\n\nWhile you're here, join our [Discord](https://discord.gg/cRFFHYye7t) :)\n\n### Does Quartz have Latex support?\nYes! See [CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) for a brief demo.\n\n### Can I use \\\u003cObsidian Plugin\\\u003e in Quartz?\nUnless it produces direct Markdown output in the file, no. There currently is no way to bundle plugin code with Quartz.\n\nThe easiest way would be to add your own HTML partial that supports the functionality you are looking for.\n\n### My GitHub pages is just showing the README and not Quartz\nMake sure you set the source to deploy from `master` (and not `hugo`) using `/ (root)`! See more in the [hosting](hosting.md) guide\n\n### Some of my pages have 'January 1, 0001' as the last modified date\nThis is a problem caused by `git` treating files as case-insensitive by default and some of your posts probably have capitalized file names. You can turn this off in your Quartz by running this command.\n\n```shell\n# in the root of your Quartz (same folder as config.toml)\ngit config core.ignorecase true\n\n# or globally (not recommended)\ngit config --global core.ignorecase true\n```\n\n### Can I publish only a subset of my pages?\nYes! Quartz makes selective publishing really easy. Heres a guide on [excluding pages from being published](ignore%20notes.md).\n\n### Can I host this myself and not on GitHub Pages?\nYes! All built files can be found under `/public` in the `master` branch. More details under [hosting](hosting.md).\n\n### `command not found: hugo-obsidian`\nMake sure you set your `GOPATH` correctly! 
This will allow your terminal to correctly recognize `hugo-obsidian` as an executable.\n\n```shell\n# Add the following 2 lines to your ~/.bash_profile\nexport GOPATH=/Users/$USER/go\nexport PATH=$GOPATH/bin:$PATH\n\n# In your current terminal, to reload the session\nsource ~/.bash_profile\n```\n\n### How come my notes aren't being rendered?\nYou probably forgot to include front matter in your Markdown files. You can either setup [Obsidian](obsidian.md) to do this for you or you need to manually define it. More details in [the 'how to edit' guide](editing.md).\n\n### My custom domain isn't working!\nWalk through the steps in [the hosting guide](hosting.md) again. Make sure you wait 30 min to 1 hour for changes to take effect.\n\n### How do I setup Google Analytics?\nYou can edit it in `config.toml` and either use a V3 (UA-) or V4 (G-) tag.\n\n### How do I change the content on the home page?\nTo edit the main home page, open `/content/_index.md`.\n\n### How do I change the colours?\nYou can change the theme by editing `assets/custom.scss`. More details on customization and themeing can be found in the [customization guide](config.md).\n\n### How do I add images?\nYou can put images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\n### My Interactive Graph and Backlinks aren't up to date\nBy default, the `linkIndex.json` (which Quartz needs to generate the Interactive Graph and Backlinks) are not regenerated locally. To set that up, see the guide on [local editing](editing.md)\n\n### Can I use React/Vue/some other framework?\nNot out of the box. You could probably make it work by editing `/layouts/_default/single.html` but that's not what Quartz is designed to work with. 99% of things you are trying to do with those frameworks you can accomplish perfectly fine using just vanilla HTML/CSS/JS.\n\n## Still Stuck?\nQuartz isn't perfect! If you're still having troubles, file an issue in the GitHub repo with as much information as you can reasonably provide. Alternatively, you can message me on [Twitter](https://twitter.com/_jzhao) and I'll try to get back to you as soon as I can.\n\n🐛 [Submit an Issue](https://github.com/jackyzha0/quartz/issues)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/updating":{"title":"Updating","content":"\nHaven't updated Quartz in a while and want all the cool new optimizations? On Unix/Mac systems you can run the following command for a one-line update! This command will show you a log summary of all commits since you last updated, press `q` to acknowledge this. Then, it will show you each change in turn and press `y` to accept the patch or `n` to reject it. Usually you should press `y` for most of these unless it conflicts with existing changes you've made! 
\n\n```shell\nmake update\n```\n\nOr, if you don't want the interactive parts and just want to force update your local garden (this assumes that you are okay with some of your personalizations being overridden!)\n\n```shell\nmake update-force\n```\n\nOr, manually check out the changes yourself.\n\n\u003e [!warning] Warning!\n\u003e\n\u003e If you customized the files in `data/`, or anything inside `layouts/`, your customization may be overwritten!\n\u003e Make sure you have a copy of these changes if you don't want to lose them.\n\n\n```shell\n# add Quartz as a remote host\ngit remote add upstream git@github.com:jackyzha0/quartz.git\n\n# index and fetch changes\ngit fetch upstream\ngit checkout -p upstream/hugo -- layouts .github Makefile assets/js assets/styles/base.scss assets/styles/darkmode.scss config.toml data \n```\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/requirements/overview":{"title":"Logos Network Requirements Overview","content":"\nThis document describes the requirements of the Logos Network.\n\n\u003e Network sovereignty is an extension of the collective sovereignty of the individuals within. \n\n\u003e Meaningful participation in the network should be achievable by affordable and accessible consumer-grade hardware.\n\n\u003e Privacy by default. \n\n\u003e A given CiC should have the option to gracefully exit the network and operate on its own.\n\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["requirements"]},"/private/roadmap/consensus/candidates/carnot/FAQ":{"title":"Frequently Asked Questions","content":"\n## Network Requirements and Assumptions\n\n### What assumptions do we need Waku to fulfill? - Corey\n\u003e `Moh:` Waku needs to fulfill the following requirements, taken from the Carnot paper:\n\n\u003e **Definition 3** (Probabilistic Reliable Dissemination). _After the GST, and when the leader is correct, all the correct nodes deliver the proposal sent by the leader (w.h.p)._\n\n\u003e **Definition 4** (Probabilistic Fulfillment). _After the GST, and when the current and previous leaders are correct, the number of votes collected by the current leader is $2c+1$ (w.h.p)._\n\n## Tradeoffs\n\n### I think the main clear disadvantage of such a scheme is the added latency of the multiple layers. - Alvaro\n\n\u003e `Moh:` The added latency will be O(log(n/C)), where C is the committee size. But I guess it will be hard to avoid it. Though it also depends on how fast the network layer (potentially Waku) propagates messages, and also on the execution time of the transaction.\n\n\u003e `Alvaro:` Well IIUC the only latency we are introducing is directly proportional to the levels of subcommittee nesting (ie the log(n/C)), which is understandably the price to pay. We have to make sure though that what we gain by introducing this is really worth the extra cost vs the typical committee formation via randao or perhaps VDFs\n\n\u003e `Moh:` Again, the typical committee formation with randao can reduce its wait time value to match our latency, but then it becomes vulnerable and fails if the network latency becomes greater than its slot interval. If they keep it too large, it may not fail but becomes slow. We won't have that problem. If an adversary has the power to slow down the network then their liveness will fail, whereas we won't have that issue.\n\n## How would you compare Aptos and Carnot? - Alvaro\n\n\u003e `Moh:` It is a variant of DiemBFT, Sui is based on Narwhal, and both cannot scale to more than a few hundred nodes. 
That is why they achieve that low latency.\n\n\u003e `Alvaro:` Yes, so they need to select a committee of that size in order to operate at that latency. What's wrong with selecting a committee vs Carnot's solution? This I'm asking genuinely to understand and because everyone will ask this question when we release.\n\n\u003e `Moh:` When you select a committee you have to wait for a time slot to make sure the result of consensus has propagated. Again, strong synchrony assumptions (slot time), formation of forks, and an increased PoS attack vector come into play.\nWithin a committee the protocol does not need a wait time, but for its results to get propagated, if scalability is to be achieved, then a wait time has to be added or signatures have to be collected from thousands of nodes.\n\n\u003e `Alvaro:` Can you elaborate?\n\n\u003e `Moh:` Ethereum (and any other protocol that runs the consensus in a single committee selected from a large group of nodes) has a wait time so that the output of the consensus propagates to all honest nodes before the next committee is selected. Else the next committee will fail, or only forks will be formed and chain length won't increase. But this wait time, as stated, increases latency and makes the protocol vulnerable, so Ethereum wants to avoid it to achieve responsiveness. To avoid wait time (add responsiveness) a protocol has to collect attestation signatures from 2/3rd of all nodes (not a single committee) to move to the second round (Carnot is already responsive). But aggregating and verifying thousands of signatures is expensive and time consuming. This is why they are working to improve BLS signatures. Instead, we have changed the consensus protocol in such a way that a small number of signatures need to be aggregated and verified to achieve responsiveness and fast finality. We can further improve performance by using the improved BLS signatures.\n\n\u003e One cannot achieve fast finality while running the consensus in a small committee, because attestation of a block within a single committee is not enough. This block can be averted if the leader of the next committee has not seen it. Therefore, there should be enough delay so that all honest nodes can see it. This is why we have this wait/slot time. Another issue is that a malicious leader from the next chosen committee can also avert a block of an honest leader, hence preventing honest leaders from getting rewards. If blocks of honest leaders are averted for a long time, the stake of malicious leaders will increase. Moreover, malicious leaders can delay blocks of honest nodes by making forks and averting them. Addressing these issues would make the protocol even more complex, while still lacking fast finality.\n\n## Data Distribution\n\n### How much failure rate of erasure code transmission are we expecting? Basically, what are the EC coding parameters that we expect to be sending such that we have some failure rate of transmission? Has that been looked into? - Dmitriy\n\u003e `Moh:` This is a great question and it points to the tension between failure rate and overhead. We have briefly looked into this (today Marcin @madxor and I discussed such cases), but we haven’t thoroughly analyzed this. In our case, the rate of failure also depends on committee size. We are looking into $10^{-3}$ to $10^{-6}$ probability of failure. And in this case, the coding overhead can be somewhere between 200% and 500%, approximately. 
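To make the failure-rate vs. overhead tension in this answer concrete, the sketch below models chunk delivery as independent Bernoulli trials and prints how the reconstruction-failure probability falls as the coding overhead grows. This is a back-of-the-envelope illustration only, not the analysis referenced in the discussion: the committee size of 500 comes from the numbers quoted here, while the per-node delivery probability `p`, the candidate thresholds `m`, and the independence assumption are assumptions made for this sketch.

```python
# Illustrative only: a toy model of the failure-rate vs. coding-overhead tradeoff.
# Assumptions (not from the source): a block is split into k coded chunks held by a
# committee of k nodes, any m distinct chunks rebuild it, and each chunk arrives
# independently with probability p.
from math import comb

def reconstruction_failure_prob(k: int, m: int, p: float) -> float:
    """P(fewer than m of the k chunks arrive) under independent deliveries."""
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i) for i in range(m))

k = 500   # committee size taken from the discussion above
p = 0.90  # assumed per-node delivery probability (purely illustrative)

for m in (450, 425, 400, 350, 300):
    overhead = k / m  # k chunks carry m chunks' worth of data -> k/m expansion
    fail = reconstruction_failure_prob(k, m, p)
    print(f"m={m:3d}  overhead={overhead:.2f}x  P(failure)≈{fail:.1e}")
```

Lowering the reconstruction threshold `m` (i.e., paying more overhead) drives the failure probability down sharply, which is the tradeoff being described above.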
This means for a committee size of 500 (while expecting receipt of messages from 251 correct nodes), for a failure rate of $10^{-6}$ a single node has to send \u003e 6Mb of data for a 1Mb of actual data. Though 5x overhead is large, it still prevent us from sending/receiving 500 Mb of data in return for a failure probability of 1 proposal out of 1 million. From the protocol perspective, we can address EC failures in multiple ways: a: Since the root committee only forwards the coded chunks only when they have successfully rebuilt the block. This means the root committee can be contacted to download additional coded chunks to decode the block. b: We allow this failure and let the leader be replaced but since there is proof that the failure is due to the reason that a decoder failed to reconstruct the block, therefore, the leader cannot be punished (if we chose to employ punishment in PoS). \n\n### How much data should a given block be. Are there limits on this and if so, what are they and what do they depend on? - Dmitriy\n\u003e `Moh:` This question can be answered during simulations and experiments over links of different bandwidths and latencies. We will test the protocol performances with different block sizes. As we know increasing the block size results in increased throughput as well as latency. What is the most appropriate block size can be determined once we observe the tradeoff between throughput vs latency.\n\n## Signature Propagation\n\n### Who sends the signatures up from a given committee? Do that have any leadered power within the committee? - Tanguy\n\u003e `Moh:` Each node in a committee multicasts its vote to all members of the parent committee. Since the size of the vote is small the bit complexity will be low. Introducing a leader within each committee will create a single point of failure within each committee. This is why we avoid maintaining a leader within each committee\n\n## Network Scale\n\n### What is our expected minimum number of nodes within the network? - Dmitriy\n\u003e `Moh:` For a small number of nodes we can have just a single committee. But I am not sure how many nodes will join our network \n\n## Byzantine Behavior\n\n### Can we also consider a flavor that adds attestation/attribution to misbehaving nodes? That will come at a price but there might be a set of use cases which would like to have lower performance with strong attribution. Not saying that it must be part of the initial design, but can be think-through/added later. - Marcin\n\u003e `Moh:` Attestation to misbehaving nodes is part of this protocol. For example, if a node sends an incorrect vote or if a leader proposes an invalid transaction, then this proof will be shared with the network to punish the misbehaving nodes (Though currently this is not part of pseudocode). But it is not possible to reliably prove the attestation of not participation.\n\n\u003e `Marcin:` Great, and definitely, we cannot attest that a node was not participating - I was not suggesting that;). But we can also think about extending the attestation for lazy-participants case (if it’s not already part of the protocol).\n\n\u003e `Moh:` OK, thanks for the clarification 😁 . Of course we can have this feature to forward the proof of participation of successor committees. In the first version of Carnot we had this feature as a sliding window. One could choose the size of the window (in terms of tree levels) for which a node should forward the proof of participation. In the most recent version the size of sliding window is 0. 
And it is 1 for the root committee. It means root committee members have to forward the proof of participation of their child committee members. Since I was able to prove protocol correctness without forwarding the proofs so we avoid it. But it can be part of the protocol without any significant changes in the protocol\n\n\u003e If the proof scheme is efficient ( as the results you presented) in practice and the cost of creating and verifying proofs is not significant then actually adding proofs can be good. But not required.\n\n### Also, how do you reward online validators / punish offline ones if you can't prove at the block level that someone attested or not? - Tanguy\n\u003e `Moh:` This is very tricky and so far no one has done it right (to my knowledge). Current reward mechanism for attestation, favours fast nodes.This means if malicious nodes in the network are fast, they can increase their stake in the network faster than the honest nodes and eventually take control of the network. Or in the case of Ethereum a Byzantine leader can include signature of malicious nodes more frequently in the proof of attestation, hence malicious nodes will be rewarded more frequently. Also let me add that I don't have definite answer to your question currently, but I think by revising the protocol assumptions, incentive mechanism and using a game theoretical approach this problem can be resolved.\n\n\u003e An honest node should wait for a specific number of children votes (to make sure everyone is voting on the same proposal) before voting but does not need to provide any cryptographic proof. Though we build a threshold signature from root committee members and it’s children but not from the whole tree. As long as enough number of nodes follow the the protocol we should be fine. I am working on protocol proofs. Also I think bugs should be discovered during development and testing phase. Changing protocol to detect potential bug might not be a good practice.\n\n### doesn't having randomly distributed malicious nodes (say there is a 20%) increase the odds that over a third of a committee end up being from those malicious ones? It seems intuitive: since a 20% at the global scale is always \u003c1/3, but when randomly distributed there is always non-zero chance they end up in a single group, thus affecting liveness more and more the closer we get to that global 1/3. Consequently, if I'm understanding the algorithm correctly, it would have worse liveness guarantees that classical pBFT, say with a randomly-selected commitee from the total set. - Alvaro\n\n\u003e `Alexander:` We assume that fraction of malicious nodes is $1/4$ and given we chooses comm. sizes, which will depend on total number of nodes, appropriately this guarantees that with high probability we are below $1/3$ in each committee.\n\n\u003e `Alvaro:` ok, but then both the global guarantee is below that current \"standard\" of 1/3 of malicious nodes and even then we are talking about non-zero probabilities that a comm has the power to slow down consensus via requiring reformation of comms (is this right?)\n\n\u003e `Alexander:` This is the price we pay to improve scalability. Also these probabilities of failure can be very low.\n\n### What happens in Carnot when one committee is taken over by \u003e1/3 intra-comm byzantine nodes? - Alvaro\n\n\u003e `Moh:` When there is a failure the overlay is recalculated. 
By gradually increasing the fault tolerance by a small value, the probability of failure of a committee slightly increases but upon recalculating the correct overlay, inactive nodes that caused the failure of previous overlay (when no committee has more than 1/3 Byzantine nodes) will be slashed.\n\n\n\n## Synchronicity\n\n### How to guarantee synchronicity. In particular how to avoid that in a big network different nodes see a proposal with 2c+1 votes but different votes and thus different random seed - Giacomo\n\n\u003e `Moh:` The assumption is that there exists some known finite time bound Δ and a special event called GST (Global Stabilization Time) such that:\n\n\u003e The adversary must cause the GST event to eventually happen after some unknown finite time. Any message sent at time x must be delivered by time $\\delta + \\text{max}(x,GST)$. In the Partial synchrony model, the system behaves asynchronously till GST and synchronously after GST.\n\n\u003e Moreover, votes travel one level at a time from tree leaves to the tree root. We only need the proof of votes of root+child committees to conclude with a high probability that the majority of nodes have voted.\n\n### That's a timeout? How does this work exactly without timing assumptions? Trying to find this in the document -Alvaro\n\n\u003e `Moh:` Each committee only verifies the votes of its child committees. Once a verified 2/3rd votes of its child members, it then sends it vote to its parent. In this way each layer of the tree verifies the votes (attests) the layer below. Thus, a node does not have to collect and verify 2/3rd of all thousands of votes (as done in other responsive BFTs) but only from its child nodes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["Carnot","consensus"]},"/private/roadmap/consensus/candidates/carnot/overview":{"title":"Carnot Overview","content":"\nCarnot (formerly LogosBFT) is a Byzantine Fault Tolerant (BFT) [consensus](roadmap/consensus/index.md) candidate for the Nomos Network that utilizes Fountain Codes and a committees tree structure to optimize message propagation in the presence of a large number of nodes, while maintaining high througput and fast finality. More specifically, these are the research contributions in Carnot. To our knowledge, Carnot is the first consensus protocol that can achieve together all of these properties:\n\n1. Scalability: Carnot is highly scalable, scaling to thousands of nodes.\n2. Responsiveness: The ability of a protocol to operate with the speed of a wire but not a maximum delay (block delay, slot time, etc.) is called responsiveness. Responsiveness reduces latency and helps the Carnot achieve Fast Finality. Moreover, it improves Carnot's resilience against adversaries that can slow down network traffic. \n3. Fork avoidance: Carnot avoids the formation of forks in a happy path. Forks formation has the following adverse consequences that the Carnot avoids.\n 1. Wastage of resources on orphan blocks and reduced throughput with increased latency for transactions in orphan blocks\n 2. 
Increased attack vector on PoS as attackers can employ a strategy to force the network to accept their fork resulting in increased stake for adversaries.\n\n- [FAQ](FAQ.md): Here is a page that tracks various questions people have around Carnot.\n\n## Work Streams\n\n### Current State of the Art\nAn ongoing survey of the current state of the art around Consensus Mechanisms and their peripheral dependencies is being conducted by Tuanir, and can be found in the following WIP Overleaf document: \n- [WIP Consensus SoK](https://www.overleaf.com/project/633acc1acaa6ffe456d1ab1f)\n\n### Committee Tree Overlay\nThe basis of Carnot is dependent upon establishing an committee overlay tree structure for message distribution. \n\nAn overview video can be found in the following link: \n- [Carnot Overview by Moh during Offsite](https://drive.google.com/file/d/17L0JPgC0L1ejbjga7_6ZitBfHUe3VO11/view?usp=sharing)\n\nThe details of this are being worked on by Moh and Alexander and can be found in the following overleaf documents: \n- [Moh's draft](https://www.overleaf.com/project/6341fb4a3cf4f20f158afad3)\n- [Alexander's notes on the statistical properties of committees](https://www.overleaf.com/project/630c7e20e56998385e7d8416)\n- [Alexander's python code for computing committee sizes](https://github.com/AMozeika/committees)\n\nA simulation notebook is being worked on by Corey to investigate the properties of various tree overlay structures and estimate their practical performance:\n- [Corey's Overlay Jupyter Notebook](https://github.com/logos-co/scratch/tree/main/corpetty/committee_sim)\n\n#### Failure Recovery\nThere exists a timeout that triggers an overlay reconfiguration. Currently work is being done to calculate the probabilities of another failure based on a given percentage of byzantine nodes within the network. \n- [Recovery Failure Probabilities]() - LINK TO WORK HERE\n\n### Random Beacon\nA random beacon is required to choose a leader and establish a seed for defining the overlay tree. Marcin is working on the various avenues. His previous presentations can be found in the following presentation slides (in chronological order):\n- [Intro to Multiparty Random Beacons](https://cloud.logos.co/index.php/s/b39EmQrZRt5rrfL)\n- [Circles of Trust](https://cloud.logos.co/index.php/s/NXJZX8X8pHg6akw)\n- [Compact Certificates of Knowledge](https://cloud.logos.co/index.php/s/oSJ4ykR4A55QHkG)\n\n### Erasure Coding (LT Codes / Fountain Codes / Raptor Codes)\nIn order to reduce message complexity during propagation, we are investigating the use of Luby Transform (LT) codes, more specifically [Fountain Codes](https://en.wikipedia.org/wiki/Fountain_code), to break up the block to be propagated to validators and recombined by local peers within a committee. \n- [LT Code implementation in Rust](https://github.com/chrido/fountain) - unclear about legal status of LT or Raptor Codes, it is currently under investigation.\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","Carnot"]},"/private/roadmap/consensus/candidates/claro":{"title":"Claro: Consensus Candidate","content":"\n\n\n**Claro** (formerly Glacier) is a consensus candidate for the Logos network that aims to be an improvement to the Avalanche family of consensus protocols. \n\n\n### Implementations\nThe protocol has been implemented in multiple languages to facilitate learning and testing. 
The individual code repositories can be found in the following links:\n- Rust (reference)\n- Python\n- Common Lisp\n\n### Simulations/Experiments/Analysis\nIn order to test the performance of the protocol, and how it stacked up to the Avalanche family of protocols, we have performed a multitude of simulations and experiments under various assumptions. \n- [Alvaro's initial Python implementations and simulation code](https://github.com/status-im/consensus-models)\n\n### Specification\nCurrently the Claro consensus protocol is being drafted into a specification so that other implementations can be created. It's draft resides under [Vac](https://vac.dev) and can be tracked [here](https://github.com/vacp2p/rfc/pull/512/)\n\n### Additional Information\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","claro"]},"/private/roadmap/consensus/development/overview":{"title":"Development Work","content":"","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/development/prototypes":{"title":"Consensus Prototypes","content":"\nConsensus Prototypes is a collection of Rust implementations of the [Consensus Candidates](tags/candidates)\n\n## Tiny Node\n\n\n## Required Roles\n- Lead Developer (filled)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/overview":{"title":"Consensus Work","content":"\nConsensus is the foundation of the network. It is how a group of peer-to-peer nodes understands how to agree on information in a distributed way, particuluarly in the presence of byzantine actors. \n\n## Consensus Roadmap\n### Consensus Candidates\n- [Carnot](private/roadmap/consensus/candidates/carnot/overview.md) - Carnot is the current leading consensus candidate for the Nomos network. It is designed to maximize efficiency of message dissemination while supoorting hundreds of thousands of full validators. It gets its name from the thermodynamic concept of the [Carnot Cycle](https://en.wikipedia.org/wiki/Carnot_cycle), which defines maximal efficiency of work from heat through iterative gas expansions and contractions. \n- [Claro](claro.md) - Claro is a variant of the Avalanche Snow family of protocols, designed to be more efficient at the decision making process by leveraging the concept of \"confidence\" across peer responses. \n\n\n### Theoretical Analysis\n- [snow-family](snow-family.md)\n\n### Development\n- [prototypes](prototypes.md)\n\n## Open Roles\n- [distributed-systems-researcher](distributed-systems-researcher.md)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus"]},"/private/roadmap/consensus/theory/overview":{"title":"Consensus Theory Work","content":"\nThis track of work is dedicated to creating theoretical models of distributed consensus in order to evaluate them from a mathematical standpoint. 
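Since this theory track models exactly the kind of repeated-sampling decision process that Claro refines with its notion of confidence, a minimal sketch of the generic Snow-family query loop may help anchor the analysis. This is the textbook Snowball-style mechanism, not the Claro protocol itself; the sample size `k`, quorum `alpha`, and peer model below are illustrative assumptions.

```python
# Generic Snow-family sampling loop (illustrative; not the Claro specification).
import random
from collections import Counter

def snow_round(peer_opinions, k, alpha, preference, confidence):
    """One query round: sample k peers; if one value clears the alpha quorum,
    bump its accumulated confidence and adopt it once it overtakes the current preference."""
    sample = random.sample(peer_opinions, k)
    value, votes = Counter(sample).most_common(1)[0]
    if votes >= alpha:
        confidence[value] = confidence.get(value, 0) + 1
        if confidence[value] > confidence.get(preference, 0):
            preference = value
    return preference, confidence

# Toy run: 100 peers with a 60/40 split of opinions.
peers = ["blue"] * 60 + ["red"] * 40
preference, confidence = "red", {}
for _ in range(30):
    preference, confidence = snow_round(peers, k=10, alpha=7,
                                        preference=preference, confidence=confidence)
print(preference, confidence)
```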
\n\n## Navigation\n- [Snow Family Analysis](snow-family.md)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory"]},"/private/roadmap/consensus/theory/snow-family":{"title":"Theoretical Analysis of the Snow Family of Consensus Protocols","content":"\nIn order to evaluate the properties of the Avalanche family of consensus protocols more rigorously than the original [whitepapers](), we work to create an analytical framework to explore and better understand the theoretical boundaries of the underlying protocols, and under what parameterizations they will break against a set of adversarial strategies.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory","snow"]},"/private/roadmap/networking/carnot-waku-specification":{"title":"A Specification proposal for using Waku for Carnot Consensus","content":"\n##### Definition Reference \n- $k$ - size of a given committee\n- $n_C$ - number of committees in the overlay, or nodes in the tree\n- $d$ - depth of the overlay tree\n- $n_d$ - number of committees at a given depth of the tree\n\n## Motivation\nIn #Carnot, an overlay is created to facilitate message distribution and voting aggregation. This document will focus on the differentiated channels of communication for message distribution. Whether or not voting aggregation and subsequent traversal back up the tree can utilize the same channels will be investigated later. \n\nThe overlay is described as a binary tree of committees, where an individual in each committee propagates messages to an assigned node in their two children committees of the tree, until the leaf nodes have received enough information to reconstitute the proposal block. \n\nThis communication protocol will naturally form \"pools of information streams\" that people will need to listen to in order to do their assigned work:\n- inner committee communication\n- parent-child chain communication\n- initial leader distribution\n\n### **inner committee communication** \nAll members of a given committee will need to gossip with each other in order to reform the initial proposal block.\n- This results in $n_C$ communication pools, each of size $k$.\n\n### **parent-child chain communication** \nThe formation of the committee and the lifecycle of a chunk of erasure coded data form a number of \"parent-child\" chains. \n- If we completely minimize the communication between committees, then this results in $k$ communication pools, each of size $n_C$.\n- It is not clear if individual levels of the tree need to \"execute\" the message to their children, or if the root committee can broadcast to everyone within its assigned parent-chain communication pool at the same time.\n- It is also unclear if individual levels of the tree need to send independent messages to each of their children, or if a unified communication pool can be leveraged at the tree-level. This results in $d$ communication pools of size $n_d$. \n\n### **initial leader distribution**\nFor each proposal, a leader needs to distribute the erasure coded proposal block to the root committee.\n- This results in a single communication pool of size $k(+1)$.\n- The $(+1)$ above is the leader, who could also be a part of the root committee. The leader changes with each block proposal, and we seek to minimize the time between leader selection and a round start. Thus, this results in a requirement that each node in the network must maintain a connection to every node in the root committee. 
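To make the pool bookkeeping above concrete, here is a small sketch that computes the counts and sizes implied by these definitions for a complete binary committee tree. The total node count, the committee size, and the complete-binary-tree shape are assumptions chosen for illustration, not values from the specification.

```python
# Illustrative sketch: communication pools implied by the definitions above,
# assuming committees are arranged as a complete binary tree.
from math import ceil, log2

def overlay_pools(n_nodes: int, k: int):
    n_c = ceil(n_nodes / k)   # n_C: number of committees in the overlay
    d = ceil(log2(n_c + 1))   # d: depth of a complete binary tree with n_c nodes
    return n_c, d, {
        "inner-committee":      {"pools": n_c, "size": k},
        "parent-child chains":  {"pools": k, "size": n_c},
        "per-level (alt.)":     {"pools": d, "size": "n_d at that depth"},
        "leader distribution":  {"pools": 1, "size": k + 1},
    }

n_c, d, pools = overlay_pools(n_nodes=10_000, k=500)
print(f"n_C = {n_c}, depth d = {d}")
for name, p in pools.items():
    print(f"{name}: {p['pools']} pool(s) of size {p['size']}")
```

With these example numbers (10,000 nodes in committees of 500), the overlay has 20 committees arranged over 5 levels.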
\n\n## Proposal\nThis part of the document will attempt to propose using various aspects of Waku, to facilitate both the setup of the above-mentioned communication pools as well as encryption schemes to add a layer of privacy (and hopefully efficiency) to message distribution. \n\nWe seek to minimize the availability of data such that an individual has only the information to do his job and nothing more.\n\nWe also seek to minimize the amount of messages being passed such that eventually everyone can reconstruct the initial proposal block\n\n`???` for Waku-Relay, 6 connections is optimal, resulting in latency ???\n\n`???` Is it better to have multiple pubsub topics with a simple encryption scheme or a single one with a complex encryption scheme\n\nAs there seems to be a lot of dynamic change from one proposal to the next, I would expect [`noise`](https://vac.dev/wakuv2-noise) to be a quality candidate to facilitate the creation of secure ephemeral keys in the to-be proposed encryption scheme. \n\nIt is also of interest how [`contentTopics`](https://rfc.vac.dev/spec/23/) can be leveraged to optimize the communication pools. \n\n## Whiteboard diagram and notes\n![Whiteboard Diagram](images/Overlay-Communications-Brainstorm.png)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku","carnot","networking","consensus"]},"/private/roadmap/networking/overview":{"title":"P2P Networking Overview","content":"\nThis page summarizes the work around the P2P networking layer of the Nomos project.\n\n## Waku\n[Waku](https://waku.org) is an privacy-preserving, ephemeral, peer-to-peer (P2P) messaging suite of protocols which is developed under [Vac](https://vac.dev) and maintained/productionized by the [Logos Collective](https://logos.co). \n\nIt is hopeful that Nomos can leverage the work of the Waku project to provide the P2P networking layer and peripheral services associated with passing messages around the network. Below is a list of the associated work to investigate the use of Waku within the Nomos Project. \n\n### Scalability and Fault-Tolerance Studies\nCurrently, the amount of research and analysis of the scalability of Waku is not sufficient to give enough confidence that Waku can serve as the networking layer for the Nomos project. Thusly, it is our effort to push this analysis forward by investigating the various boundaries of scale for Waku. Below is a list of endeavors in this direction which we hope serves the broader community: \n- [Status' use of Waku study w/ Kurtosis](status-waku-kurtosis.md)\n- [Using Waku for Carnot Overlay](carnot-waku-specification.md)\n\n### Rust implementations\nWe have created and maintain a stop-gap solution to using Waku with the Rust programming language, which is wrapping the [go-waku](https://github.com/status-im/go-waku) library in Rust and publishing it as a crate. This library allows us to do tests with our [Tiny Node](roadmap/development/prototypes.md#Tiny-Node) implementation more quickly while also providing other projects in the ecosystem to leverage Waku within their Rust codebases more quickly. \n\nIt is desired that we implement a more robust and efficient Rust library for Waku, but this is a significant amount of work. 
\n\nLinks:\n- [Rust bindings to go-waku repo](https://github.com/waku-org/waku-rust-bindings)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","overview"]},"/private/roadmap/networking/status-network-agents":{"title":"Status Network Agents Breakdown","content":"\nThis page creates a model to describe the impact of the various clients within the Status ecosystem by describing their individual contribution to the messages within the Waku network they leverage. \n\nThis model will serve to create a realistic network topology while also informing the appropriate _dimensions of scale_ that are relevant to explore in the [Status Waku scalability study](status-waku-kurtosis.md).\n\nStatus has three main clients that users interface with (in increasing \"network weight\" ordering):\n- Status Web\n- Status Mobile\n- Status Desktop\n\nEach of these clients has differing (on average) resources available to them, and thus provides and consumes different Waku protocols and services within the Status network. Here we will detail their associated messaging impact on the network using the following model:\n\n```\nAgent\n - feature\n - protocol\n - contentTopic, messageType, payloadSize, frequency\n```\n\nBy describing all `Agents` and their associated feature list, we should be able to do the following:\n\n- Estimate how much impact per unit time an individual `Agent` has on the Status network\n- Create a realistic network topology and usage within a simulation framework (_e.g._ Kurtosis)\n- Facilitate a Status Specification of `Agents`\n- Set an example for future agent-based modeling and simulation work for the Waku protocol suite \n\n## Status Web\n\n## Status Mobile\n\n## Status Desktop\nStatus Desktop serves as the backbone for the Status Network, as the software runs on hardware that has more available resources, typically has more stable and robust network connections, and generally has a drastically lower churn (or none at all). This results in it running the most Waku protocols for longer periods of time, resulting in the heaviest usage of the Waku network w.r.t. messaging. \n\nHere is the model breakdown of its usage:\n```\nStatus Desktop\n - Prekey bundle broadcast\n - Account sync\n - Historical message delivery\n - Waku-Relay (answering message queries)\n - Message propagation\n - Waku-Relay\n - Waku-Lightpush (receiving)\n```","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["status","waku","scalability"]},"/private/roadmap/networking/status-waku-kurtosis":{"title":"Status' use of Waku - A Scalability Study","content":"\n[Status](https://status.im) is the largest consumer of the Waku protocol, leveraging it for their entire networking stack. Their upcoming release of Status Desktop and the associated Communities product will heavily push the limits of what Waku can do. As mentioned in the [Networking Overview](private/roadmap/networking/overview.md) page, rigorous scalability studies of Waku (v2) have yet to be conducted. \n\nWhile these studies most immediately benefit the Status product suite, it behooves the Nomos Project to assist, as the lessons learned immediately inform us of the limits of what the Waku protocol suite can handle, and how that fits within our [Technical Requirements](private/requirements/overview.md).\n\nThis work has been kicked off as a partnership with the [Kurtosis](https://kurtosis.com) distributed systems development platform. 
It is our hope that the experience and accumen gained during this partnership and study will serve us in the future with respect to Nomos developme, and more broadly, all projects under the Logos Collective. \n\nAs such, here is an overview of the various resources towards this endeavor:\n- [Status Network Agent Breakdown](status-network-agents.md) - A document that describes the archetypal agents that participate in the Status Network and their associated Waku consumption.\n- [Wakurtosis repo](https://github.com/logos-co/wakurtosis) - A Kurtosis module to run scalability studies\n- [Waku Topology Test repo](https://github.com/logos-co/Waku-topology-test) - a Python script that facilitates setting up a reasonable network topology for the purpose of injecting the network configuration into the above Kurtosis repo\n- [Initial Vac forum post introducing this work](https://forum.vac.dev/t/waku-v2-scalability-studies/142)\n- [Waku Github Issue detailing work progression](https://github.com/waku-org/pm/issues/2)\n - this is also a place to maintain communications of progress\n- [Initial Waku V2 theoretical scalability study](https://vac.dev/waku-v1-v2-bandwidth-comparison)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","scalability","waku"]},"/private/roadmap/virtual-machines/overview":{"title":"overview","content":"\n## Motivation\nLogos seeks to use a privacy-first virtual machine for transaction execution. We believe this can only be acheived through zero-knowledge. The majority of current work in the field focuses more towards the aggregation and subsequent verification of transactions. This leads us to explore the researching and development of a privacy-first virtual machine. \n\nLINK TO APPROPRIATE NETWORK REQUIREMENTS HERE\n\n#### Educational Resources\n- primer on Zero Knowledge Virtual Machines - [link](https://youtu.be/GRFPGJW0hic)\n\n### Implementations:\n- TinyRAM - link\n- CairoVM\n- zkSync\n- Hermes\n- [MIDEN](https://polygon.technology/solutions/polygon-miden/) (Polygon)\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General Building Blocks of a ZK-VM\n- CPU\n\t- modeled with \"execution trays\"\n- RAM\n\t- overhead to look out for\n\t\t- range checks\n\t\t- bitwise operations\n\t\t- hashing\n- Specialized circuits\n- Recursion\n\n## Approaches\n- zk-WASM\n- zk-EVM\n- RISC-0\n\t- RISK-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t\t- https://youtu.be/2MXHgUGEsHs - Why use the RISC Zero zkVM?\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General workstreams\n- bytecode compiler\n- zero-knowledge circuit design\n- opcode architecture (???)\n- engineering\n- required proof system\n- control flow\n\t- MAST (as used in MIDEN)\n\n## Roles\n- [ZK Research Engineer](zero-knowledge-research-engineer.md)\n- Senior Rust Developer\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["virtual machines","zero knowledge"]},"/private/roles/distributed-systems-researcher":{"title":"Open Role: Distributed Systems Researcher","content":"\n\n## About Status\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. 
\n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. \n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception. Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n \n\n## Who are we?\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the Status Network. We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality\n\n## The job\n\n**Responsibilities:**\n- This role is dedicated to pure research\n- Primarily, ensuring that solutions are sound and diving deeper into their formal definition.\n- Additionally, he/she would be regularly going through papers, bringing new ideas and staying up-to-date.\n- Designing, specifying and verifying distributed systems by leveraging formal and experimental techniques.\n- Conducting theoretical and practical analysis of the performance of distributed systems.\n- Designing and analysing incentive systems.\n- Collaborating with both internal and external customers and the teams responsible for the actual implementation.\n- Researching new techniques for designing, analysing and implementing dependable distributed systems.\n- Publishing and presenting research results both internally and externally.\n\n \n**Ideally you will have:**\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]\n- Strong background in Computer Science and Math, or a related area.\n- Academic background (The ability to analyze, digest and improve the State of the Art in our fields of interest. 
Specifically, familiarity with formal proofs and/or the scientific method.)\n- Distributed Systems with a focus on Blockchain\n- Analysis of algorithms\n- Familiarity with Python and/or complex systems modeling software\n- Deep knowledge of algorithms (much more academic, such as have dealt with papers, moving from research to pragmatic implementation)\n- Experience in analysing the correctness and security of distributed systems.\n- Familiarity with the application of formal method techniques. \n- Comfortable with “reverse engineering” code in a number of languages including Java, Go, Rust, etc. Even if no experience in these languages, the ability to read and \"reverse engineer\" code of other projects is important.\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Capable of deep and creative thinking.\n- Passionate about blockchain technology in general.\n- Able to manage the uncertainties and ambiguities associated with working in a remote-first, distributed, decentralised environment.\n- A strong alignment to our principles: https://status.im/about/#our-principles\n\n\n**Bonus points:**\n- Experience working remotely. \n- Experience working for an open source organization. \n- TLA+/PRISM would be desirable.\n- PhD in Computer Science, Mathematics, or a related area. \n- Experience Multi-Party Computation and Zero-Knowledge Proofs\n- Track record of scientific publications.\n- Previous experience in remote or globally distributed teams.\n\n## Hiring process\n\nThe hiring process for this role will be:\n- Interview with our People Ops team\n- Interview with Alvaro (Team Lead)\n- Interview with Corey (Chief Security Officer)\n- Interview with Jarrad (Cofounder) or Daniel \n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n \n\n## Compensation\n\nWe are happy to pay salaries in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: https://people-ops.status.im/tag/perks/\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role"]},"/private/roles/rust-developer":{"title":"Rust Developer","content":"\n# Role: Rust Developer\nat Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is an organization building the tools and infrastructure for the advancement of a secure, private, and open web3. We have been completely distributed since inception. Our team is currently 100+ core contributors strong and welcomes a growing number of community members from all walks of life, scattered all around the globe. We care deeply about open source, and our organizational structure has a minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**About Logos**\n\nA group of Status Contributors is also involved in a new community lead project, called Logos, and this particular role will enable you to also focus on this project. Logos is a grassroots movement to provide trust-minimized, corruption-resistant governing services and social institutions to underserved citizens. 
\n\nLogos’ infrastructure will provide a base for the provisioning of the next-generation of governing services and social institutions - paving the way to economic opportunities for those who need them most, whilst respecting basic human rights through the network’s design.You can read more about Logos here: [in this small handbook](https://github.com/acid-info/public-assets/blob/master/logos-manual.pdf) for mindful readers like yourself.\n\n**Who are we?**\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the [Status Network](https://statusnetwork.com/). We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality.\n\n**Responsibilities:**\n\n- Develop and maintenance of internal rust libraries\n- 1st month: comfortable with dev framework, simulation app. Improve python lib?\n- 2th-3th month: Start dev of prototype node services\n\n**Ideally you will have:**\n\n- “Extensive” Rust experience (Async programming is a must) \n Ideally they have some GitHub projects to show\n- Experience with Python\n- Strong competency in developing and maintaining complex libraries or applications\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles) \n \n\n**Bonus points if**\n\n-  E.g. Comfortable working remotely and asynchronously\n-  Experience working for an open source organization.  \n-  Peer-to-peer or networking experience\n\n_[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]_\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)\n\n**Hiring Process** \n\nThe hiring process for this role will be:\n\n1. Interview with Maya (People Ops team)\n2. Interview with Corey (Logos Program Owner)\n3. Interview with Daniel (Engineering Lead)\n4. Interview with Jarrad (Cofounder)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role","engineering","rust"]},"/private/roles/zero-knowledge-research-engineer":{"title":"Zero Knowledge Research Engineer","content":"at Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. \n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 
\n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception.  Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**Who are we**\n\n[Vac](http://vac.dev/) **builds** [public good](https://en.wikipedia.org/wiki/Public_good) protocols for the decentralized web.\n\nWe do applied research based on which we build protocols, libraries and publications. Custodians of protocols that reflect [a set of principles](http://vac.dev/principles) - liberty, privacy, etc.\n\nYou can see a sample of some of our work here: [Vac, Waku v2 and Ethereum Messaging](https://vac.dev/waku-v2-ethereum-messaging), [Privacy-preserving p2p economic spam protection in Waku v2](https://vac.dev/rln-relay), [Waku v2 RFC](https://rfc.vac.dev/spec/10/). Our attitude towards ZK: [Vac \u003c3 ZK](https://forum.vac.dev/t/vac-3-zk/97).\n\n**The role**\n\nThis role will be part of a new team that will make a provable and private WASM engine that runs everywhere. As a research engineer, you will be responsible for researching, designing, analyzing and implementing circuits that allow for proving private computation of execution in WASM. This includes having a deep understanding of relevant ZK proof systems and tooling (zk-SNARK, Circom, Plonk/Halo 2, zk-STARK, etc), as well as different architectures (zk-EVM Community Effort, Polygon Hermez and similar) and their trade-offs. You will collaborate with the Vac Research team, and work with requirements from our new Logos program. As one of the first hires of a greenfield project, you are expected to take on significant responsibility,  while collaborating with other research engineers, including compiler engineers and senior Rust engineers. 
\n \n\n**Key responsibilities** \n\n- Research, analyze and design proof systems and architectures for private computation\n- Be familiar and adapt to research needs zero-knowledge circuits written in Rust Design and implement zero-knowledge circuits in Rust\n- Write specifications and communicate research findings through write-ups\n- Break down complex problems, and know what can and what can’t be dealt with later\n- Perform security analysis, measure performance of and debug circuits\n\n**You ideally will have**\n\n- Very strong academic or engineering background (PhD-level or equivalent in industry); relevant research experience\n- Experience with low level/strongly typed languages (C/C++/Go/Rust or Java/C#)\n- Experience with Open Source software\n- Deep understanding of Zero-Knowledge proof systems (zk-SNARK, circom, Plonk/Halo2, zk-STARK), elliptic curve cryptography, and circuit design\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles)\n\n**Bonus points if** \n\n- Experience in provable and/or private computation (zkEVM, other ZK VM)\n- Rust Zero Knowledge tooling\n- Experience with WebAssemblyWASM\n\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter].\n\n**Hiring process** \n\nThe hiring process for this role will be:\n\n1. Interview with Angel/Maya from our Talent team\n2. Interview with team member from the Vac team\n3. Pair programming task with the Vac team\n4. Interview with Oskar, the Vac team lead\n5. Interview with Jacek, Program lead\n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["engineering","role","zero knowledge"]},"/roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","content":"\n## Leads roundup - acid\n\n**Al / Comms**\n\n- Status app relaunch comms campaign plan in the works. Approx. date for launch 31.08.\n- Logos comms + growth plan post launch is next up TBD.\n- Will be waiting for specs for data room, raise etc.\n- Hires: split the role for content studio to be more realistic in getting top level talent.\n\n**Matt / Copy**\n\n- Initiative updating old documentation like CC guide to reflect broader scope of BUs\n- Brand guidelines/ modes of presentation are in process\n- Wikipedia entry on network states and virtual states is live on \n\n**Eddy / Digital Comms**\n\n- Logos Discord will be completed by EOD.\n- Codex Discord will be done tomorrow.\n - LPE rollout plan, currently working on it, will be ready EOW\n- Podcast rollout needs some\n- Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\n**Amir / Studio**\n\n- Started execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\n- Hires: still looking for 3 positions with main focus on developer side. 
\n\n**Jonny / Podcast**\n\n- Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\n- First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.\n\n**Louisa / Events**\n\n- Global strategy paper for wider comms plan.\n- Template for processes and executions when preparing events.\n- Decision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\n - Seoul Q4 hackathon is already in the works. Needs bounty planning.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","content":"\n## **Top level priorities:**\n\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\n\n## **Movement Building:**\n\n- Logos collective comms plan skeleton ready - will be applied for all BUs as next step\n- Goal is to have plan + overview to set realistic KPIs and expectations\n- Discord Server update on various views\n- Status relaunch comms plan is ready for input from John et al.\n- Reach out to BUs for needs and deliverables\n\n## **TD Studio**\n\nFull focus on LPE:\n- On track, target of end of august\n- review of options, more diverse landscape of content\n- Episodes page proposals\n- Players in progress\n- refactoring from prev code base\n- structure of content ready in GDrive\n\n## **Copy**\n\n- Content around LPE\n- Content for podcast launches\n- Status launch - content requirements to receive\n- Organization of doc sites review\n- TBD what type of content and how the generation workflows will look like\n\n## **Podcast**\n\n- Good state in editing and producing the shows\n- First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included. \n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n- 3 recorded for HIO, motion graphics in progress\n- First E2E podcast ready in 2 weeks for LPE\n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n\n## **DC Studio**\n\n- Brand guidelines for HiO are ready and set. 
Thanks `Shmeda`!\n- Logos State branding assets are being developed\n- Presentation templates update\n\n## **Events**\n\n- Network State event probably in Istanbul in November re: Devconnect will confirm shortly.\n- Program elements and speakers are top priority\n- Hackathon in Seoul in Q1 2024 - late Febuary probably\n- Jarrad will be speaking at HCPP and EthRome\n- Global event strategy written and in review\n- Lou presented social media and event KPIs on Paris event\n\n## **CRM \u0026 Marketing tool**\n\n- Get feedback from stakeholders and users\n- PM implementation to be planned (+- 3 month time TBD) with working group\n- LPE KPI: Collecting email addresses of relevant people\n- Careful on how we manage and use data, important for BizDev\n- Careful on which segments of the project to manage using the CRM as it can be very off brand","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","content":"\n## Milestones\n- [Zenhub Tracker](https://app.zenhub.com/workspaces/engineering-62cee4c7a335690012f826fa/roadmap)\n- [Miro Tracker](https://miro.com/app/board/uXjVOtZ40xI=/?share_link_id=33106977104)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones-overview"]},"/roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","content":"\n## Codex update 07/12/2023 to 07/21/2023\n\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc...\n\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing \u0026 infra related work going on.\n\nWe're also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\n\n### DevOps/Infrastructure:\n\n- Adopted nim-codex Docker builds for Dist Tests.\n- Ordered Dedicated node on Hetzner.\n- Configured Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Created Geth and Prometheus Docker images for Dist-Tests.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Set up Ingress Controller in Dist-Tests cluster.\n\n### Testing:\n\n- Set up deployer to gather metrics.\n- Debugging and identifying potential deadlock in the Codex client.\n- Added metrics, built image, and ran tests.\n- Updated dist-test log for Kibana compatibility.\n- Ran dist-tests on a new master image.\n- Debugging continuous tests.\n\n### Development:\n\n- Worked on codex-dht nimble updates and fixing key format issue.\n- Updated CI and split Windows CI tests to run on two CI machines.\n- Continued updating dependencies in codex-dht.\n- Fixed decoding large manifests ([PR #479](https://github.com/codex-storage/nim-codex/pull/497)).\n- Explored the existing implementation of NAT Traversal techniques in `nim-libp2p`.\n\n### Research\n\n- Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n - https://eprint.iacr.org/2021/1500.pdf\n - https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html\n - https://eprint.iacr.org/2021/1544.pdf\n- Onboarding Balázs as our ZK researcher/engineer\n- Continued 
research in DAS related topics\n - Running simulation on newly setup infrastructure\n- Devised a new direction to reduce metadata overhead and enable remote verification https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n- Looked into NAT Traversal ([issue #166](https://github.com/codex-storage/nim-codex/issues/166)).\n\n### Cross-functional (Combination of DevOps/Testing/Development):\n\n- Fixed discovery related issues.\n- Planned Codex Demo update for the Logos event and prepared environment for the demo.\n- Described requirements for Dist Tests logs format.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.\n- Hetzner Dedicated server was configured.\n- Set up Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper in Dist-Tests cluster.\n- Setup Grafana in Dist-Tests cluster.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Setup Ingress Controller in Dist-Tests cluster.\n\n---\n\n#### Conversations\n1. zk_id _—_ 07/24/2023 11:59 AM\n\u003e \n\u003e We've explored VDI for rollups ourselves in the last week, curious to know your thoughts\n2. dryajov _—_ 07/25/2023 1:28 PM\n\u003e \n\u003e It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it's definitely worth digging into. But I'm not sure what exactly you're interested in, in the context of rollups...\n1. zk_id _—_ 07/25/2023 3:28 PM\n \n The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n \n2. dryajov _—_ 07/25/2023 8:31 PM\n \n \u003e I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal.\n \n Yeah, great question. What follows is strictly IMO, as I haven't seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n \n - (A)VID - **dispersing** and storing data in a verifiable manner\n - Sampling - verifying already **dispersed** data\n \n tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. 
More details are here - [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\") ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network into following the chain?\n \n3. _[_8:31 PM_]_\n \n ## Dealing with dishonest majorities\n \n This is easy if all the data is downloaded by all nodes all the time, but we're trying to avoid just that. But let's assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can't, because proving data (un)availability isn't a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\") So, if there isn't much that can be done by detecting that a block isn't available, what good is it for? Well, nodes can still avoid following the unavailable chain and thus avoid being tricked by a dishonest majority. However, simply attesting that data has been published is not enough to prevent a dishonest majority from attacking the network. (edited)\n \n4. 
dryajov _—_ 07/25/2023 9:06 PM\n \n To complement, the relevant quote from [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\"), is:\n \n \u003e Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (\"fisherman\") has the ability to \"raise the alarm\" about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n \n The relevant quote from from [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\"), is:\n \n \u003e There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n \n Both articles are a bit old, but the intuitions still hold.\n \n\nJuly 26, 2023\n\n6. zk_id _—_ 07/26/2023 10:42 AM\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n7. _[_10:45 AM_]_\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n8. zk_id\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n ### dryajov _—_ 07/26/2023 4:42 PM\n \n Great! Glad to help anytime \n \n9. 
zk_id\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n dryajov _—_ 07/26/2023 4:43 PM\n \n Yes, I'd argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n \n10. _[_4:46 PM_]_\n \n Btw, there is probably more we can share/compare notes on in this problem space, we're looking at similar things, perhaps from a slightly different perspective in Codex's case, but the work done on DAS with the EF directly is probably very relevant for you as well \n \n\nJuly 27, 2023\n\n12. zk_id _—_ 07/27/2023 3:05 AM\n \n I would love to. Do you have those notes somewhere?\n \n13. zk_id _—_ 07/27/2023 4:01 AM\n \n all the links you have, anything, would be useful\n \n14. zk_id\n \n I would love to. Do you have those notes somewhere?\n \n dryajov _—_ 07/27/2023 4:50 PM\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n\nJuly 28, 2023\n\n16. zk_id _—_ 07/28/2023 5:47 AM\n \n Would love to see anything that is possible\n \n17. _[_5:47 AM_]_\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n18. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n dryajov _—_ 07/28/2023 4:07 PM\n \n Yes, we're also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n \n19. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n bkomuves _—_ 07/28/2023 4:44 PM\n \n my current view (it's changing pretty often :) is that there is tension between:\n \n - commitment cost\n - proof cost\n - and verification cost\n \n the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n\nJuly 29, 2023\n\n21. bkomuves\n \n my current view (it's changing pretty often :) is that there is tension between: \n \n - commitment cost\n - proof cost\n - and verification cost\n \n  the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n zk_id _—_ 07/29/2023 4:23 AM\n \n I agree. That's also my understanding (although surely much more superficial).\n \n22. 
_[_4:24 AM_]_\n \n There is also the dimension of computation vs size cost\n \n23. _[_4:25 AM_]_\n \n i.e. the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale comfortably to 10k nodes.\n \n24. _[_4:29 AM_]_\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n\nAugust 1, 2023\n\n26. dryajov\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n Leobago _—_ 08/01/2023 1:13 PM\n \n Not many public write-ups yet. You can find some content here:\n \n - [https://blog.codex.storage/data-availability-sampling/](https://blog.codex.storage/data-availability-sampling/ \"https://blog.codex.storage/data-availability-sampling/\")\n \n - [https://github.com/codex-storage/das-research](https://github.com/codex-storage/das-research \"https://github.com/codex-storage/das-research\")\n \n \n We also have a few Jupyter notebooks but they are not public yet. As soon as that content is out we can let you know 🙂\n \n27. zk_id\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. 
The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n dryajov _—_ 08/01/2023 1:55 PM\n \n This might interest you as well - [https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a \"https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a\")\n \n28. _[_1:56 PM_]_\n \n This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recommend reading the first article of the series, which it also links to.\n \n29. zk_id _—_ 08/01/2023 3:04 PM\n \n Thanks @dryajov @Leobago ! Much appreciated!\n \n30. _[_3:05 PM_]_\n \n Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I'm tackling starting today...\n \n31. zk_id _—_ 08/01/2023 6:34 PM\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way I can visualize the results?\n \n32. zk_id\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way I can visualize the results?\n \n Leobago _—_ 08/01/2023 6:36 PM\n \n Yes, check out the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures 🙂\n \n33. _[_6:37 PM_]_\n \n You might also find some bugs here and there on that branch 😅\n \n34. 
zk_id _—_ 08/01/2023 7:44 PM\n \n Thanks!","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","content":"\n# Codex update Aug 1st\n\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n - Work breakdown and review for Ben and Tomasz (epic coming up)\n - This is required to integrate the proving system\n\n### Milestone: Block discovery and retrieval\n\n- Some initial work breakdown and milestones here - https://docs.google.com/document/d/1hnYWLvFDgqIYN8Vf9Nf5MZw04L2Lxc9VxaCXmp9Jb3Y/edit\n - Initial analysis of block discovery - https://rpubs.com/giuliano_mega/1067876\n - Initial block discovery simulator - https://gmega.shinyapps.io/block-discovery-sim/\n\n### Milestone: Distributed Client Testing\n\n- Lots of work around log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - This is a first try at running against an L2\n - Mostly done, waiting on related fixes to land before merge - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Reservations and slot management\n\n- Lots of work around slot reservation and queuing https://github.com/codex-storage/nim-codex/pull/455\n\n## Remote auditing\n\n### Milestone: Implement Poseidon2\n\n- First pass at an implementation by Balazs\n - private repo, but can give access if anyone is interested\n\n### Milestone: Refine proving system\n\n- Lots of thinking around storage proofs and proving systems\n - private repo, but can give access if anyone is interested\n\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator.\n- Implemented logical error-rates and delays to interactions between DHT clients.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","content":"\n\n# Codex update August 11\n\n---\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504\n- Work on persisting/serializing Merkle Tree is underway, PR upcoming\n\n### Milestone: Block discovery and retrieval\n\n- Continued analysis of block discovery and retrieval - https://hackmd.io/_KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n - Reviewing papers on peer sampling and related topics\n - [Wormhole Peer Sampling paper](http://publicatio.bibl.u-szeged.hu/3895/1/p2p13.pdf)\n - [Smoothcache](https://dl.acm.org/doi/10.1145/2713168.2713182)\n- Starting work on simulations based on the above work\n\n### Milestone: Distributed Client Testing\n\n- Continuing work on log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n - More related issues/PRs:\n - https://github.com/codex-storage/infra-codex/pull/20\n- Testing and debugging Codex in continuous testing environment\n - Debugging continuous tests [cs-codex-dist-tests/pull/44](https://github.com/codex-storage/cs-codex-dist-tests/pull/44)\n - pod labeling [cs-codex-dist-tests/issues/39](https://github.com/codex-storage/cs-codex-dist-tests/issues/39)\n\n---\n## Infra\n\n### Milestone: Kubernetes Configuration and Management\n- Move Dist-Tests cluster to OVH and 
define naming conventions\n- Configure Ingress Controller for Kibana/Grafana\n- **Create documentation for Kubernetes management**\n- **Configure Dist/Continuous-Tests Pods logs shipping**\n\n### Milestone: Continuous Testing and Labeling\n- Watch the Continuous tests demo\n- Implement and configure Dist-Tests labeling\n- Set up logs shipping based on labels\n- Improve Docker workflows and add 'latest' tag\n\n### Milestone: CI/CD and Synchronization\n- Set up synchronization by codex-storage\n- Configure Codex Storage and Demo CI/CD environments\n\n---\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - Done but merge is blocked by a few issues - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Marketplace Sales\n\n- Lots of cleanup and refactoring\n - Finished refactoring state machine PR [link](https://github.com/codex-storage/nim-codex/pull/469)\n - Added support for loading node's slots during Sale's module start [link](https://github.com/codex-storage/nim-codex/pull/510)\n\n---\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator - https://github.com/cortze/py-dht.\n\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/innovation_lab/milestones-overview":{"title":"Innovation Lab Milestones Overview","content":"\niLab Milestones can be found on the [Notion Page](https://www.notion.so/Logos-Innovation-Lab-dcff7b7a984b4f9e946f540c16434dc9?pvs=4)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","content":"\n**Logos Lab** 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\n**Milestone**: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\n\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\n\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\n\n**Next milestone**: group chat support\n\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nLink to Payggy design files:\nhttps://scene.zeplin.io/project/64ae9e965652632169060c7d\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/UtVHf2EU\n\n--- \n\n#### Conversation\n\n1. petty _—_ 07/15/2023 5:49 AM\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n2. petty\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n3. 
attila🍀 _—_ 07/15/2023 6:18 AM\n \n at the moment most of the code is in the `waku-objects-playground` repo later we may split it to several repos here is the link: [https://github.com/logos-innovation-lab/waku-objects-playground](https://github.com/logos-innovation-lab/waku-objects-playground \"https://github.com/logos-innovation-lab/waku-objects-playground\")","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","content":"\n**Logos Lab** 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. \n\nStill, a lot of progress were made and the team released the first version of a color system in the form of an npm package, which lets the users to choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\n\n**Milestone**: group chat support\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nGrayscale design:\nhttps://grayscale.design/\n\nLuminance package on npm:\nhttps://www.npmjs.com/package/@waku-objects/luminance\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/ZMU4yyWG\n\n--- \n\n### Conversation\n\n1. fryorcraken _—_ Yesterday at 10:58 PM\n \n \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n \n While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n\nAugust 3, 2023\n\n2. fryorcraken\n \n \u003e \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. 
We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n3. attila🍀 _—_ Today at 4:21 AM\n \n This is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the `status-js-api` project but that still uses whisper and the repo recommends to use `js-waku` instead ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg) [https://github.com/status-im/status-js-api](https://github.com/status-im/status-js-api \"https://github.com/status-im/status-js-api\") Also I also found the `56/STATUS-COMMUNITIES` spec: [https://rfc.vac.dev/spec/56/](https://rfc.vac.dev/spec/56/ \"https://rfc.vac.dev/spec/56/\") It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n \n4. fryorcraken _—_ Today at 5:32 AM\n \n The repo is status-im/status-web\n \n5. _[_5:33 AM_]_\n \n Spec is [https://rfc.vac.dev/spec/55/](https://rfc.vac.dev/spec/55/ \"https://rfc.vac.dev/spec/55/\")\n \n6. fryorcraken\n \n The repo is status-im/status-web\n \n7. attila🍀 _—_ Today at 6:05 AM\n \n As constructive feedback I can tell you that it is not trivial to find it and use it in other projects It is presented as a React component without documentation and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app It seems to be using this package: [https://www.npmjs.com/package/@status-im/js](https://www.npmjs.com/package/@status-im/js \"https://www.npmjs.com/package/@status-im/js\") Which also does not have documentation I assume that package is built from this: [https://github.com/status-im/status-web/tree/main/packages/status-js](https://github.com/status-im/status-web/tree/main/packages/status-js \"https://github.com/status-im/status-web/tree/main/packages/status-js\") This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what are the requirements and high level architecture (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort in the documentation and the presentation as well and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-11":{"title":"2023-08-17 \u003cTEAM\u003e weekly","content":"\n\n# **Logos Lab** 11th of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nWe merged the group chat but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from product perspective as well. Spent the bigger part of the week with fixing these. We also registered a new domain, wakuplay.im where the latest version is deployed. 
It uses the Gnosis chain for transactions and currently the xDai and Gno tokens are supported, but it is easy to add other ERC-20 tokens now.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementaton has started.\n\n**Next milestone**: Basic Waku Objects website\n\nWork started toward having a structure for a website and the content is shaping up nicely. The implementation has been started on it as well.\n\nDeployed version of the main branch:\nhttps://www.wakuplay.im/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["\u003cTEAM\u003e-updates"]},"/roadmap/nomos/milestones-overview":{"title":"Nomos Milestones Overview","content":"\n[Milestones Overview Notion Page](https://www.notion.so/ec57b205d4b443aeb43ee74ecc91c701?v=e782d519939f449c974e53fa3ab6978c)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","content":"\n**Research**\n\n- Milestone 1: Understanding Data Availability (DA) Problem\n - High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\n - Explored the necessity and key challenges associated with DA.\n - In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\n - **Blocker:** The experimental tests for our specific EC scheme are pending, which is blocking progress to make final decision on KZG + commitments for our architecture.\n- Milestone 2: Privacy for Proof of Stake (PoS)\n - Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\n - Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.\n - Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\n**Development**\n\n- Milestone 1: Mixnet and Networking\n - Initiated integration of libp2p to be used as the full node's backend, planning to complete in the next phase.\n - Begun planning for the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\n- Milestone 2: Simulation Application\n - Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links:\n - Carnot Simulator pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/carnot_simulation_psuedocode.py)\n - Test pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/test_carnot_simulation.py)\n - Description of the simulation (https://www.notion.so/Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n - Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. 
The corresponding pull requests can be accessed here:\n - Simulation network fix (https://github.com/logos-co/nomos-node/pull/262)\n - Vote tally fix (https://github.com/logos-co/nomos-node/pull/268)\n - Increased run duration of integration tests (https://github.com/logos-co/nomos-node/pull/263)\n - Warding improvements (https://github.com/logos-co/nomos-node/pull/269)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","content":"\n**Nomos 31st July**\n\n[Network implementation and Mixnet]:\n\nResearch\n- Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\n- Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\n- Fixes were made on the Overlay interface.\n- Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\n- Link to libp2p PRs: https://github.com/logos-co/nomos-node/pull/278, https://github.com/logos-co/nomos-node/pull/279, https://github.com/logos-co/nomos-node/pull/280, https://github.com/logos-co/nomos-node/pull/281\n- Started working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\n\nResearch\n- Discussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\n- Reviews on the PPoS proposal were done.\n- A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\n- Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\n\nResearch\n- Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\n\n**Development**\n\n- Improved simulation application to meet test scale requirements (https://github.com/logos-co/nomos-node/pull/274).\n- Created a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\n\nResearch\n- Conducted an analysis of stored data \"degradation\" problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\n- Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","content":"\nNomos weekly report\n================\n\n### Network implementation and Mixnet:\n#### Research\n- Researched the Nym mixnet architecture in depth in order to design our prototype architecture.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1661386628)\n- Discussions about how to manage the mixnet topology.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1665101243)\n#### Development\n- Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n (Link: https://github.com/logos-co/nomos-node/pull/288)\n- Added support for libp2p in tests.\n (Link: https://github.com/logos-co/nomos-node/pull/287)\n- Added support for libp2p in nomos node.\n (Link: https://github.com/logos-co/nomos-node/pull/285)\n\n### Private PoS:\n#### Research\n- Worked 
on PPoS design and addressed potential metadata leakage due to staking and rewarding.\n- Focus on potential bribery attacks and privacy reasoning, but not much progress yet.\n- Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\n### Carnot:\n#### Research\n- Addressed two solutions for the bribery attack. Proposals pending.\n- Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\n- Modeled data decimation using a specific set of parameters and derived equations related to it.\n- Proposed solutions to address bribery attacks without compromising the protocol's scalability.\n\n### Data Availability Sampling (VID):\n#### Research\n- Analyzed data decimation in data availability problem.\n (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)\n- DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\n- Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn't fit our architecture.\n\n#### Testing and Node development:\n- Fixes and enhancements made to nomos-node.\n (Link: https://github.com/logos-co/nomos-node/pull/282)\n (Link: https://github.com/logos-co/nomos-node/pull/289)\n (Link: https://github.com/logos-co/nomos-node/pull/293)\n (Link: https://github.com/logos-co/nomos-node/pull/295)\n- Ran simulations with 10K nodes.\n- Updated integration tests in CI to use waku or libp2p network.\n (Link: https://github.com/logos-co/nomos-node/pull/290)\n- Fix for the node throughput during simulations.\n (Link: https://github.com/logos-co/nomos-node/pull/295)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","content":"\n\n# **Nomos weekly report 14th August**\n---\n\n## **Network Privacy and Mixnet**\n\n### Research\n- Mixnet architecture discussions. 
Potential agreement on architecture not very different from PoC\n- Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n### Development\n- Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]\n- Implementation of mixnode: a core module for implementing a mixnode binary\n- Implementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\n### **Private PoS**\n- No progress this week.\n\n---\n## **Data Availability**\n### Research\n- Continued analysis of node decay in data availability problem\n- Improved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]\n\n### Development\n- Library survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\n- RS \u0026 KZG benchmarking for our use case https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\n- Study documentation on Danksharding and set of questions for Leonardo [https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n---\n## **Testing, CI and Simulation App**\n\n### Development\n- Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]\n- Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]\n- CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]\n- Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]\n- Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]\n- Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/vac/milestones-overview":{"title":"Vac Milestones Overview","content":"\n[Overview Notion Page](https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632?pvs=4) - Information copied here for now\n\n## Info\n### Structure of milestone names:\n\n`vac:\u003cunit\u003e:\u003ctag\u003e:\u003cfor_project\u003e:\u003ctitle\u003e_\u003ccounter\u003e`\n- `vac` indicates it is a vac milestone\n- `unit` indicates the vac unit `p2p`, `dst`, `tke`, `acz`, `sc`, `zkvm`, `dr`, `rfc`\n- `tag` tags a specific area / project / epic within the respective vac unit, e.g. 
`nimlibp2p`, or `zerokit`\n- `for_project` indicates which Logos project the milestone is mainly for `nomos`, `waku`, `codex`, `nimbus`, `status`; or `vac` (meaning it is internal / helping all projects as a base layer)\n- `title` the title of the milestone\n- `counter` an optional counter; `01` is implicit; marked with a `02` onward indicates extensions of previous milestones\n\n## Vac Unit Roadmaps\n- [Roadmap: P2P](https://www.notion.so/Roadmap-P2P-a409c34cb95b4b81af03f60cbf32f9c1?pvs=21)\n- [Roadmap: Token Economics](https://www.notion.so/Roadmap-Token-Economics-e91f1cb58ebc4b1eb46b074220f535d0?pvs=21)\n- [Roadmap: Distributed Systems Testing (DST))](https://www.notion.so/Roadmap-Distributed-Systems-Testing-DST-4ef0d8694d3e40d6a0cfe706855c43e6?pvs=21)\n- [Roadmap: Applied Cryptography and ZK (ACZ)](https://www.notion.so/Roadmap-Applied-Cryptography-and-ZK-ACZ-00b3ba101fae4a099a2d7af2144ca66c?pvs=21)\n- [Roadmap: Smart Contracts (SC)](https://www.notion.so/Roadmap-Smart-Contracts-SC-e60e0103cad543d5832144d5dd4611a0?pvs=21)\n- [Roadmap: zkVM](https://www.notion.so/Roadmap-zkVM-59cb588bd2404e659633e008101310b5?pvs=21)\n- [Roadmap: Deep Research (DR)](https://www.notion.so/Roadmap-Deep-Research-DR-561a864c890549c3861bf52ab979d7ab?pvs=21)\n- [Roadmap: RFC Process](https://www.notion.so/Roadmap-RFC-Process-f8516d19132b41a0beb29c24510ebc09?pvs=21)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","content":"- *vc::Deep Research*\n - refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Prepared Paris talks\n - Implemented perf protocol to compare the performances with other libp2ps https://github.com/status-im/nim-libp2p/pull/925\n- *vsu::Tokenomics*\n - Fixing bugs on the SNT staking contract;\n - Definition of the first formal verification tests for the SNT staking contract;\n - Slides for the Paris off-site\n- *vsu::Distributed Systems Testing*\n - Replicated message rate issue (still on it)\n - First mockup of offline data\n - Nomos consensus test working\n- *vip::zkVM*\n - hiring\n - onboarding new researcher\n - presentation on ECC during Logos Research Call (incl. 
preparation)\n - more research on nova, considering additional options\n - Identified 3 research questions to be taken into consideration for the ZKVM and the publication\n - Researched Poseidon implementation for Nova, Nova-Scotia, Circom\n- *vip::RLNP2P*\n - finished rln contract for waku product - https://github.com/waku-org/rln-contract\n - fixed homebrew issue that prevented zerokit from building - https://github.com/vacp2p/zerokit/commit/8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\n - rln-relay: verify proofs based upon bandwidth usage - https://github.com/waku-org/nwaku/commit/3fe4522a7e9e48a3196c10973975d924269d872a\n - RLN contract audit cont' https://hackmd.io/@blockdev/B195lgIth\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","content":"\n**Last week**\n- *vc*\n - Vac day in Paris (13th)\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Paris offsite Paris (all CCs)\n- *vsu::Tokenomics*\n - Bugs found and solved in the SNT staking contract\n - attend events in Paris\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - QoS on all four infras\n - Continue work on theoretical gossipsub analysis (varying regular graph sizes)\n - Peer extraction using WLS (almost finished)\n - Discv5 testing\n - Wakurtosis CI improvements\n - Provide offline data\n- *vip::zkVM*\n - onboarding new researcher\n - Prepared and presented ZKVM work during VAC offsite\n - Deep research on Nova vs Stark in terms of performance and related open questions\n - researching Sangria\n - Worked on NEscience document (https://www.notion.so/Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\n - zerokit:\n - worked on PR for arc-circom\n- *vip::RLNP2P*\n - offsite Paris\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - EthCC \u0026 Logos event Paris (all CCs)\n- *vsu::Tokenomics*\n - Attend EthCC and side events in Paris\n - Integrate staking contracts with radCAD model\n - Work on a new approach for Codex collateral problem\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\n - Restructure the Analysis script and start modelling Status control messages\n - Split Wakurtosis analysis module into separate repository (delayed)\n - Deliver simulation results (incl fixing discv5 error with new Kurtosis version)\n - Second iteration Nomos CI\n- *vip::zkVM*\n - Continue researching on Nova open questions and Sangria\n - Draft the benchmark document (by the end of the week)\n - research hardware for benchmarks\n - research Halo2 cont'\n - zerokit:\n - merge a PR for deployment of arc-circom\n - deal with arc-circom master fail\n- *vip::RLNP2P*\n - offsite paris\n- *blockers*\n - *vip::zkVM:zerokit*: ark-circom deployment to crates io; contact to ark-circom team","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","content":"\nNOTE: This is a first experimental version moving towards the new 
reporting structure:\n\n**Last week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - related work section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - basic torpush encode/decode ( https://github.com/vacp2p/nim-libp2p-experimental/pull/1 )\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - (focus on Tor-push PoC)\n- *vsu::P2P*\n - admin/misc\n - EthCC (all CCs)\n- *vsu::Tokenomics*\n - admin/misc\n - Attended EthCC and side events in Paris\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - Kicked off a new approach for Codex collateral problem\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - Integrated SNT staking contracts with Python\n - milestone (50%, 2023/07/14) SNT litepaper\n - (delayed)\n - milestone(30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - milestone (95%, 2023/07/31) Wakurtosis Waku Report\n - Add timout to injection async call in WLS to avoid further issues (PR #139 https://github.com/vacp2p/wakurtosis/pull/139)\n - Plotting \u0026 analyse 100 msg/s off line Prometehus data\n - milestone (90%, 2023/07/31) Nomos CI testing\n - fixed errors in Nomos consensus simulation\n - milestone (30%, ...) gossipsub model analysis\n - add config options to script, allowing to load configs that can be directly compared to Wakurtosis results\n - added support for small world networks\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - (write ups will be available here: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Solved the open questions on Nova adn completed the document (will update the page)\n - Reviewed Nescience and working on a document\n - Reviewed partly the write up on FHE\n - writeup for Nova and Sangria; research on super nova\n - reading a new paper revisiting Nova (https://eprint.iacr.org/2023/969)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - zkvm\n - Researching Nova to understand the folding technique for ZKVM adaptation\n - zerokit\n - Rostyslav became circom-compat maintainer\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n - completed\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - admin/misc\n - EthCC + offsite\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - working on contributions section, based on https://hackmd.io/X1DoBHtYTtuGqYg0qK4zJw\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - working on establishing a connection via nim-libp2p tor-transport\n - setting up goerli test node (cont')\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - continue working on paper\n- *vsu::P2P*\n - milestone (...)\n - Implement ChokeMessage for GossipSub\n - Continue \"limited flood publishing\" (https://github.com/status-im/nim-libp2p/pull/911)\n- *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - Catch up with EthCC talks that we couldn't attend (schedule conflicts)\n - milestone (50%, 2023/07/14) SNT litepaper\n - Start building the SNT agent-based simulation\n- *vsu::Distributed Systems Testing*\n - milestone (100%, 2023/07/31) Wakurtosis Waku Report\n - 
finalize simulations\n - finalize report\n - milestone (100%, 2023/07/31) Nomos CI testing\n - finalize milestone\n - milestone (30%, ...) gossipsub model analysis\n - Incorporate Status control messages\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - Refine the Nescience WIP and FHE documents\n - research HyperNova\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n - zkvm\n - zerokit\n - circom: reach an agreement with other maintainers on master branch situation\n- *vip::RLNP2P*\n - maintenance\n - investigate why docker builds of nwaku are failing [zerokit dependency related]\n - documentation on how to use rln for projects interested (https://discord.com/channels/864066763682218004/1131734908474236968/1131735766163267695)(https://ci.infra.status.im/job/nim-waku/job/manual/45/console)\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - revert rln bandwidth reduction based on offsite discussion, move to different validator\n- *blockers*","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","content":"\n- *vc::Deep Research*\n - milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n - proposed solution section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - establishing torswitch and testing code\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - addressed feedback on current version of paper\n- *vsu::P2P*\n - nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH's EIP-4844\n - Merged IDontWant (https://github.com/status-im/nim-libp2p/pull/934) \u0026 Limit flood publishing (https://github.com/status-im/nim-libp2p/pull/911) 𝕏\n - This wraps up the \"mandatory\" optimizations for 4844. 
We will continue working on stagger sending and other optimizations\n - nim-libp2p: (70%, 2023/07/31) WebRTC transport\n- *vsu::Tokenomics*\n - admin/misc\n - 2 CCs off for the week\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - milestone (50%, 2023/07/14) SNT litepaper\n - milestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - admin/misc\n - Analysis module extracted from wakurtosis repo (https://github.com/vacp2p/wakurtosis/pull/142, https://github.com/vacp2p/DST-Analysis)\n - hiring\n - milestone (99%, 2023/07/31) Wakurtosis Waku Report\n - Re-run simulations\n - merge Discv5 PR (https://github.com/vacp2p/wakurtosis/pull/129).\n - finalize Wakurtosis Tech Report v2\n - milestone (100%, 2023/07/31) Nomos CI testing\n - delivered first version of Nomos CI integration (https://github.com/vacp2p/wakurtosis/pull/141)\n - milestone (30%, 2023/08/31) gossipsub model: Status control messages\n - Waku model is updated to model topics/content-topics\n- *vip::zkVM*\n - milestone (50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - achievement :: nova questions answered (see document in Project: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Nescience WIP done (to be delivered next week, priority)\n - FHE review (lower prio)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Working on discoveries about other benchmarks done on plonky2, starky, and halo2\n - zkvm\n - zerokit\n - fixed ark-circom master\n - achievement :: publish ark-circom https://crates.io/crates/ark-circom\n - achievement :: publish zerokit_utils https://crates.io/crates/zerokit_utils\n - achievement :: publish rln https://crates.io/crates/rln (𝕏 jointly with RLNP2P)\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n - Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - https://github.com/vacp2p/rln-contract and http://github.com/waku-org/waku-rln-contract\n - Deployed to sepolia\n - Fixed rln enabled docker image building in nwaku - https://github.com/waku-org/nwaku/pull/1853\n - zerokit:\n - achievement :: zerokit v0.3.0 release done - https://github.com/vacp2p/zerokit/releases/tag/v0.3.0 (𝕏 jointly with zkVM)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","content":"\n\nMore info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week):\nhttps://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n**Vac week 32** August 7th\n- *vsu::P2P*\n - `vac:p2p:nim-libp2p:vac:maintenance`\n - Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n - `vac:p2p:nim-chronos:vac:maintenance`\n - Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n - Investigate flaky test using REUSE_PORT\n- *vsu::Tokenomics*\n - (...)\n- *vsu::Distributed Systems Testing*\n - `vac:dst:wakurtosis:waku:techreport`\n - delivered: Wakurtosis Tech Report v2 (https://docs.google.com/document/d/1U3bzlbk_Z3ZxN9tPAnORfYdPRWyskMuShXbdxCj4xOM/edit?usp=sharing)\n - `vac:dst:wakurtosis:vac:rlog`\n - working on research log post on Waku Wakurtosis simulations\n - 
`vac:dst:gsub-model:status:control-messages`\n - delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n - `vac:dst:gsub-model:vac:refactoring`\n - Refactoring and bug fixes\n - introduced and tested 2 new analytical models\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - delivered: extracted into separate module, independent of wls message\n - `vac:dst:wakurtosis:nomos:ci-integration_02`\n - planning\n - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n - planning; check usage of new codex simulator tool (https://github.com/codex-storage/cs-codex-dist-tests)\n- *vip::zkVM*\n - `vac:zkvm::vac:research-existing-proof-systems`\n - 90% Nescience WIP done – to be reviewed carefully since no other follow-up documents were given to me\n - 50% FHE review - needs to be refined and summarized\n - finished SuperNova writeup ( https://www.notion.so/SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\n - researched starky\n - 80% Halo2 notes ( https://www.notion.so/halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n - `vac:zkvm::vac:proof-system-benchmarks`\n - More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\n - Viewed some circuits on Nova and Poseidon\n - Read through Halo2 code (and Poseidon code) from Axiom\n- *vip::RLNP2P*\n - `vac:acz:rlnp2p:waku:production-readiness`\n - Waku rln contract registry - https://github.com/waku-org/waku-rln-contract/pull/3\n - mark duplicated messages as spam - https://github.com/waku-org/nwaku/pull/1867\n - use waku-org/waku-rln-contract as a submodule in nwaku - https://github.com/waku-org/nwaku/pull/1884\n - `vac:acz:zerokit:vac:maintenance`\n - Fixed atomic_operation ffi edge case error - https://github.com/vacp2p/zerokit/pull/195\n - docs cleanup - https://github.com/vacp2p/zerokit/pull/196\n - fixed version tags - https://github.com/vacp2p/zerokit/pull/194\n - released zerokit v0.3.1 - https://github.com/vacp2p/zerokit/pull/198\n - marked all functions as virtual in rln-contract for inheritors - https://github.com/vacp2p/rln-contract/commit/a092b934a6293203abbd4b9e3412db23ff59877e\n - make nwaku use zerokit v0.3.1 - https://github.com/waku-org/nwaku/pull/1886\n - rlnp2p implementers draft - https://hackmd.io/@rymnc/rln-impl-w-waku\n - `vac:acz:zerokit:vac:zerokit-v0.4`\n - zerokit v0.4.0 release planning - https://github.com/vacp2p/zerokit/issues/197\n- *vc::Deep Research*\n - `vac:dr:valpriv:vac:tor-push-poc`\n - redesigned the torpush integration in nimbus https://github.com/vacp2p/nimbus-eth2-experimental/pull/2\n - `vac:dr:valpriv:vac:tor-push-relwork`\n - Addressed further comments in paper, improved intro, added source level variation approach\n - `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - cont' work on the document","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-14":{"title":"2023-08-14 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n# Vac week 33 August 14th\n\n---\n## *vsu::P2P*\n### `vac:p2p:nim-libp2p:vac:maintenance`\n- Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n- delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925\n- delivered: Test-plans for the perf protocol https://github.com/lchenut/test-plans/tree/perf-nim\n- Bandwidth estimate as a parameter (waiting for final review) 
https://github.com/status-im/nim-libp2p/pull/941\n### `vac:p2p:nim-chronos:vac:maintenance`\n- delivered: Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n- delivered: fixed flaky test using REUSE_PORT https://github.com/status-im/nim-chronos/pull/438\n\n---\n## *vsu::Tokenomics*\n - admin/misc:\n - (5 CC days off)\n### `vac:tke::codex:economic-analysis`\n- Filecoin economic structure and Codex token requirements\n### `vac:tke::status:SNT-staking`\n- tests with the contracts\n### `vac:tke::nomos:economic-analysis`\n- resume discussions with Nomos team\n\n---\n## *vsu::Distributed Systems Testing (DST)*\n### `vac:dst:wakurtosis:waku:techreport`\n- 1st Draft of Wakurtosis Research Blog (https://github.com/vacp2p/vac.dev/pull/123)\n- Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n### `vac:dst:shadow:vac:basic-shadow-simulation`\n- Basic Shadow Simulation of a gossipsub node (Setup, 5 nodes)\n### `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n- Try and plan on how to refactor/generalize testing tool from Codex.\n- Learn more about Kubernetes\n### `vac:dst:wakurtosis:nomos:ci-integration_02`\n- Enable subnetworks\n- Plan how to use wakurtosis with fixed version\n### `vac:dst:eng:vac:bundle-simulation-data`\n- Run requested simulations\n\n---\n## *vsu::Smart Contracts (SC)*\n### `vac:sc::vac:secureum-upskilling`\n - Learned about\n - cold vs warm storage reads and their gas implications\n - UTXO vs account models\n - `DELEGATECALL` vs `CALLCODE` opcodes, `CREATE` vs `CREATE2` opcodes; Yul Assembly\n - Unstructured proxies https://eips.ethereum.org/EIPS/eip-1967\n - C3 Linearization https://forum.openzeppelin.com/t/solidity-diamond-inheritance/2694 (Diamond inheritance and resolution)\n - Uniswap deep dive\n - Finished Secureum slot 2 and 3\n### `vac:sc::vac:maintainance/misc`\n - Introduced Vac's own `foundry-template` for smart contract projects\n - Goal is to have the same project structure across projects\n - Github repository: https://github.com/vacp2p/foundry-template\n\n---\n## *vsu::Applied Cryptography \u0026 ZK (ACZ)*\n - `vac:acz:zerokit:vac:maintenance`\n - PR reviews https://github.com/vacp2p/zerokit/pull/200, https://github.com/vacp2p/zerokit/pull/201\n\n---\n## *vip::zkVM*\n### `vac:zkvm::vac:research-existing-proof-systems`\n- delivered Nescience WIP doc\n- delivered FHE review\n- delivered Nova vs Sangria done - Some discussions during the meeting\n- started HyperNova writeup\n- started writing a trimmed version of FHE writeup\n- researched CCS (for HyperNova)\n- Research Protogalaxy https://eprint.iacr.org/2023/1106 and Protostar https://eprint.iacr.org/2023/620.\n### `vac:zkvm::vac:proof-system-benchmarks`\n- More work on benchmarks is ongoing\n- Putting down a document that explains the differences\n\n---\n## *vc::Deep Research*\n### `vac:dr:valpriv:vac:tor-push-poc`\n- revised the code for PR\n### `vac:dr:valpriv:vac:tor-push-relwork`\n- added section for mixnet, non-Tor/non-onion routing-based anonymity network\n### `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n- Used the Shadow simulator to run a first GossipSub simulation\n### `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n- Finalized 1st draft of the GossipSub scaling article","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","content":"\n```mermaid\n%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': 
'#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n```\n\n## Completion Deliverable\nTBD\n\n## Epics\n- [Github Issue Tracker](https://github.com/waku-org/pm/issues/12)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","content":"\n- 90% - [Waku Network support for 10k users](roadmap/waku/milestone-waku-10-users.md)\n- 80% - Waku Network support for 1MM users\n- 65% - Restricted-run (light node) protocols are production ready\n- 60% - Peer management strategy for relay and light nodes are defined and implemented\n- 10% - Quality processes are implemented for `nwaku` and `go-waku`\n- 80% - Define and track network and community metrics for continuous monitoring improvement\n- 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n- 15% - Dogfooding of RLN by platforms has started\n- 06% - First protocol to incentivize operators has been defined","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","content":"\nDisclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.\n\n---\n\n## Docs\n\n### **Milestone**: Foundation for Waku docs (done)\n\n#### _achieved_:\n- overall layout\n- concept docs\n- community/showcase pages\n\n### **Milestone**: Foundation for node operator docs (done)\n#### _achieved_:\n- nodes overview page\n- guide for running nwaku (binaries, source, docker)\n- peer discovery config guide\n- reference docs for config methods and options\n\n### **Milestone**: Foundation for js-waku docs\n#### _achieved_:\n- js-waku overview + installation guide\n- lightpush + filter guide\n- store guide\n- @waku/create-app guide\n\n#### _next:_\n- improve @waku/react guide\n\n#### _blocker:_\n- polyfills issue with [js-waku](https://github.com/waku-org/js-waku/issues/1415)\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n### **Milestone**: Running nwaku in the cloud\n### **Milestone**: Add Waku guide to learnweb3.io\n### **Milestone**: Encryption docs for js-waku\n### **Milestone**: Advanced node operator doc (postgres, WSS, monitoring, common config)\n### **Milestone**: Foundation for go-waku docs\n### **Milestone**: Foundation for rust-waku-bindings docs\n### **Milestone**: Waku architecture docs\n### **Milestone**: Waku detailed roadmap and milestones\n### **Milestone**: Explain RLN\n\n---\n\n## Eco Dev (WIP)\n\n### **Milestone**: EthCC Logos side event organisation (done)\n### **Milestone**: Community Growth\n#### _achieved_: \n- Wrote several bounties, improved template; setup onboarding flow in Discord.\n\n#### _next_: \n- Review template, publish on GitHub\n\n### **Milestone**: Business Development (continuous)\n#### _achieved_: \n- Discussions with various leads in EthCC\n#### _next_: \n- Booking calls with said leads\n\n### **Milestone**: Setting Up Content Strategy for Waku\n\n#### _achieved_: \n- Discussions with Comms Hubs re Waku Blog \n- expressed needs and intent around future blog post and needed amplification\n- discuss strategies to onboard/involve non-dev and potential CTAs.\n\n### **Milestone**: Web3Conf (dates)\n### **Milestone**: DeCompute 
conf\n\n---\n\n## Research (WIP)\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- rendezvous hashing \n- weighting function \n- updated LIGHTPUSH to handle autosharding\n\n#### _next:_\n- update FILTER \u0026 STORE for autosharding\n\n---\n\n## nwaku (WIP)\n\n### **Milestone**: Postgres integration.\n#### _achieved:_\n- nwaku can store messages in a Postgres database\n- we started to perform stress tests\n\n#### _next:_\n- Analyse why some messages are not stored during stress tests; this happened with both SQLite and Postgres, so the issue may not be directly related to _store_.\n\n### **Milestone**: nwaku as a library (C-bindings)\n#### _achieved:_\n- The integration is in progress through the N-API framework\n\n#### _next:_\n- Make the Node.js integration work properly by running the _nwaku_ node in a separate thread.\n\n---\n\n## go-waku (WIP)\n\n\n---\n\n## js-waku (WIP)\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved:_ \n- spec test for connection manager\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n### **Milestone**: Static Sharding\n#### _next_: \n- start implementation of static sharding in js-waku\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- js-libp2p upgrade to remove usage of polyfills (draft PR)\n\n#### _next_: \n- merge and release js-libp2p upgrade\n\n### **Milestone**: Waku Relay in the Browser\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","content":"\n## Docs\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n#### _next:_ \n- rewrite docs in British English\n### **Milestone**: Running nwaku in the cloud\n#### _next:_ \n- publish guides for Digital Ocean, Oracle, Fly.io\n\n---\n## Eco Dev (WIP)\n\n---\n## Research\n\n### **Milestone**: Detailed network requirements and task breakdown\n#### _achieved:_ \n- gathering rough network requirements\n#### _next:_ \n- detailed task breakdown per milestone and effort allocation\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- update FILTER \u0026 STORE for autosharding\n#### _next:_ \n- RFC review \u0026 updates \n- code review \u0026 updates\n\n---\n## nwaku\n\n### **Milestone**: nwaku release process automation\n#### _next_:\n- setup automation to test/simulate current `master` to prevent/limit regressions\n- expand target architectures and platforms for release artifacts (e.g. arm64, Win...)\n### **Milestone**: HTTP Rest API for protocols\n#### _next:_ \n- Filter API added \n- tests to complete.\n\n---\n## go-waku\n\n### **Milestone**: Increase Maintainability Score. Refer to [CodeClimate report](https://codeclimate.com/github/waku-org/go-waku)\n#### _next:_ \n- define scope on which issues reported by CodeClimate should be fixed. 
Initially it should be limited to reduce code complexity and duplication.\n\n### **Milestone**: RLN updates, refer [issue](https://github.com/waku-org/go-waku/issues/608).\n_achieved_:\n- expose `set_tree`, `key_gen`, `seeded_key_gen`, `extended_seeded_keygen`, `recover_id_secret`, `set_leaf`, `init_tree_with_leaves`, `set_metadata`, `get_metadata` and `get_leaf` \n- created an example on how to use RLN with go-waku\n- service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n#### _next_: \n- merkle tree batch operations (in progress) \n- usage of persisted merkle tree db\n\n### **Milestone**: Improve test coverage for functional tests of all protocols. Refer to [CodeClimate report]\n#### _next_: \n- define scope on which code sections should be covered by tests\n\n### **Milestone**: C-Bindings\n#### _next_: \n- update API to match nwaku's (by using callbacks instead of strings that require freeing)\n\n---\n## js-waku\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n#### _next_: \n- fallback improvement for peer connect rejection\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n#### _next_: \n- more robust support around peer-exchange for examples\n### **Milestone**: Static Sharding\n#### _achieved_: \n- WIP implementation of static sharding in js-waku\n#### _next_: \n- investigation around gauging connection loss;\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- improve \u0026 update @waku/react \n- merge and release js-libp2p upgrade\n\n#### _next:_\n- update examples to latest release + make sure no old/unused packages there\n\n### **Milestone**: Maintenance\n#### _achieved_: \n- update to libp2p@0.46.0\n#### _next_:\n- suite of optional tests in pipeline\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","content":"\nMilestones for current work are created and used. 
Next steps are:\n1) Refine scope of [research work](https://github.com/waku-org/research/issues/3) for rest of the year and create matching milestones for research and waku clients\n2) Review work not coming from research and setting dates\nNote that format matches the Notion page but can be changed easily as it's scripted\n\n\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n- _blocker_: \n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Docker compose with `nwaku` + `postgres` + `prometheus` + `grafana` + `postgres_exporter` https://github.com/alrevuelta/nwaku-compose/pull/3\n- _next_: Carry on with stress testing\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: feedback/update cycles for FILTER \u0026 LIGHTPUSH\n- _next_: New fleet, updating ENR from live subscriptions and merging\n- _blocker_: Architecturally it seams difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\n**[Move Waku v1 and Waku-Bridge to new repos](https://github.com/waku-org/nwaku/issues/1767)** {E:2023-qa}\n\n- _achieved_: Removed v1 and wakubridge code from nwaku repo\n- _next_: Remove references to `v2` from nwaku directory structure and documents\n\n**[nwaku c-bindings](https://github.com/waku-org/nwaku/issues/1332)** {E:2023-many-platforms}\n\n- _achieved_:\n - Moved the Waku execution into a secondary working thread. Essential for NodeJs.\n - Adapted the NodeJs example to use the `libwaku` with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing. \n- _next_: start applying the thread-safety recommendations https://github.com/waku-org/nwaku/issues/1878\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. 
Get v2 filter service supported from node.\n\n---\n## js-waku\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\n- _next_: saving successfully connected PX peers to local storage for easier connections on reload\n\n**[Waku Relay scalability in the Browser](https://github.com/waku-org/js-waku/issues/905)** {NO EPIC}\n\n- _achieved_: draft of direct browser-browser RTC example https://github.com/waku-org/js-waku-examples/pull/260 \n- _next_: improve the example (connection re-usage), work on contentTopic based RTC example\n\n---\n## go-waku\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: updated c-bindings to use callbacks\n- _next_: refactor v1 encoding functions and update RFC\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Enabled -race flag and ran all unit tests to identify data races.\n- _next_: Fix issues reported by the data race detector tool\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\n- _next_: resume onchain sync from persisted tree db\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: Basic peer management to ensure standard in/out ratio for relay peers.\n- _next_: add service slots to peer manager\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: production of swags and marketing collaterals for web3conf completed\n- _next_: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)** {E:2023-eco-growth}\n\n- _next_: create guide on `@waku/react` and debugging js-waku web apps\n\n**[Docs general improvement/incorporating feedback (2023)](https://github.com/waku-org/docs.waku.org/issues/102)** {E:2023-eco-growth}\n\n- _achieved_: rewrote the docs in UK English\n- _next_: update docs terms, announce js-waku docs\n\n**[Foundation of js-waku docs](https://github.com/waku-org/docs.waku.org/issues/101)** {E:2023-eco-growth}\n\n_achieved_: added guide on js-waku bootstrapping\n\n---\n## Research\n\n**[1.1 Network requirements and task breakdown](https://github.com/waku-org/research/issues/6)** {E:2023-1mil-users}\n\n- _achieved_: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\n- _next_: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","content":"\n\n# 2023-08-14 Waku weekly\n---\n## Epics\n\n**[Waku Network Can Support 10K Users](https://github.com/waku-org/pm/issues/12)** {E:2023-10k-users}\n\nAll software has been delivered. 
Pending items are:\n- Running stress testing on PostgreSQL to confirm performance gain https://github.com/waku-org/nwaku/issues/1894\n- Setting up a staging fleet for Status to try static sharding\n- Running simulations for Store protocol: [Will confirm with Vac/DST on dates/commitment](https://github.com/vacp2p/research/issues/191#issuecomment-1672542165) and probably move this to [1mil epic](https://github.com/waku-org/pm/issues/31)\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\n- _next_: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\n- _blocker_: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)**\n\n- _next_: document notes/recommendations for NodeJS, begin docs on `js-waku` encryption\n\n---\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: minor CI fixes and improvements\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Learned that the insertion rate is constrained by the `relay` protocol. i.e. the maximum insert rate is limited by `relay` so I couldn't push the \"insert\" operation to a limit from a _Postgres_ point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the _relay_ protocol doesn't process all of them.\n- _next_: Carry on with stress testing. Analyze the performance differences between _Postgres_ and _SQLite_ regarding the _read_ operations.\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: many feedback/update cycles for FILTER, LIGHTPUSH, STORE \u0026 RFC\n- _next_: updating ENR for live subscriptions\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. 
Add more tests.\n\n---\n## js-waku\n\n**[Maintenance](https://github.com/waku-org/js-waku/issues/1455)** {E:2023-qa}\n\n- achieved: upgrade libp2p \u0026 chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict \n\n**[Developer Experience (2023)](https://github.com/waku-org/js-waku/issues/1453)** {E:2023-eco-growth}\n\n- _achieved_: non blocking pipeline step (https://github.com/waku-org/js-waku/issues/1411)\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: close the \"fallback mechanism for peer rejections\", refactor peer-exchange compliance test\n- _next_: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n---\n## go-waku\n\n**[Maintenance](https://github.com/waku-org/go-waku/issues/634)** {E:2023-qa}\n\n- _achieved_: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Fixed issues reported by the data race detector tool.\n- _next_: identify areas where test coverage needs improvement.\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\n- _next_: interop with nwaku\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: add service slots to peer manager.\n- _next_: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]}} \ No newline at end of file diff --git a/indices/contentIndex.45924f9af8144078960b64186b90a620.min.json b/indices/contentIndex.45924f9af8144078960b64186b90a620.min.json deleted file mode 100644 index 0b06d2729..000000000 --- a/indices/contentIndex.45924f9af8144078960b64186b90a620.min.json +++ /dev/null @@ -1 +0,0 @@ -{"/":{"title":"Logos Technical Roadmap and Activity","content":"This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. 
To learn more about the motivation, please visit the [Logos Collective Site](https://logos.co).\n\n## Navigation\n\n### Waku\n- [Milestones](roadmap/waku/milestones-overview.md)\n- [weekly updates](tags/waku-updates)\n\n### Codex\n- [Milestones](roadmap/codex/milestones-overview.md)\n- [weekly updates](tags/codex-updates)\n\n### Nomos\n- [Milestones](roadmap/nomos/milestones-overview.md)\n- [weekly updates](tags/nomos-updates)\n\n### Vac\n- [Milestones](roadmap/vac/milestones-overview.md)\n- [weekly updates](tags/vac-updates)\n\n### Innovation Lab\n- [Milestones](roadmap/innovation_lab/milestones_overview.md)\n- [weekly updates](tags/ilab-updates)\n### Comms (Acid Info)\n- [Milestones](roadmap/acid/milestones-overview.md)\n- [weekly updates](tags/acid-updates)\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":{"title":"CJK + Latex Support (测试)","content":"\n## Chinese, Japanese, Korean Support\n几乎在我们意识到之前,我们已经离开了地面。\n\n우리가 그것을 알기도 전에 우리는 땅을 떠났습니다.\n\n私たちがそれを知るほぼ前に、私たちは地面を離れていました。\n\n## Latex\n\nBlock math works with two dollar signs `$$...$$`\n\n$$f(x) = \\int_{-\\infty}^\\infty\n f\\hat(\\xi),e^{2 \\pi i \\xi x}\n \\,d\\xi$$\n\t\nInline math also works with single dollar signs `$...$`. For example, Euler's identity but inline: $e^{i\\pi} = 0$\n\nAligned equations work quite well:\n\n$$\n\\begin{aligned}\na \u0026= b + c \\\\ \u0026= e + f \\\\\n\\end{aligned}\n$$\n\nAnd matrices\n\n$$\n\\begin{bmatrix}\n1 \u0026 2 \u0026 3 \\\\\na \u0026 b \u0026 c\n\\end{bmatrix}\n$$\n\n## RTL\nMore information on configuring RTL languages like Arabic in the [config](config.md) page.\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/callouts":{"title":"Callouts","content":"\n## Callout support\n\nQuartz supports the same Admonition-callout syntax as Obsidian.\n\nThis includes\n- 12 Distinct callout types (each with several aliases)\n- Collapsable callouts\n\nSee [documentation on supported types and syntax here](https://help.obsidian.md/How+to/Use+callouts#Types).\n\n## Showcase\n\n\u003e [!EXAMPLE] Examples\n\u003e\n\u003e Aliases: example\n\n\u003e [!note] Notes\n\u003e\n\u003e Aliases: note\n\n\u003e [!abstract] Summaries \n\u003e\n\u003e Aliases: abstract, summary, tldr\n\n\u003e [!info] Info \n\u003e\n\u003e Aliases: info, todo\n\n\u003e [!tip] Hint \n\u003e\n\u003e Aliases: tip, hint, important\n\n\u003e [!success] Success \n\u003e\n\u003e Aliases: success, check, done\n\n\u003e [!question] Question \n\u003e\n\u003e Aliases: question, help, faq\n\n\u003e [!warning] Warning \n\u003e\n\u003e Aliases: warning, caution, attention\n\n\u003e [!failure] Failure \n\u003e\n\u003e Aliases: failure, fail, missing\n\n\u003e [!danger] Error\n\u003e\n\u003e Aliases: danger, error\n\n\u003e [!bug] Bug\n\u003e\n\u003e Aliases: bug\n\n\u003e [!quote] Quote\n\u003e\n\u003e Aliases: quote, cite\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/config":{"title":"Configuration","content":"\n## Configuration\nQuartz is designed to be extremely configurable. You can find the bulk of the configuration scattered throughout the repository depending on how in-depth you'd like to get.\n\nThe majority of configuration can be found under `data/config.yaml`. 
An annotated example configuration is shown below.\n\n```yaml {title=\"data/config.yaml\"}\n# The name to display in the footer\nname: Jacky Zhao\n\n# whether to globally show the table of contents on each page\n# this can be turned off on a per-page basis by adding this to the\n# front-matter of that note\nenableToc: true\n\n# whether to by-default open or close the table of contents on each page\nopenToc: false\n\n# whether to display on-hover link preview cards\nenableLinkPreview: true\n\n# whether to render titles for code blocks\nenableCodeBlockTitle: true \n\n# whether to render copy buttons for code blocks\nenableCodeBlockCopy: true \n\n# whether to render callouts\nenableCallouts: true\n\n# whether to try to process Latex\nenableLatex: true\n\n# whether to enable single-page-app style rendering\n# this prevents flashes of unstyled content and improves\n# smoothness of Quartz. More info in issue #109 on GitHub\nenableSPA: true\n\n# whether to render a footer\nenableFooter: true\n\n# whether backlinks of pages should show the context in which\n# they were mentioned\nenableContextualBacklinks: true\n\n# whether to show a section of recent notes on the home page\nenableRecentNotes: false\n\n# whether to display an 'edit' button next to the last edited field\n# that links to github\nenableGitHubEdit: true\nGitHubLink: https://github.com/jackyzha0/quartz/tree/hugo/content\n\n# whether to use Operand to power semantic search\n# IMPORTANT: replace this API key with your own if you plan on using\n# Operand search!\nenableSemanticSearch: false\noperandApiKey: \"REPLACE-WITH-YOUR-OPERAND-API-KEY\"\n\n# page description used for SEO\ndescription:\n Host your second brain and digital garden for free. Quartz features extremely fast full-text search,\n Wikilink support, backlinks, local graph, tags, and link previews.\n\n# title of the home page (also for SEO)\npage_title:\n \"🪴 Quartz 3.2\"\n\n# links to show in the footer\nlinks:\n - link_name: Twitter\n link: https://twitter.com/_jzhao\n - link_name: Github\n link: https://github.com/jackyzha0\n```\n\n### Code Block Titles\nTo add code block titles with Quartz:\n\n1. Ensure that code block titles are enabled in Quartz's configuration:\n\n ```yaml {title=\"data/config.yaml\", linenos=false}\n enableCodeBlockTitle: true\n ```\n\n2. Add the `title` attribute to the desired [code block\n fence](https://gohugo.io/content-management/syntax-highlighting/#highlighting-in-code-fences):\n\n ```markdown {linenos=false}\n ```yaml {title=\"data/config.yaml\"}\n enableCodeBlockTitle: true # example from step 1\n ```\n ```\n\n**Note** that if `{title=\u003cmy-title\u003e}` is included, and code block titles are not\nenabled, no errors will occur, and the title attribute will be ignored.\n\n### HTML Favicons\nIf you would like to customize the favicons of your Quartz-based website, you \ncan add them to the `data/config.yaml` file. The **default** without any set \n`favicon` key is:\n\n```html {title=\"layouts/partials/head.html\", linenostart=15}\n\u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n```\n\nThe default can be overridden by defining a value to the `favicon` key in your \n`data/config.yaml` file. For example, here is a `List[Dictionary]` example format, which is\nequivalent to the default:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon:\n - { rel: \"shortcut icon\", href: \"icon.png\", type: \"image/png\" }\n# - { ... 
} # Repeat for each additional favicon you want to add\n```\n\nIn this format, the keys are identical to their HTML representations.\n\nIf you plan to add multiple favicons generated by a website (see list below), it\nmay be easier to define it as HTML. Here is an example which appends the \n**Apple touch icon** to Quartz's default favicon:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon: |\n \u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n \u003clink rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\"\u003e\n```\n\nThis second favicon will now be used as a web page icon when someone adds your \nwebpage to the home screen of their Apple device. If you are interested in more \ninformation about the current and past standards of favicons, you can read \n[this article](https://www.emergeinteractive.com/insights/detail/the-essentials-of-favicons/).\n\n**Note** that all generated favicon paths, defined by the `href` \nattribute, are relative to the `static/` directory.\n\n### Graph View\nTo customize the Interactive Graph view, you can poke around `data/graphConfig.yaml`.\n\n```yaml {title=\"data/graphConfig.yaml\"}\n# if true, a Global Graph will be shown on home page with full width, no backlink.\n# A different set of Local Graphs will be shown on sub pages.\n# if false, Local Graph will be default on every page as usual\nenableGlobalGraph: false\n\n### Local Graph ###\nlocalGraph:\n # whether automatically generate a legend\n enableLegend: false\n \n # whether to allow dragging nodes in the graph\n enableDrag: true\n \n # whether to allow zooming and panning the graph\n enableZoom: true\n \n # how many neighbours of the current node to show (-1 is all nodes)\n depth: 1\n \n # initial zoom factor of the graph\n scale: 1.2\n \n # how strongly nodes should repel each other\n repelForce: 2\n\n # how strongly should nodes be attracted to the center of gravity\n centerForce: 1\n\n # what the default link length should be\n linkDistance: 1\n \n # how big the node labels should be\n fontSize: 0.6\n \n # scale at which to start fading the labes on nodes\n opacityScale: 3\n\n### Global Graph ###\nglobalGraph:\n\t# same settings as above\n\n### For all graphs ###\n# colour specific nodes path off of their path\npaths:\n - /moc: \"#4388cc\"\n```\n\n\n## Styling\nWant to go even more in-depth? You can add custom CSS styling and change existing colours through editing `assets/styles/custom.scss`. If you'd like to target specific parts of the site, you can add ids and classes to the HTML partials in `/layouts/partials`. \n\n### Partials\nPartials are what dictate what gets rendered to the page. Want to change how pages are styled and structured? You can edit the appropriate layout in `/layouts`.\n\nFor example, the structure of the home page can be edited through `/layouts/index.html`. To customize the footer, you can edit `/layouts/partials/footer.html`\n\nMore info about partials on [Hugo's website.](https://gohugo.io/templates/partials/)\n\nStill having problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n\n## Language Support\n[CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) comes out of the box with Quartz.\n\nWant to support languages that read from right-to-left (like Arabic)? 
Hugo (and by proxy, Quartz) supports this natively.\n\nFollow the steps [Hugo provides here](https://gohugo.io/content-management/multilingual/#configure-languages) and modify your `config.toml`\n\nFor example:\n\n```toml\ndefaultContentLanguage = 'ar'\n[languages]\n [languages.ar]\n languagedirection = 'rtl'\n title = 'مدونتي'\n weight = 1\n```\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/custom-Domain":{"title":"Custom Domain","content":"\n### Registrar\nThis step is only applicable if you are using a **custom domain**! If you are using a `\u003cYOUR-USERNAME\u003e.github.io` domain, you can skip this step.\n\nFor this last bit to take effect, you also need to create a CNAME record with the DNS provider you register your domain with (i.e. NameCheap, Google Domains).\n\nGitHub has some [documentation on this](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site), but the tldr; is to\n\n1. Go to your forked repository (`github.com/\u003cYOUR-GITHUB-USERNAME\u003e/quartz`) settings page and go to the Pages tab. Under \"Custom domain\", type your custom domain, then click **Save**.\n2. Go to your DNS Provider and create a CNAME record that points from your domain to `\u003cYOUR-GITHUB-USERNAME.github.io.` (yes, with the trailing period).\n\n\t![Example Configuration for Quartz](google-domains.png)*Example Configuration for Quartz*\n3. Wait 30 minutes to an hour for the network changes to kick in.\n4. Done!","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/editing":{"title":"Editing Content in Quartz","content":"\n## Editing \nQuartz runs on top of [Hugo](https://gohugo.io/) so all notes are written in [Markdown](https://www.markdownguide.org/getting-started/).\n\n### Folder Structure\nHere's a rough overview of what's what.\n\n**All content in your garden can found in the `/content` folder.** To make edits, you can open any of the files and make changes directly and save it. You can organize content into any folder you'd like.\n\n**To edit the main home page, open `/content/_index.md`.**\n\nTo create a link between notes in your garden, just create a normal link using Markdown pointing to the document in question. Please note that **all links should be relative to the root `/content` path**. \n\n```markdown\nFor example, I want to link this current document to `notes/config.md`.\n[A link to the config page](notes/config.md)\n```\n\nSimilarly, you can put local images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\nYou can also use wikilinks if that is what you are more comfortable with!\n\n### Front Matter\nHugo is picky when it comes to metadata for files. Make sure that your title is double-quoted and that you have a title defined at the top of your file like so. You can also add tags here as well.\n\n```yaml\n---\ntitle: \"Example Title\"\ntags:\n- example-tag\n---\n\nRest of your content here...\n```\n\n### Obsidian\nI recommend using [Obsidian](http://obsidian.md/) as a way to edit and grow your digital garden. 
It comes with a really nice editor and graphical interface to preview all of your local files.\n\nThis step is **highly recommended**.\n\n\u003e 🔗 Step 3: [How to setup your Obsidian Vault to work with Quartz](obsidian.md)\n\n## Previewing Changes\nThis step is purely optional and mostly for those who want to see the published version of their digital garden locally before opening it up to the internet. This is *highly recommended* but not required.\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)\n\nFor those who like to live life more on the edge, viewing the garden through Obsidian gets you pretty close to the real thing.\n\n## Publishing Changes\nNow that you know the basics of managing your digital garden using Quartz, you can publish it to the internet!\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/hosting":{"title":"Deploying Quartz to the Web","content":"\n## Hosting on GitHub Pages\nQuartz is designed to be effortless to deploy. If you forked and cloned Quartz directly from the repository, everything should already be good to go! Follow the steps below.\n\n### Enable GitHub Actions\nBy default, GitHub disables workflows from running automatically on Forked Repostories. Head to the 'Actions' tab of your forked repository and Enable Workflows to setup deploying your Quartz site!\n\n![Enable GitHub Actions](github-actions.png)*Enable GitHub Actions*\n\n### Enable GitHub Pages\n\nHead to the 'Settings' tab of your forked repository and go to the 'Pages' tab.\n\n1. (IMPORTANT) Set the source to deploy from `master` (and not `hugo`) using `/ (root)`\n2. Set a custom domain here if you have one!\n\n![Enable GitHub Pages](github-pages.png)*Enable GitHub Pages*\n\n### Pushing Changes\nTo see your changes on the internet, we need to push it them to GitHub. Quartz is a `git` repository so updating it is the same workflow as you would follow as if it were just a regular software project.\n\n```shell\n# Navigate to Quartz folder\ncd \u003cpath-to-quartz\u003e\n\n# Commit all changes\ngit add .\ngit commit -m \"message describing changes\"\n\n# Push to GitHub to update site\ngit push origin hugo\n```\n\nNote: we specifically push to the `hugo` branch here. Our GitHub action automatically runs everytime a push to is detected to that branch and then updates the `master` branch for redeployment.\n\n### Setting up the Site\nNow let's get this site up and running. Never hosted a site before? No problem. Have a fancy custom domain you already own or want to subdomain your Quartz? That's easy too.\n\nHere, we take advantage of GitHub's free page hosting to deploy our site. Change `baseURL` in `/config.toml`. \n\nMake sure that your `baseURL` has a trailing `/`!\n\n[Reference `config.toml` here](https://github.com/jackyzha0/quartz/blob/hugo/config.toml)\n\n```toml\nbaseURL = \"https://\u003cYOUR-DOMAIN\u003e/\"\n```\n\nIf you are using this under a subdomain (e.g. `\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz`), include the trailing `/`. **You need to do this especially if you are using GitHub!**\n\n```toml\nbaseURL = \"https://\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz/\"\n```\n\nChange `cname` in `/.github/workflows/deploy.yaml`. 
Again, if you don't have a custom domain to use, you can use `\u003cYOUR-USERNAME\u003e.github.io`.\n\nPlease note that the `cname` field should *not* have any path `e.g. end with /quartz` or have a trailing `/`.\n\n[Reference `deploy.yaml` here](https://github.com/jackyzha0/quartz/blob/hugo/.github/workflows/deploy.yaml)\n\n```yaml {title=\".github/workflows/deploy.yaml\"}\n- name: Deploy \n uses: peaceiris/actions-gh-pages@v3 \n with: \n\tgithub_token: ${{ secrets.GITHUB_TOKEN }} # this can stay as is, GitHub fills this in for us!\n\tpublish_dir: ./public \n\tpublish_branch: master\n\tcname: \u003cYOUR-DOMAIN\u003e\n```\n\nHave a custom domain? [Learn how to set it up with Quartz ](custom%20Domain.md).\n\n### Ignoring Files\nOnly want to publish a subset of all of your notes? Don't worry, Quartz makes this a simple two-step process.\n\n❌ [Excluding pages from being published](ignore%20notes.md)\n\n---\n\nNow that your Quartz is live, let's figure out how to make Quartz really *yours*!\n\n\u003e Step 6: 🎨 [Customizing Quartz](config.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/ignore-notes":{"title":"Ignoring Notes","content":"\n### Quartz Ignore\nEdit `ignoreFiles` in `config.toml` to include paths you'd like to exclude from being rendered.\n\n```toml\n...\nignoreFiles = [ \n \"/content/templates/*\", \n \"/content/private/*\", \n \"\u003cyour path here\u003e\"\n]\n```\n\n`ignoreFiles` supports the use of Regular Expressions (RegEx) so you can ignore patterns as well (e.g. ignoring all `.png`s by doing `\\\\.png$`).\nTo ignore a specific file, you can also add the tag `draft: true` to the frontmatter of a note.\n\n```markdown\n---\ntitle: Some Private Note\ndraft: true\n---\n...\n```\n\nMore details in [Hugo's documentation](https://gohugo.io/getting-started/configuration/#ignore-content-and-data-files-when-rendering).\n\n### Global Ignore\nHowever, just adding to the `ignoreFiles` will only prevent the page from being access through Quartz. If you want to prevent the file from being pushed to GitHub (for example if you have a public repository), you need to also add the path to the `.gitignore` file at the root of the repository.","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/obsidian":{"title":"Obsidian Vault Integration","content":"\n## Setup\nObsidian is the preferred way to use Quartz. You can either create a new Obsidian Vault or link one that your already have.\n\n### New Vault\nIf you don't have an existing Vault, [download Obsidian](https://obsidian.md/) and create a new Vault in the `/content` folder that you created and cloned during the [setup](setup.md) step.\n\n### Linking an existing Vault\nThe easiest way to use an existing Vault is to copy all of your files (directory and hierarchies intact) into the `/content` folder.\n\n## Settings\nGreat, now that you have your Obsidian linked to your Quartz, let's fix some settings so that they play well.\n\n1. Under Options \u003e Files and Links, set the New link format to always use Absolute Path in Vault.\n2. Go to Settings \u003e Files \u0026 Links \u003e Turn \"on\" automatically update internal links.\n\n![Obsidian Settings](obsidian-settings.png)*Obsidian Settings*\n\n## Templates\nInserting front matter everytime you want to create a new Note gets annoying really quickly. 
Luckily, Obsidian supports templates which makes inserting new content really easily.\n\n**If you decide to overwrite the `/content` folder completely, don't remove the `/content/templates` folder!**\n\nHead over to Options \u003e Core Plugins and enable the Templates plugin. Then go to Options \u003e Hotkeys and set a hotkey for 'Insert Template' (I recommend `[cmd]+T`). That way, when you create a new note, you can just press the hotkey for a new template and be ready to go!\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/philosophy":{"title":"Quartz Philosophy","content":"\n\u003e “[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” — Richard Hamming\n\n## Why Quartz?\nHosting a public digital garden isn't easy. There are an overwhelming number of tutorials, resources, and guides for tools like [Notion](https://www.notion.so/), [Roam](https://roamresearch.com/), and [Obsidian](https://obsidian.md/), yet none of them have super easy to use *free* tools to publish that garden to the world.\n\nI've personally found that\n1. It's nice to access notes from anywhere\n2. Having a public digital garden invites open conversations\n3. It makes keeping personal notes and knowledge *playful and fun*\n\nI was really inspired by [Bianca](https://garden.bianca.digital/) and [Joel](https://joelhooks.com/digital-garden)'s digital gardens and wanted to try making my own.\n\n**The goal of Quartz is to make hosting your own public digital garden free and simple.** You don't even need your own website. Quartz does all of that for you and gives your own little corner of the internet.\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/preview-changes":{"title":"Preview Changes","content":"\nIf you'd like to preview what your Quartz site looks like before deploying it to the internet, here's exactly how to do that!\n\nNote that both of these steps need to be completed.\n\n## Install `hugo-obsidian`\nThis step will generate the list of backlinks for Hugo to parse. Ensure you have [Go](https://golang.org/doc/install) (\u003e= 1.16) installed.\n\n```bash\n# Install and link `hugo-obsidian` locally\ngo install github.com/jackyzha0/hugo-obsidian@latest\n```\n\nIf you are running into an error saying that `command not found: hugo-obsidian`, make sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize hugo-obsidian as an executable.\n\nAfterwards, start the Hugo server as shown above and your local backlinks and interactive graph should be populated!\n\n## Installing Hugo\nHugo is the static site generator that powers Quartz. [Install Hugo with \"extended\" Sass/SCSS version](https://gohugo.io/getting-started/installing/) first. Then,\n\n```bash\n# Navigate to your local Quartz folder\ncd \u003clocation-of-your-local-quartz\u003e\n\n# Start local server\nmake serve\n\n# View your site in a browser at http://localhost:1313/\n```\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/search":{"title":"Search","content":"\nQuartz supports two modes of searching through content.\n\n## Full-text\nFull-text search is the default in Quartz. It produces results that *exactly* match the search query. 
This is easier to setup but usually produces lower quality matches.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: false\n```\n\n## Natural Language\nNatural language search is powered by [Operand](https://operand.ai/). It understands language like a person does and finds results that best match user intent. In this sense, it is closer to how Google Search works.\n\nNatural language search tends to produce higher quality results than full-text search.\n\nHere's how to set it up.\n\n1. Create an Operand Account on [their website](https://operand.ai/).\n2. Go to Dashboard \u003e Settings \u003e Integrations.\n3. Follow the steps to setup the GitHub integration. Operand needs access to GitHub in order to index your digital garden properly!\n4. Head over to Dashboard \u003e Objects and press `(Cmd + K)` to open the omnibar and select 'Create Collection'.\n\t1. Set the 'Collection Label' to something that will help you remember it.\n\t2. You can leave the 'Parent Collection' field empty.\n5. Click into your newly made Collection.\n\t1. Press the 'share' button that looks like three dots connected by lines.\n\t2. Set the 'Interface Type' to `object-search` and click 'Create'.\n\t3. This will bring you to a new page with a search bar. Ignore this for now.\n6. Go back to Dashboard \u003e Settings \u003e API Keys and find your Quartz-specific Operand API key under 'Other keys'.\n\t1. Copy the key (which looks something like `0e733a7f-9b9c-48c6-9691-b54fa1c8b910`).\n\t2. Open `data/config.yaml`. Set `enableSemanticSearch` to `true` and `operandApiKey` to your copied key.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: true\noperandApiKey: \"0e733a7f-9b9c-48c6-9691-b54fa1c8b910\"\n```\n7. Make a commit and push your changes to GitHub. See the [[hosting|hosting]] page if you haven't done this already.\n\t1. This step is *required* for Operand to be able to properly index your content. \n\t2. Head over to Dashboard \u003e Objects and select the collection that you made earlier\n8. Press `(Cmd + K)` to open the omnibar again and select 'Create GitHub Repo'\n\t1. Set the 'Repository Label' to `Quartz`\n\t2. Set the 'Repository Owner' to your GitHub username\n\t3. Set the 'Repository Ref' to `master`\n\t4. Set the 'Repository Name' to the name of your repository (usually just `quartz` if you forked the repository without changing the name)\n\t5. Leave 'Root Path' and 'Root URL' empty\n9. Wait for your repository to index and enjoy natural language search in Quartz! Operand refreshes the index every 2h so all you need to do is just push to GitHub to update the contents in the search.","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/setup":{"title":"Setup","content":"\n## Making your own Quartz\nSetting up Quartz requires a basic understanding of `git`. If you are unfamiliar, [this resource](https://resources.nwplus.io/2-beginner/how-to-git-github.html) is a great place to start!\n\n### Forking\n\u003e A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.\n\nNavigate to the GitHub repository for the Quartz project:\n\n📁 [Quartz Repository](https://github.com/jackyzha0/quartz)\n\nThen, Fork the repository into your own GitHub account. If you don't have an account, you can make on for free [here](https://github.com/join). 
More details about forking a repo can be found on [GitHub's documentation](https://docs.github.com/en/get-started/quickstart/fork-a-repo).\n\n### Cloning\nAfter you've made a fork of the repository, you need to download the files locally onto your machine. Ensure you have `git`, then type the following command replacing `YOUR-USERNAME` with your GitHub username.\n\n```shell\ngit clone https://github.com/YOUR-USERNAME/quartz\n```\n\n## Editing\nGreat! Now you have everything you need to start editing and growing your digital garden. If you're ready to start writing content already, check out the recommended flow for editing notes in Quartz.\n\n\u003e ✏️ Step 2: [Editing Notes in Quartz](editing.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/showcase":{"title":"Showcase","content":"\nWant to see what Quartz can do? Here are some cool community gardens :)\n\n- [Quartz Documentation (this site!)](https://quartz.jzhao.xyz/)\n- [Jacky Zhao's Garden](https://jzhao.xyz/)\n- [Scaling Synthesis - A hypertext research notebook](https://scalingsynthesis.com/)\n- [AWAGMI Intern Notes](https://notes.awagmi.xyz/)\n- [Shihyu's PKM](https://shihyuho.github.io/pkm/)\n- [Chloe's Garden](https://garden.chloeabrasada.online/)\n- [SlRvb's Site](https://slrvb.github.io/Site/)\n- [Course notes for Information Technology Advanced Theory](https://a2itnotes.github.io/quartz/)\n- [Brandon Boswell's Garden](https://brandonkboswell.com)\n- [Siyang's Courtyard](https://siyangsun.github.io/courtyard/)\n\nIf you want to see your own on here, submit a [Pull Request adding yourself to this file](https://github.com/jackyzha0/quartz/blob/hugo/content/notes/showcase.md)!\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/troubleshooting":{"title":"Troubleshooting and FAQ","content":"\nStill having trouble? Here are a list of common questions and problems people encounter when installing Quartz.\n\nWhile you're here, join our [Discord](https://discord.gg/cRFFHYye7t) :)\n\n### Does Quartz have Latex support?\nYes! See [CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) for a brief demo.\n\n### Can I use \\\u003cObsidian Plugin\\\u003e in Quartz?\nUnless it produces direct Markdown output in the file, no. There currently is no way to bundle plugin code with Quartz.\n\nThe easiest way would be to add your own HTML partial that supports the functionality you are looking for.\n\n### My GitHub pages is just showing the README and not Quartz\nMake sure you set the source to deploy from `master` (and not `hugo`) using `/ (root)`! See more in the [hosting](hosting.md) guide\n\n### Some of my pages have 'January 1, 0001' as the last modified date\nThis is a problem caused by `git` treating files as case-insensitive by default and some of your posts probably have capitalized file names. You can turn this off in your Quartz by running this command.\n\n```shell\n# in the root of your Quartz (same folder as config.toml)\ngit config core.ignorecase true\n\n# or globally (not recommended)\ngit config --global core.ignorecase true\n```\n\n### Can I publish only a subset of my pages?\nYes! Quartz makes selective publishing really easy. Heres a guide on [excluding pages from being published](ignore%20notes.md).\n\n### Can I host this myself and not on GitHub Pages?\nYes! All built files can be found under `/public` in the `master` branch. 
More details under [hosting](hosting.md).\n\n### `command not found: hugo-obsidian`\nMake sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize `hugo-obsidian` as an executable.\n\n```shell\n# Add the following 2 lines to your ~/.bash_profile\nexport GOPATH=/Users/$USER/go\nexport PATH=$GOPATH/bin:$PATH\n\n# In your current terminal, to reload the session\nsource ~/.bash_profile\n```\n\n### How come my notes aren't being rendered?\nYou probably forgot to include front matter in your Markdown files. You can either setup [Obsidian](obsidian.md) to do this for you or you need to manually define it. More details in [the 'how to edit' guide](editing.md).\n\n### My custom domain isn't working!\nWalk through the steps in [the hosting guide](hosting.md) again. Make sure you wait 30 min to 1 hour for changes to take effect.\n\n### How do I setup Google Analytics?\nYou can edit it in `config.toml` and either use a V3 (UA-) or V4 (G-) tag.\n\n### How do I change the content on the home page?\nTo edit the main home page, open `/content/_index.md`.\n\n### How do I change the colours?\nYou can change the theme by editing `assets/custom.scss`. More details on customization and themeing can be found in the [customization guide](config.md).\n\n### How do I add images?\nYou can put images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\n### My Interactive Graph and Backlinks aren't up to date\nBy default, the `linkIndex.json` (which Quartz needs to generate the Interactive Graph and Backlinks) are not regenerated locally. To set that up, see the guide on [local editing](editing.md)\n\n### Can I use React/Vue/some other framework?\nNot out of the box. You could probably make it work by editing `/layouts/_default/single.html` but that's not what Quartz is designed to work with. 99% of things you are trying to do with those frameworks you can accomplish perfectly fine using just vanilla HTML/CSS/JS.\n\n## Still Stuck?\nQuartz isn't perfect! If you're still having troubles, file an issue in the GitHub repo with as much information as you can reasonably provide. Alternatively, you can message me on [Twitter](https://twitter.com/_jzhao) and I'll try to get back to you as soon as I can.\n\n🐛 [Submit an Issue](https://github.com/jackyzha0/quartz/issues)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/updating":{"title":"Updating","content":"\nHaven't updated Quartz in a while and want all the cool new optimizations? On Unix/Mac systems you can run the following command for a one-line update! This command will show you a log summary of all commits since you last updated, press `q` to acknowledge this. Then, it will show you each change in turn and press `y` to accept the patch or `n` to reject it. Usually you should press `y` for most of these unless it conflicts with existing changes you've made! 
\n\n```shell\nmake update\n```\n\nOr, if you don't want the interactive parts and just want to force update your local garden (this assumes that you are okay with some of your personalizations being overridden!)\n\n```shell\nmake update-force\n```\n\nOr, manually check out the changes yourself.\n\n\u003e [!warning] Warning!\n\u003e\n\u003e If you customized the files in `data/`, or anything inside `layouts/`, your customization may be overwritten!\n\u003e Make sure you have a copy of these changes if you don't want to lose them.\n\n\n```shell\n# add Quartz as a remote host\ngit remote add upstream git@github.com:jackyzha0/quartz.git\n\n# index and fetch changes\ngit fetch upstream\ngit checkout -p upstream/hugo -- layouts .github Makefile assets/js assets/styles/base.scss assets/styles/darkmode.scss config.toml data \n```\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/requirements/overview":{"title":"Logos Network Requirements Overview","content":"\nThis document describes the requirements of the Logos Network.\n\n\u003e Network sovereignty is an extension of the collective sovereignty of the individuals within. \n\n\u003e Meaningful participation in the network should be achievable with affordable and accessible consumer-grade hardware.\n\n\u003e Privacy by default. \n\n\u003e A given CiC should have the option to gracefully exit the network and operate on its own.\n\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["requirements"]},"/private/roadmap/consensus/candidates/carnot/FAQ":{"title":"Frequently Asked Questions","content":"\n## Network Requirements and Assumptions\n\n### What assumptions do we need Waku to fulfill? - Corey\n\u003e `Moh:` Waku needs to fulfill the following requirements, taken from the Carnot paper:\n\n\u003e **Definition 3** (Probabilistic Reliable Dissemination). _After the GST, and when the leader is correct, all the correct nodes deliver the proposal sent by the leader (w.h.p)._\n\n\u003e **Definition 4** (Probabilistic Fulfillment). _After the GST, and when the current and previous leaders are correct, the number of votes collected by the current leader is $2c+1$ (w.h.p)._\n\n## Tradeoffs\n\n### I think the main clear disadvantage of such a scheme is the added latency of the multiple layers. - Alvaro\n\n\u003e `Moh:` The added latency will be O(log(n/C)), where C is the committee size. But I guess it will be hard to avoid it. Though it also depends on how fast the network layer (potentially Waku) propagates messages and also on the execution time of the transaction.\n\n\u003e `Alvaro:` Well IIUC the only latency we are introducing is directly proportional to the levels of subcommittee nesting (i.e. the log(n/C)), which is understandably the price to pay. We have to make sure though that what we gain by introducing this is really worth the extra cost vs the typical committee formation via randao or perhaps VDFs.\n\n\u003e `Moh:` Again, the typical committee formation with randao can reduce their wait time value to match our latency, but then it becomes vulnerable and fails if the network latency becomes greater than their slot interval. If they keep it too large it may not fail but becomes slow. We won't have that problem. If an adversary has the power to slow down the network then their liveness will fail, whereas we won't have that issue.\n\n## How would you compare Aptos and Carnot? - Alvaro\n\n\u003e `Moh:` Aptos is a variant of DiemBFT and Sui is based on Narwhal; both cannot scale to more than a few hundred nodes. 
That is why they achieve that low latency.\n\n\u003e `Alvaro:` Yes, so they need to select a committee of that size in order to operate at that latency. What's wrong with selecting a committee vs Carnot's solution? This I'm asking genuinely to understand and because everyone will ask this question when we release.\n\n\u003e `Moh:` When you select a committee you have to wait for a time slot to make sure the result of consensus has propagated. Again, a strong synchrony assumption (slot time), formation of forks, and an increased PoS attack vector come into play.\nWithin a committee the protocol does not need a wait time, but for its results to get propagated, if scalability is to be achieved, then either a wait time has to be added or signatures have to be collected from thousands of nodes.\n\n\u003e `Alvaro:` Can you elaborate?\n\n\u003e `Moh:` Ethereum (and any other protocol that runs the consensus in a single committee selected from a large group of nodes) has a wait time so that the output of the consensus propagates to all honest nodes before the next committee is selected. Else the next committee will fail or only forks will be formed and the chain length won't increase. But since this wait time, as stated, increases latency and makes the protocol vulnerable, Ethereum wants to avoid it to achieve responsiveness. To avoid the wait time (and add responsiveness) a protocol has to collect attestation signatures from 2/3rds of all nodes (not a single committee) to move to the second round (Carnot is already responsive). But aggregating and verifying thousands of signatures is expensive and time-consuming. This is why they are working to improve BLS signatures. Instead we have changed the consensus protocol in such a way that a small number of signatures need to be aggregated and verified to achieve responsiveness and fast finality. We can further improve performance by using the improved BLS signatures.\n\n\u003e One cannot achieve fast finality while running the consensus in a small committee, because attestation of a block within a single committee is not enough. This block can be averted if the leader of the next committee has not seen it. Therefore, there should be enough delay so that all honest nodes can see it. This is why we have this wait/slot time. Another issue is that a malicious leader from the next chosen committee can also avert a block of an honest leader, hence preventing honest leaders from getting rewards. If blocks of honest leaders are averted for a long time, the stake of malicious leaders will increase. Moreover, malicious leaders can delay blocks of honest nodes by making forks and averting them. Addressing these issues will further complicate the protocol, while still lacking fast finality.\n\n## Data Distribution\n\n### What failure rate of erasure code transmission are we expecting? Basically, what are the EC coding parameters that we expect to be sending such that we have some failure rate of transmission? Has that been looked into? - Dmitriy\n\u003e `Moh:` This is a great question and it points to the tension between failure rate and overhead. We have briefly looked into this (today Marcin @madxor and I discussed such cases), but we haven’t thoroughly analyzed this. In our case, the rate of failure also depends on committee size. We are looking at failure probabilities between $10^{-3}$ and $10^{-6}$. And in this case, the coding overhead can be somewhere between roughly 200% and 500%. 
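\n\nAs a rough, purely illustrative sketch of this failure-rate vs. overhead trade-off (the chunk counts and the per-chunk loss probability used here are assumptions for illustration, not parameters from the Carnot analysis):\n\n```python\n# Illustrative only: probability of failing to rebuild a block when a node\n# needs any `needed` out of `sent` independently delivered coded chunks and\n# each chunk is lost with probability `p_loss` (assumed placeholder values).\nfrom math import comb\n\ndef p_decode_failure(sent, needed, p_loss):\n    # P(fewer than `needed` of the `sent` chunks arrive)\n    return sum(\n        comb(sent, k) * (1 - p_loss) ** k * p_loss ** (sent - k)\n        for k in range(needed)\n    )\n\nneeded = 251   # e.g. chunks required to rebuild a block in a 500-node committee\np_loss = 0.5   # assumed per-chunk loss probability\nfor sent in (300, 600, 900, 1200):\n    print(f'{sent} chunks sent ({sent / needed:.1f}x overhead): '\n          f'failure probability ~ {p_decode_failure(sent, needed, p_loss):.2e}')\n```\n\n\u003e 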
This means for a committee size of 500 (while expecting receipt of messages from 251 correct nodes), for a failure rate of $10^{-6}$ a single node has to send \u003e 6Mb of data for a 1Mb of actual data. Though 5x overhead is large, it still prevent us from sending/receiving 500 Mb of data in return for a failure probability of 1 proposal out of 1 million. From the protocol perspective, we can address EC failures in multiple ways: a: Since the root committee only forwards the coded chunks only when they have successfully rebuilt the block. This means the root committee can be contacted to download additional coded chunks to decode the block. b: We allow this failure and let the leader be replaced but since there is proof that the failure is due to the reason that a decoder failed to reconstruct the block, therefore, the leader cannot be punished (if we chose to employ punishment in PoS). \n\n### How much data should a given block be. Are there limits on this and if so, what are they and what do they depend on? - Dmitriy\n\u003e `Moh:` This question can be answered during simulations and experiments over links of different bandwidths and latencies. We will test the protocol performances with different block sizes. As we know increasing the block size results in increased throughput as well as latency. What is the most appropriate block size can be determined once we observe the tradeoff between throughput vs latency.\n\n## Signature Propagation\n\n### Who sends the signatures up from a given committee? Do that have any leadered power within the committee? - Tanguy\n\u003e `Moh:` Each node in a committee multicasts its vote to all members of the parent committee. Since the size of the vote is small the bit complexity will be low. Introducing a leader within each committee will create a single point of failure within each committee. This is why we avoid maintaining a leader within each committee\n\n## Network Scale\n\n### What is our expected minimum number of nodes within the network? - Dmitriy\n\u003e `Moh:` For a small number of nodes we can have just a single committee. But I am not sure how many nodes will join our network \n\n## Byzantine Behavior\n\n### Can we also consider a flavor that adds attestation/attribution to misbehaving nodes? That will come at a price but there might be a set of use cases which would like to have lower performance with strong attribution. Not saying that it must be part of the initial design, but can be think-through/added later. - Marcin\n\u003e `Moh:` Attestation to misbehaving nodes is part of this protocol. For example, if a node sends an incorrect vote or if a leader proposes an invalid transaction, then this proof will be shared with the network to punish the misbehaving nodes (Though currently this is not part of pseudocode). But it is not possible to reliably prove the attestation of not participation.\n\n\u003e `Marcin:` Great, and definitely, we cannot attest that a node was not participating - I was not suggesting that;). But we can also think about extending the attestation for lazy-participants case (if it’s not already part of the protocol).\n\n\u003e `Moh:` OK, thanks for the clarification 😁 . Of course we can have this feature to forward the proof of participation of successor committees. In the first version of Carnot we had this feature as a sliding window. One could choose the size of the window (in terms of tree levels) for which a node should forward the proof of participation. In the most recent version the size of sliding window is 0. 
And it is 1 for the root committee. It means root committee members have to forward the proof of participation of their child committee members. Since I was able to prove protocol correctness without forwarding the proofs so we avoid it. But it can be part of the protocol without any significant changes in the protocol\n\n\u003e If the proof scheme is efficient ( as the results you presented) in practice and the cost of creating and verifying proofs is not significant then actually adding proofs can be good. But not required.\n\n### Also, how do you reward online validators / punish offline ones if you can't prove at the block level that someone attested or not? - Tanguy\n\u003e `Moh:` This is very tricky and so far no one has done it right (to my knowledge). Current reward mechanism for attestation, favours fast nodes.This means if malicious nodes in the network are fast, they can increase their stake in the network faster than the honest nodes and eventually take control of the network. Or in the case of Ethereum a Byzantine leader can include signature of malicious nodes more frequently in the proof of attestation, hence malicious nodes will be rewarded more frequently. Also let me add that I don't have definite answer to your question currently, but I think by revising the protocol assumptions, incentive mechanism and using a game theoretical approach this problem can be resolved.\n\n\u003e An honest node should wait for a specific number of children votes (to make sure everyone is voting on the same proposal) before voting but does not need to provide any cryptographic proof. Though we build a threshold signature from root committee members and it’s children but not from the whole tree. As long as enough number of nodes follow the the protocol we should be fine. I am working on protocol proofs. Also I think bugs should be discovered during development and testing phase. Changing protocol to detect potential bug might not be a good practice.\n\n### doesn't having randomly distributed malicious nodes (say there is a 20%) increase the odds that over a third of a committee end up being from those malicious ones? It seems intuitive: since a 20% at the global scale is always \u003c1/3, but when randomly distributed there is always non-zero chance they end up in a single group, thus affecting liveness more and more the closer we get to that global 1/3. Consequently, if I'm understanding the algorithm correctly, it would have worse liveness guarantees that classical pBFT, say with a randomly-selected commitee from the total set. - Alvaro\n\n\u003e `Alexander:` We assume that fraction of malicious nodes is $1/4$ and given we chooses comm. sizes, which will depend on total number of nodes, appropriately this guarantees that with high probability we are below $1/3$ in each committee.\n\n\u003e `Alvaro:` ok, but then both the global guarantee is below that current \"standard\" of 1/3 of malicious nodes and even then we are talking about non-zero probabilities that a comm has the power to slow down consensus via requiring reformation of comms (is this right?)\n\n\u003e `Alexander:` This is the price we pay to improve scalability. Also these probabilities of failure can be very low.\n\n### What happens in Carnot when one committee is taken over by \u003e1/3 intra-comm byzantine nodes? - Alvaro\n\n\u003e `Moh:` When there is a failure the overlay is recalculated. 
By gradually increasing the fault tolerance by a small value, the probability of failure of a committee slightly increases, but upon recalculating the correct overlay, inactive nodes that caused the failure of the previous overlay (when no committee has more than 1/3 Byzantine nodes) will be slashed.\n\n\n\n## Synchronicity\n\n### How do we guarantee synchronicity? In particular, how do we avoid that in a big network different nodes see a proposal with 2c+1 votes but different votes, and thus a different random seed? - Giacomo\n\n\u003e `Moh:` The assumption is that there exists some known finite time bound Δ and a special event called GST (Global Stabilization Time) such that:\n\n\u003e The adversary must cause the GST event to eventually happen after some unknown finite time. Any message sent at time x must be delivered by time $\\Delta + \\text{max}(x, GST)$. In the partial synchrony model, the system behaves asynchronously until GST and synchronously after GST.\n\n\u003e Moreover, votes travel one level at a time from tree leaves to the tree root. We only need the proof of votes of root+child committees to conclude with a high probability that the majority of nodes have voted.\n\n### That's a timeout? How does this work exactly without timing assumptions? Trying to find this in the document - Alvaro\n\n\u003e `Moh:` Each committee only verifies the votes of its child committees. Once it has verified 2/3rds of the votes of its child members, it then sends its vote to its parent. In this way each layer of the tree verifies (attests to) the votes of the layer below. Thus, a node does not have to collect and verify 2/3rds of all (potentially thousands of) votes (as done in other responsive BFTs) but only those from its child nodes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["Carnot","consensus"]},"/private/roadmap/consensus/candidates/carnot/overview":{"title":"Carnot Overview","content":"\nCarnot (formerly LogosBFT) is a Byzantine Fault Tolerant (BFT) [consensus](roadmap/consensus/index.md) candidate for the Nomos Network that utilizes Fountain Codes and a committee tree structure to optimize message propagation in the presence of a large number of nodes, while maintaining high throughput and fast finality. More specifically, these are the research contributions in Carnot. To our knowledge, Carnot is the first consensus protocol that achieves all of these properties together:\n\n1. Scalability: Carnot is highly scalable, scaling to thousands of nodes.\n2. Responsiveness: The ability of a protocol to operate at the speed of the network rather than at a fixed maximum delay (block delay, slot time, etc.) is called responsiveness. Responsiveness reduces latency and helps Carnot achieve fast finality. Moreover, it improves Carnot's resilience against adversaries that can slow down network traffic. \n3. Fork avoidance: Carnot avoids the formation of forks in the happy path. Fork formation has the following adverse consequences, which Carnot avoids:\n 1. Wastage of resources on orphan blocks and reduced throughput with increased latency for transactions in orphan blocks\n 2. 
Increased attack vector on PoS as attackers can employ a strategy to force the network to accept their fork resulting in increased stake for adversaries.\n\n- [FAQ](FAQ.md): Here is a page that tracks various questions people have around Carnot.\n\n## Work Streams\n\n### Current State of the Art\nAn ongoing survey of the current state of the art around Consensus Mechanisms and their peripheral dependencies is being conducted by Tuanir, and can be found in the following WIP Overleaf document: \n- [WIP Consensus SoK](https://www.overleaf.com/project/633acc1acaa6ffe456d1ab1f)\n\n### Committee Tree Overlay\nThe basis of Carnot is dependent upon establishing an committee overlay tree structure for message distribution. \n\nAn overview video can be found in the following link: \n- [Carnot Overview by Moh during Offsite](https://drive.google.com/file/d/17L0JPgC0L1ejbjga7_6ZitBfHUe3VO11/view?usp=sharing)\n\nThe details of this are being worked on by Moh and Alexander and can be found in the following overleaf documents: \n- [Moh's draft](https://www.overleaf.com/project/6341fb4a3cf4f20f158afad3)\n- [Alexander's notes on the statistical properties of committees](https://www.overleaf.com/project/630c7e20e56998385e7d8416)\n- [Alexander's python code for computing committee sizes](https://github.com/AMozeika/committees)\n\nA simulation notebook is being worked on by Corey to investigate the properties of various tree overlay structures and estimate their practical performance:\n- [Corey's Overlay Jupyter Notebook](https://github.com/logos-co/scratch/tree/main/corpetty/committee_sim)\n\n#### Failure Recovery\nThere exists a timeout that triggers an overlay reconfiguration. Currently work is being done to calculate the probabilities of another failure based on a given percentage of byzantine nodes within the network. \n- [Recovery Failure Probabilities]() - LINK TO WORK HERE\n\n### Random Beacon\nA random beacon is required to choose a leader and establish a seed for defining the overlay tree. Marcin is working on the various avenues. His previous presentations can be found in the following presentation slides (in chronological order):\n- [Intro to Multiparty Random Beacons](https://cloud.logos.co/index.php/s/b39EmQrZRt5rrfL)\n- [Circles of Trust](https://cloud.logos.co/index.php/s/NXJZX8X8pHg6akw)\n- [Compact Certificates of Knowledge](https://cloud.logos.co/index.php/s/oSJ4ykR4A55QHkG)\n\n### Erasure Coding (LT Codes / Fountain Codes / Raptor Codes)\nIn order to reduce message complexity during propagation, we are investigating the use of Luby Transform (LT) codes, more specifically [Fountain Codes](https://en.wikipedia.org/wiki/Fountain_code), to break up the block to be propagated to validators and recombined by local peers within a committee. \n- [LT Code implementation in Rust](https://github.com/chrido/fountain) - unclear about legal status of LT or Raptor Codes, it is currently under investigation.\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","Carnot"]},"/private/roadmap/consensus/candidates/claro":{"title":"Claro: Consensus Candidate","content":"\n\n\n**Claro** (formerly Glacier) is a consensus candidate for the Logos network that aims to be an improvement to the Avalanche family of consensus protocols. \n\n\n### Implementations\nThe protocol has been implemented in multiple languages to facilitate learning and testing. 
The individual code repositories can be found in the following links:\n- Rust (reference)\n- Python\n- Common Lisp\n\n### Simulations/Experiments/Analysis\nIn order to test the performance of the protocol, and how it stacks up against the Avalanche family of protocols, we have performed a multitude of simulations and experiments under various assumptions. \n- [Alvaro's initial Python implementations and simulation code](https://github.com/status-im/consensus-models)\n\n### Specification\nCurrently the Claro consensus protocol is being drafted into a specification so that other implementations can be created. Its draft resides under [Vac](https://vac.dev) and can be tracked [here](https://github.com/vacp2p/rfc/pull/512/).\n\n### Additional Information\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","claro"]},"/private/roadmap/consensus/development/overview":{"title":"Development Work","content":"","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/development/prototypes":{"title":"Consensus Prototypes","content":"\nConsensus Prototypes is a collection of Rust implementations of the [Consensus Candidates](tags/candidates).\n\n## Tiny Node\n\n\n## Required Roles\n- Lead Developer (filled)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/overview":{"title":"Consensus Work","content":"\nConsensus is the foundation of the network. It is how a group of peer-to-peer nodes agrees on information in a distributed way, particularly in the presence of Byzantine actors. \n\n## Consensus Roadmap\n### Consensus Candidates\n- [Carnot](private/roadmap/consensus/candidates/carnot/overview.md) - Carnot is the current leading consensus candidate for the Nomos network. It is designed to maximize the efficiency of message dissemination while supporting hundreds of thousands of full validators. It gets its name from the thermodynamic concept of the [Carnot Cycle](https://en.wikipedia.org/wiki/Carnot_cycle), which defines the maximal efficiency of work from heat through iterative gas expansions and contractions. \n- [Claro](claro.md) - Claro is a variant of the Avalanche Snow family of protocols, designed to make the decision-making process more efficient by leveraging the concept of \"confidence\" across peer responses. \n\n\n### Theoretical Analysis\n- [snow-family](snow-family.md)\n\n### Development\n- [prototypes](prototypes.md)\n\n## Open Roles\n- [distributed-systems-researcher](distributed-systems-researcher.md)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus"]},"/private/roadmap/consensus/theory/overview":{"title":"Consensus Theory Work","content":"\nThis track of work is dedicated to creating theoretical models of distributed consensus in order to evaluate them from a mathematical standpoint. 
\n\n## Navigation\n- [Snow Family Analysis](snow-family.md)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory"]},"/private/roadmap/consensus/theory/snow-family":{"title":"Theoretical Analysis of the Snow Family of Consensus Protocols","content":"\nIn order to evaluate the properties of the Avalanche family of consensus protocols more rigorously than the original [whitepapers](), we work to create an analytical framework to explore and better understand the theoretical boundaries of the underlying protocols, and under what parameterizations they will break against a set of adversarial strategies.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory","snow"]},"/private/roadmap/networking/carnot-waku-specification":{"title":"A Specification proposal for using Waku for Carnot Consensus","content":"\n##### Definition Reference \n- $k$ - size of a given committee\n- $n_C$ - number of committees in the overlay, or nodes in the tree\n- $d$ - depth of the overlay tree\n- $n_d$ - number of committees at a given depth of the tree\n\n## Motivation\nIn #Carnot, an overlay is created to facilitate message distribution and voting aggregation. This document will focus on the differentiated channels of communication for message distribution. Whether or not voting aggregation and the subsequent traversal back up the tree can utilize the same channels will be investigated later. \n\nThe overlay is described as a binary tree of committees, where an individual in each committee propagates messages to an assigned node in each of its two child committees of the tree, until the leaf nodes have received enough information to reconstitute the proposal block. \n\nThis communication protocol will naturally form \"pools of information streams\" that people will need to listen to in order to do their assigned work:\n- inner committee communication\n- parent-child chain communication\n- initial leader distribution\n\n### **inner committee communication** \nAll members of a given committee will need to gossip with each other in order to re-form the initial proposal block.\n- This results in $n_C$ communication pools, each of size $k$.\n\n### **parent-child chain communication** \nThe formation of the committee and the lifecycle of a chunk of erasure-coded data forms a number of \"parent-child\" chains. \n- If we completely minimize the communication between committees, then this results in $k$ communication pools, each of size $n_C$.\n- It is not clear if individual levels of the tree need to \"execute\" the message to their children, or if the root committee can broadcast to everyone within its assigned parent-chain communication pool at the same time.\n- It is also unclear if individual levels of the tree need to send independent messages to each of their children, or if a unified communication pool can be leveraged at the tree level. This results in $d$ communication pools, each of size $n_d$. \n\n### **initial leader distribution**\nFor each proposal, a leader needs to distribute the erasure-coded proposal block to the root committee.\n- This results in a single communication pool of size $k(+1)$.\n- The $(+1)$ above is the leader, who could also be a part of the root committee. The leader changes with each block proposal, and we seek to minimize the time between leader selection and a round start. Thus, each node in the network must maintain a connection to every node in the root committee (a small illustrative calculation follows below). 
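\n\nTo make the definitions and pool counts above concrete, here is a small illustrative calculation (the committee size and committee count are hypothetical values chosen only for illustration):\n\n```python\n# Hypothetical numbers for a full binary tree of committees, illustrating the\n# communication pools described above.\nimport math\n\nk = 100                      # assumed committee size\nn_C = 15                     # number of committees: a full binary tree (1 + 2 + 4 + 8)\nd = int(math.log2(n_C + 1))  # number of levels in the overlay tree (here: 4)\ntotal_nodes = k * n_C        # 1500 nodes in this toy overlay\n\nprint(f'overlay: {total_nodes} nodes in {n_C} committees of size {k}, {d} levels')\nprint(f'inner committee communication:    {n_C} pools of size {k}')\nprint(f'parent-child chain communication: {k} pools of size {n_C}')\nprint(f'per-level alternative:            {d} pools; level i holds n_d = 2**i committees')\nprint(f'initial leader distribution:      1 pool of size {k} (+1 for the leader)')\n```\n\nThese counts follow directly from the definitions above; the actual values will depend on the committee size and overlay parameters that are eventually chosen.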
\n\n## Proposal\nThis part of the document will attempt to propose using various aspects of Waku, to facilitate both the setup of the above-mentioned communication pools as well as encryption schemes to add a layer of privacy (and hopefully efficiency) to message distribution. \n\nWe seek to minimize the availability of data such that an individual has only the information to do his job and nothing more.\n\nWe also seek to minimize the amount of messages being passed such that eventually everyone can reconstruct the initial proposal block\n\n`???` for Waku-Relay, 6 connections is optimal, resulting in latency ???\n\n`???` Is it better to have multiple pubsub topics with a simple encryption scheme or a single one with a complex encryption scheme\n\nAs there seems to be a lot of dynamic change from one proposal to the next, I would expect [`noise`](https://vac.dev/wakuv2-noise) to be a quality candidate to facilitate the creation of secure ephemeral keys in the to-be proposed encryption scheme. \n\nIt is also of interest how [`contentTopics`](https://rfc.vac.dev/spec/23/) can be leveraged to optimize the communication pools. \n\n## Whiteboard diagram and notes\n![Whiteboard Diagram](images/Overlay-Communications-Brainstorm.png)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku","carnot","networking","consensus"]},"/private/roadmap/networking/overview":{"title":"P2P Networking Overview","content":"\nThis page summarizes the work around the P2P networking layer of the Nomos project.\n\n## Waku\n[Waku](https://waku.org) is an privacy-preserving, ephemeral, peer-to-peer (P2P) messaging suite of protocols which is developed under [Vac](https://vac.dev) and maintained/productionized by the [Logos Collective](https://logos.co). \n\nIt is hopeful that Nomos can leverage the work of the Waku project to provide the P2P networking layer and peripheral services associated with passing messages around the network. Below is a list of the associated work to investigate the use of Waku within the Nomos Project. \n\n### Scalability and Fault-Tolerance Studies\nCurrently, the amount of research and analysis of the scalability of Waku is not sufficient to give enough confidence that Waku can serve as the networking layer for the Nomos project. Thusly, it is our effort to push this analysis forward by investigating the various boundaries of scale for Waku. Below is a list of endeavors in this direction which we hope serves the broader community: \n- [Status' use of Waku study w/ Kurtosis](status-waku-kurtosis.md)\n- [Using Waku for Carnot Overlay](carnot-waku-specification.md)\n\n### Rust implementations\nWe have created and maintain a stop-gap solution to using Waku with the Rust programming language, which is wrapping the [go-waku](https://github.com/status-im/go-waku) library in Rust and publishing it as a crate. This library allows us to do tests with our [Tiny Node](roadmap/development/prototypes.md#Tiny-Node) implementation more quickly while also providing other projects in the ecosystem to leverage Waku within their Rust codebases more quickly. \n\nIt is desired that we implement a more robust and efficient Rust library for Waku, but this is a significant amount of work. 
\n\nLinks:\n- [Rust bindings to go-waku repo](https://github.com/waku-org/waku-rust-bindings)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","overview"]},"/private/roadmap/networking/status-network-agents":{"title":"Status Network Agents Breakdown","content":"\nThis page creates a model of the impact of the various clients within the Status ecosystem by describing their individual contribution to the messages within the Waku network they leverage. \n\nThis model will serve to create a realistic network topology while also informing the appropriate _dimensions of scale_ that are relevant to explore in the [Status Waku scalability study](status-waku-kurtosis.md).\n\nStatus has three main clients that users interface with (in increasing \"network weight\" order):\n- Status Web\n- Status Mobile\n- Status Desktop\n\nEach of these clients has differing (on average) resources available to it, and thus provides and consumes different Waku protocols and services within the Status network. Here we will detail their associated messaging impact on the network using the following model:\n\n```\nAgent\n - feature\n - protocol\n - contentTopic, messageType, payloadSize, frequency\n```\n\nBy describing all `Agents` and their associated feature list, we should be able to do the following:\n\n- Estimate how much impact per unit time an individual `Agent` has on the Status network\n- Create a realistic network topology and usage within a simulation framework (_e.g._ Kurtosis)\n- Facilitate a Status Specification of `Agents`\n- Set an example for future agent-based modeling and simulation work for the Waku protocol suite \n\n## Status Web\n\n## Status Mobile\n\n## Status Desktop\nStatus Desktop serves as the backbone for the Status Network, as the software runs on hardware that has more available resources, typically has more stable and robust network connections, and generally has a drastically lower churn (or none at all). This results in it running the most Waku protocols for longer periods of time, resulting in the heaviest usage of the Waku network w.r.t. messaging. \n\nHere is the model breakdown of its usage:\n```\nStatus Desktop\n - Prekey bundle broadcast\n - Account sync\n - Historical message delivery\n - Waku-Relay (answering message queries)\n - Message propagation\n - Waku-Relay\n - Waku-Lightpush (receiving)\n```","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["status","waku","scalability"]},"/private/roadmap/networking/status-waku-kurtosis":{"title":"Status' use of Waku - A Scalability Study","content":"\n[Status](https://status.im) is the largest consumer of the Waku protocol, leveraging it for their entire networking stack. Their upcoming release of Status Desktop and the associated Communities product will heavily push the limits of what Waku can do. As mentioned in the [Networking Overview](private/roadmap/networking/overview.md) page, rigorous scalability studies of Waku (v2) have yet to be conducted. \n\nWhile these studies most immediately benefit the Status product suite, it behooves the Nomos Project to assist, as the lessons learned immediately inform us of the limits of what the Waku protocol suite can handle, and how that fits within our [Technical Requirements](private/requirements/overview.md).\n\nThis work has been kicked off as a partnership with the [Kurtosis](https://kurtosis.com) distributed systems development platform. 
It is our hope that the experience and accumen gained during this partnership and study will serve us in the future with respect to Nomos developme, and more broadly, all projects under the Logos Collective. \n\nAs such, here is an overview of the various resources towards this endeavor:\n- [Status Network Agent Breakdown](status-network-agents.md) - A document that describes the archetypal agents that participate in the Status Network and their associated Waku consumption.\n- [Wakurtosis repo](https://github.com/logos-co/wakurtosis) - A Kurtosis module to run scalability studies\n- [Waku Topology Test repo](https://github.com/logos-co/Waku-topology-test) - a Python script that facilitates setting up a reasonable network topology for the purpose of injecting the network configuration into the above Kurtosis repo\n- [Initial Vac forum post introducing this work](https://forum.vac.dev/t/waku-v2-scalability-studies/142)\n- [Waku Github Issue detailing work progression](https://github.com/waku-org/pm/issues/2)\n - this is also a place to maintain communications of progress\n- [Initial Waku V2 theoretical scalability study](https://vac.dev/waku-v1-v2-bandwidth-comparison)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","scalability","waku"]},"/private/roadmap/virtual-machines/overview":{"title":"overview","content":"\n## Motivation\nLogos seeks to use a privacy-first virtual machine for transaction execution. We believe this can only be acheived through zero-knowledge. The majority of current work in the field focuses more towards the aggregation and subsequent verification of transactions. This leads us to explore the researching and development of a privacy-first virtual machine. \n\nLINK TO APPROPRIATE NETWORK REQUIREMENTS HERE\n\n#### Educational Resources\n- primer on Zero Knowledge Virtual Machines - [link](https://youtu.be/GRFPGJW0hic)\n\n### Implementations:\n- TinyRAM - link\n- CairoVM\n- zkSync\n- Hermes\n- [MIDEN](https://polygon.technology/solutions/polygon-miden/) (Polygon)\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General Building Blocks of a ZK-VM\n- CPU\n\t- modeled with \"execution trays\"\n- RAM\n\t- overhead to look out for\n\t\t- range checks\n\t\t- bitwise operations\n\t\t- hashing\n- Specialized circuits\n- Recursion\n\n## Approaches\n- zk-WASM\n- zk-EVM\n- RISC-0\n\t- RISK-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t\t- https://youtu.be/2MXHgUGEsHs - Why use the RISC Zero zkVM?\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General workstreams\n- bytecode compiler\n- zero-knowledge circuit design\n- opcode architecture (???)\n- engineering\n- required proof system\n- control flow\n\t- MAST (as used in MIDEN)\n\n## Roles\n- [ZK Research Engineer](zero-knowledge-research-engineer.md)\n- Senior Rust Developer\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["virtual machines","zero knowledge"]},"/private/roles/distributed-systems-researcher":{"title":"Open Role: Distributed Systems Researcher","content":"\n\n## About Status\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. 
\n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. \n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception. Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n \n\n## Who are we?\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the Status Network. We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality\n\n## The job\n\n**Responsibilities:**\n- This role is dedicated to pure research\n- Primarily, ensuring that solutions are sound and diving deeper into their formal definition.\n- Additionally, he/she would be regularly going through papers, bringing new ideas and staying up-to-date.\n- Designing, specifying and verifying distributed systems by leveraging formal and experimental techniques.\n- Conducting theoretical and practical analysis of the performance of distributed systems.\n- Designing and analysing incentive systems.\n- Collaborating with both internal and external customers and the teams responsible for the actual implementation.\n- Researching new techniques for designing, analysing and implementing dependable distributed systems.\n- Publishing and presenting research results both internally and externally.\n\n \n**Ideally you will have:**\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]\n- Strong background in Computer Science and Math, or a related area.\n- Academic background (The ability to analyze, digest and improve the State of the Art in our fields of interest. 
Specifically, familiarity with formal proofs and/or the scientific method.)\n- Distributed Systems with a focus on Blockchain\n- Analysis of algorithms\n- Familiarity with Python and/or complex systems modeling software\n- Deep knowledge of algorithms (much more academic, such as have dealt with papers, moving from research to pragmatic implementation)\n- Experience in analysing the correctness and security of distributed systems.\n- Familiarity with the application of formal method techniques. \n- Comfortable with “reverse engineering” code in a number of languages including Java, Go, Rust, etc. Even if no experience in these languages, the ability to read and \"reverse engineer\" code of other projects is important.\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Capable of deep and creative thinking.\n- Passionate about blockchain technology in general.\n- Able to manage the uncertainties and ambiguities associated with working in a remote-first, distributed, decentralised environment.\n- A strong alignment to our principles: https://status.im/about/#our-principles\n\n\n**Bonus points:**\n- Experience working remotely. \n- Experience working for an open source organization. \n- TLA+/PRISM would be desirable.\n- PhD in Computer Science, Mathematics, or a related area. \n- Experience Multi-Party Computation and Zero-Knowledge Proofs\n- Track record of scientific publications.\n- Previous experience in remote or globally distributed teams.\n\n## Hiring process\n\nThe hiring process for this role will be:\n- Interview with our People Ops team\n- Interview with Alvaro (Team Lead)\n- Interview with Corey (Chief Security Officer)\n- Interview with Jarrad (Cofounder) or Daniel \n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n \n\n## Compensation\n\nWe are happy to pay salaries in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: https://people-ops.status.im/tag/perks/\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role"]},"/private/roles/rust-developer":{"title":"Rust Developer","content":"\n# Role: Rust Developer\nat Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is an organization building the tools and infrastructure for the advancement of a secure, private, and open web3. We have been completely distributed since inception. Our team is currently 100+ core contributors strong and welcomes a growing number of community members from all walks of life, scattered all around the globe. We care deeply about open source, and our organizational structure has a minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**About Logos**\n\nA group of Status Contributors is also involved in a new community lead project, called Logos, and this particular role will enable you to also focus on this project. Logos is a grassroots movement to provide trust-minimized, corruption-resistant governing services and social institutions to underserved citizens. 
\n\nLogos’ infrastructure will provide a base for the provisioning of the next-generation of governing services and social institutions - paving the way to economic opportunities for those who need them most, whilst respecting basic human rights through the network’s design.You can read more about Logos here: [in this small handbook](https://github.com/acid-info/public-assets/blob/master/logos-manual.pdf) for mindful readers like yourself.\n\n**Who are we?**\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the [Status Network](https://statusnetwork.com/). We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality.\n\n**Responsibilities:**\n\n- Develop and maintenance of internal rust libraries\n- 1st month: comfortable with dev framework, simulation app. Improve python lib?\n- 2th-3th month: Start dev of prototype node services\n\n**Ideally you will have:**\n\n- “Extensive” Rust experience (Async programming is a must) \n Ideally they have some GitHub projects to show\n- Experience with Python\n- Strong competency in developing and maintaining complex libraries or applications\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles) \n \n\n**Bonus points if**\n\n-  E.g. Comfortable working remotely and asynchronously\n-  Experience working for an open source organization.  \n-  Peer-to-peer or networking experience\n\n_[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]_\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)\n\n**Hiring Process** \n\nThe hiring process for this role will be:\n\n1. Interview with Maya (People Ops team)\n2. Interview with Corey (Logos Program Owner)\n3. Interview with Daniel (Engineering Lead)\n4. Interview with Jarrad (Cofounder)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role","engineering","rust"]},"/private/roles/zero-knowledge-research-engineer":{"title":"Zero Knowledge Research Engineer","content":"at Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. \n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 
\n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception.  Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**Who are we**\n\n[Vac](http://vac.dev/) **builds** [public good](https://en.wikipedia.org/wiki/Public_good) protocols for the decentralized web.\n\nWe do applied research based on which we build protocols, libraries and publications. Custodians of protocols that reflect [a set of principles](http://vac.dev/principles) - liberty, privacy, etc.\n\nYou can see a sample of some of our work here: [Vac, Waku v2 and Ethereum Messaging](https://vac.dev/waku-v2-ethereum-messaging), [Privacy-preserving p2p economic spam protection in Waku v2](https://vac.dev/rln-relay), [Waku v2 RFC](https://rfc.vac.dev/spec/10/). Our attitude towards ZK: [Vac \u003c3 ZK](https://forum.vac.dev/t/vac-3-zk/97).\n\n**The role**\n\nThis role will be part of a new team that will make a provable and private WASM engine that runs everywhere. As a research engineer, you will be responsible for researching, designing, analyzing and implementing circuits that allow for proving private computation of execution in WASM. This includes having a deep understanding of relevant ZK proof systems and tooling (zk-SNARK, Circom, Plonk/Halo 2, zk-STARK, etc), as well as different architectures (zk-EVM Community Effort, Polygon Hermez and similar) and their trade-offs. You will collaborate with the Vac Research team, and work with requirements from our new Logos program. As one of the first hires of a greenfield project, you are expected to take on significant responsibility,  while collaborating with other research engineers, including compiler engineers and senior Rust engineers. 
\n \n\n**Key responsibilities** \n\n- Research, analyze and design proof systems and architectures for private computation\n- Be familiar and adapt to research needs zero-knowledge circuits written in Rust Design and implement zero-knowledge circuits in Rust\n- Write specifications and communicate research findings through write-ups\n- Break down complex problems, and know what can and what can’t be dealt with later\n- Perform security analysis, measure performance of and debug circuits\n\n**You ideally will have**\n\n- Very strong academic or engineering background (PhD-level or equivalent in industry); relevant research experience\n- Experience with low level/strongly typed languages (C/C++/Go/Rust or Java/C#)\n- Experience with Open Source software\n- Deep understanding of Zero-Knowledge proof systems (zk-SNARK, circom, Plonk/Halo2, zk-STARK), elliptic curve cryptography, and circuit design\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles)\n\n**Bonus points if** \n\n- Experience in provable and/or private computation (zkEVM, other ZK VM)\n- Rust Zero Knowledge tooling\n- Experience with WebAssemblyWASM\n\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter].\n\n**Hiring process** \n\nThe hiring process for this role will be:\n\n1. Interview with Angel/Maya from our Talent team\n2. Interview with team member from the Vac team\n3. Pair programming task with the Vac team\n4. Interview with Oskar, the Vac team lead\n5. Interview with Jacek, Program lead\n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["engineering","role","zero knowledge"]},"/roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","content":"\n## Leads roundup - acid\n\n**Al / Comms**\n\n- Status app relaunch comms campaign plan in the works. Approx. date for launch 31.08.\n- Logos comms + growth plan post launch is next up TBD.\n- Will be waiting for specs for data room, raise etc.\n- Hires: split the role for content studio to be more realistic in getting top level talent.\n\n**Matt / Copy**\n\n- Initiative updating old documentation like CC guide to reflect broader scope of BUs\n- Brand guidelines/ modes of presentation are in process\n- Wikipedia entry on network states and virtual states is live on \n\n**Eddy / Digital Comms**\n\n- Logos Discord will be completed by EOD.\n- Codex Discord will be done tomorrow.\n - LPE rollout plan, currently working on it, will be ready EOW\n- Podcast rollout needs some\n- Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\n**Amir / Studio**\n\n- Started execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\n- Hires: still looking for 3 positions with main focus on developer side. 
\n\n**Jonny / Podcast**\n\n- Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\n- First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.\n\n**Louisa / Events**\n\n- Global strategy paper for wider comms plan.\n- Template for processes and executions when preparing events.\n- Decision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\n - Seoul Q4 hackathon is already in the works. Needs bounty planning.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","content":"\n## **Top level priorities:**\n\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\n\n## **Movement Building:**\n\n- Logos collective comms plan skeleton ready - will be applied for all BUs as next step\n- Goal is to have plan + overview to set realistic KPIs and expectations\n- Discord Server update on various views\n- Status relaunch comms plan is ready for input from John et al.\n- Reach out to BUs for needs and deliverables\n\n## **TD Studio**\n\nFull focus on LPE:\n- On track, target of end of august\n- review of options, more diverse landscape of content\n- Episodes page proposals\n- Players in progress\n- refactoring from prev code base\n- structure of content ready in GDrive\n\n## **Copy**\n\n- Content around LPE\n- Content for podcast launches\n- Status launch - content requirements to receive\n- Organization of doc sites review\n- TBD what type of content and how the generation workflows will look like\n\n## **Podcast**\n\n- Good state in editing and producing the shows\n- First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included. \n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n- 3 recorded for HIO, motion graphics in progress\n- First E2E podcast ready in 2 weeks for LPE\n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n\n## **DC Studio**\n\n- Brand guidelines for HiO are ready and set. 
Thanks `Shmeda`!\n- Logos State branding assets are being developed\n- Presentation templates update\n\n## **Events**\n\n- Network State event probably in Istanbul in November re: Devconnect will confirm shortly.\n- Program elements and speakers are top priority\n- Hackathon in Seoul in Q1 2024 - late Febuary probably\n- Jarrad will be speaking at HCPP and EthRome\n- Global event strategy written and in review\n- Lou presented social media and event KPIs on Paris event\n\n## **CRM \u0026 Marketing tool**\n\n- Get feedback from stakeholders and users\n- PM implementation to be planned (+- 3 month time TBD) with working group\n- LPE KPI: Collecting email addresses of relevant people\n- Careful on how we manage and use data, important for BizDev\n- Careful on which segments of the project to manage using the CRM as it can be very off brand","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","content":"\n\n\n## Milestones","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["milestones-overview"]},"/roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","content":"\n## Codex update 07/12/2023 to 07/21/2023\n\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc...\n\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing \u0026 infra related work going on.\n\nWe're also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\n\n### DevOps/Infrastructure:\n\n- Adopted nim-codex Docker builds for Dist Tests.\n- Ordered Dedicated node on Hetzner.\n- Configured Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Created Geth and Prometheus Docker images for Dist-Tests.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Set up Ingress Controller in Dist-Tests cluster.\n\n### Testing:\n\n- Set up deployer to gather metrics.\n- Debugging and identifying potential deadlock in the Codex client.\n- Added metrics, built image, and ran tests.\n- Updated dist-test log for Kibana compatibility.\n- Ran dist-tests on a new master image.\n- Debugging continuous tests.\n\n### Development:\n\n- Worked on codex-dht nimble updates and fixing key format issue.\n- Updated CI and split Windows CI tests to run on two CI machines.\n- Continued updating dependencies in codex-dht.\n- Fixed decoding large manifests ([PR #479](https://github.com/codex-storage/nim-codex/pull/497)).\n- Explored the existing implementation of NAT Traversal techniques in `nim-libp2p`.\n\n### Research\n\n- Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n - https://eprint.iacr.org/2021/1500.pdf\n - https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html\n - https://eprint.iacr.org/2021/1544.pdf\n- Onboarding Balázs as our ZK researcher/engineer\n- Continued research in DAS related topics\n - Running simulation on newly setup infrastructure\n- Devised a new direction to reduce metadata overhead and enable remote verification 
https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n- Looked into NAT Traversal ([issue #166](https://github.com/codex-storage/nim-codex/issues/166)).\n\n### Cross-functional (Combination of DevOps/Testing/Development):\n\n- Fixed discovery related issues.\n- Planned Codex Demo update for the Logos event and prepared environment for the demo.\n- Described requirements for Dist Tests logs format.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.\n- Hetzner Dedicated server was configured.\n- Set up Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper in Dist-Tests cluster.\n- Setup Grafana in Dist-Tests cluster.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Setup Ingress Controller in Dist-Tests cluster.\n\n---\n\n#### Conversations\n1. zk_id _—_ 07/24/2023 11:59 AM\n\u003e \n\u003e We've explored VDI for rollups ourselves in the last week, curious to know your thoughts\n2. dryajov _—_ 07/25/2023 1:28 PM\n\u003e \n\u003e It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it's definitely worth digging into. But I'm not sure what exactly you're interested in, in the context of rollups...\n1. zk_id _—_ 07/25/2023 3:28 PM\n \n The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n \n2. dryajov _—_ 07/25/2023 8:31 PM\n \n \u003e I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal.\n \n Yeah, great question. What follows is strictly IMO, as I haven't seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n \n - (A)VID - **dispersing** and storing data in a verifiable manner\n - Sampling - verifying already **dispersed** data\n \n tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. More details are here - [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\") ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. 
Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network from following the chain?\n \n Dankrad Feist\n \n [Data availability checks](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html)\n \n Primer on data availability checks\n \n3. _[_8:31 PM_]_\n \n ## Dealing with dishonest majorities\n \n This is easy if all the data is downloaded by all nodes all the time, but we're trying to avoid just that. But lets assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can't, because proving data (un)availability isn't a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\") So, if there isn't much that can be done by detecting that a block isn't available, what good is it for? Well nodes can still avoid following the unavailable chain and thus be tricked by a dishonest majority. However, simply attesting that data has been publishing is not enough to prevent a dishonest majority from attacking the network. (edited)\n \n4. dryajov _—_ 07/25/2023 9:06 PM\n \n To complement, the relevant quote from [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\"), is:\n \n \u003e Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (\"fisherman\") has the ability to \"raise the alarm\" about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n \n The relevant quote from from [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\"), is:\n \n \u003e There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. 
So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n \n Both articles are a bit old, but the intuitions still hold.\n \n\nJuly 26, 2023\n\n6. zk_id _—_ 07/26/2023 10:42 AM\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n7. _[_10:45 AM_]_\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n8. zk_id\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n ### dryajov _—_ 07/26/2023 4:42 PM\n \n Great! Glad to help anytime \n \n9. zk_id\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n dryajov _—_ 07/26/2023 4:43 PM\n \n Yes, I'd argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n \n10. _[_4:46 PM_]_\n \n Btw, there is probably more we can share/compare notes on in this problem space, we're looking at similar things, perhaps from a slightly different perspective in Codex's case, but the work done on DAS with the EF directly is probably very relevant for you as well \n \n\nJuly 27, 2023\n\n12. zk_id _—_ 07/27/2023 3:05 AM\n \n I would love to. Do you have those notes somewhere?\n \n13. zk_id _—_ 07/27/2023 4:01 AM\n \n all the links you have, anything, would be useful\n \n14. zk_id\n \n I would love to. Do you have those notes somewhere?\n \n dryajov _—_ 07/27/2023 4:50 PM\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n\nJuly 28, 2023\n\n16. zk_id _—_ 07/28/2023 5:47 AM\n \n Would love to see anything that is possible\n \n17. _[_5:47 AM_]_\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n18. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n dryajov _—_ 07/28/2023 4:07 PM\n \n Yes, we're also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n \n19. 
zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n bkomuves _—_ 07/28/2023 4:44 PM\n \n my current view (it's changing pretty often :) is that there is tension between:\n \n - commitment cost\n - proof cost\n - and verification cost\n \n the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n\nJuly 29, 2023\n\n21. bkomuves\n \n my current view (it's changing pretty often :) is that there is tension between: \n \n - commitment cost\n - proof cost\n - and verification cost\n \n  the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n zk_id _—_ 07/29/2023 4:23 AM\n \n I agree. That's also my understanding (although surely much more superficial).\n \n22. _[_4:24 AM_]_\n \n There is also the dimension of computation vs size cost\n \n23. _[_4:25 AM_]_\n \n ie the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale confortably to 10k nodes.\n \n24. _[_4:29 AM_]_\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n\nAugust 1, 2023\n\n26. dryajov\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n Leobago _—_ 08/01/2023 1:13 PM\n \n Note much public write-ups yet. You can find some content here:\n \n - [https://blog.codex.storage/data-availability-sampling/](https://blog.codex.storage/data-availability-sampling/ \"https://blog.codex.storage/data-availability-sampling/\")\n \n - [https://github.com/codex-storage/das-research](https://github.com/codex-storage/das-research \"https://github.com/codex-storage/das-research\")\n \n \n We also have a few Jupiter notebooks but they are not public yet. 
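As a rough, purely illustrative aside on the sampling argument referenced in this thread (a minimal sketch with arbitrarily chosen parameters, not code from the Codex or Nomos repositories): with a (K, N) erasure code an adversary has to withhold at least N - K + 1 chunks to make a block unrecoverable, so each uniformly random sample drawn by a light client lands on an available chunk with probability at most (K - 1)/N, which is roughly 1/2 when K = N/2. The probability that s samples all miss the withheld set therefore shrinks roughly like 2^-s.

```python
from math import comb

def p_sampler_fooled(n: int, k: int, samples: int) -> float:
    """Worst case for the light client: the adversary withholds the minimum
    number of chunks (n - k + 1) that still prevents reconstruction, so only
    k - 1 chunks are available. Returns the probability that `samples`
    distinct random chunks are all available, i.e. withholding goes unnoticed."""
    available = k - 1
    if samples > available:
        return 0.0
    return comb(available, samples) / comb(n, samples)

# Illustrative parameters only: block extended to N = 512 chunks, any K = 256 recover it.
N, K = 512, 256
for s in (5, 10, 20, 30):
    print(f"{s:2d} samples -> fooled with probability <= {p_sampler_fooled(N, K, s):.2e}")
```

This sketch only covers the detection side of sampling; it says nothing about dispersal, reconstruction cost, or the commitment/proof/verification trade-offs discussed above, which is where the KZG and 2D Reed-Solomon choices come in.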
As soon as that content is out we can let you know ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n Codex Storage Blog\n \n [Data Availability Sampling](https://blog.codex.storage/data-availability-sampling/)\n \n The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until\n \n GitHub\n \n [GitHub - codex-storage/das-research: This repository hosts all the ...](https://github.com/codex-storage/das-research)\n \n This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora...\n \n [](https://opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research)\n \n ![GitHub - codex-storage/das-research: This repository hosts all the ...](https://images-ext-2.discordapp.net/external/DxXI-YBkzTrPfx_p6_kVpJzvVe6Ix6DrNxgrCbcsjxo/https/opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research?width=400\u0026height=200)\n \n27. zk_id\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n dryajov _—_ 08/01/2023 1:55 PM\n \n This might interest you as well - [https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a \"https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a\")\n \n Medium\n \n [Combining KZG and erasure coding](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a)\n \n The Hitchhiker’s Guide to Subspace  — Episode II\n \n [](https://miro.medium.com/v2/resize:fit:1200/0*KGb5QHFQEd0cvPeP.png)\n \n ![Combining KZG and erasure coding](https://images-ext-2.discordapp.net/external/LkoJxMEskKGMwVs8XTPVQEEu0senjEQf42taOjAYu0k/https/miro.medium.com/v2/resize%3Afit%3A1200/0%2AKGb5QHFQEd0cvPeP.png?width=400\u0026height=200)\n \n28. _[_1:56 PM_]_\n \n This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recoment reading the first article of the series which it also links to\n \n29. zk_id _—_ 08/01/2023 3:04 PM\n \n Thanks @dryajov @Leobago ! Much appreciated!\n \n30. _[_3:05 PM_]_\n \n Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I'm tackling starting today...\n \n31. zk_id _—_ 08/01/2023 6:34 PM\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. 
Is there a way so I visualize the results?\n \n32. zk_id\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n Leobago _—_ 08/01/2023 6:36 PM\n \n Yes, checkout the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n33. _[_6:37 PM_]_\n \n You might find also some bugs here and there on that branch ![😅](https://discord.com/assets/b45af785b0e648fe2fb7e318a6b8010c.svg)\n \n34. zk_id _—_ 08/01/2023 7:44 PM\n \n Thanks!","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","content":"\n# Codex update Aug 1st\n\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n - Work break down and review for Ben and Tomasz (epic coming up)\n - This is required to integrate the proving system\n\n### Milestone: Block discovery and retrieval\n\n- Some initial work break down and milestones here - https://docs.google.com/document/d/1hnYWLvFDgqIYN8Vf9Nf5MZw04L2Lxc9VxaCXmp9Jb3Y/edit\n - Initial analysis of block discovery - https://rpubs.com/giuliano_mega/1067876\n - Initial block discovery simulator - https://gmega.shinyapps.io/block-discovery-sim/\n\n### Milestone: Distributed Client Testing\n\n- Lots of work around log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - This is a first try of running against an L2\n - Mostly done, waiting on related fixes to land before merge - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Reservations and slot management\n\n- Lots of work around slot reservation and queuing https://github.com/codex-storage/nim-codex/pull/455\n\n## Remote auditing\n\n### Milestone: Implement Poseidon2\n\n- First pass at an implementation by Balazs\n - private repo, but can give access if anyone is interested\n\n### Milestone: Refine proving system\n\n- Lost of thinking around storage proofs and proving systems\n - private repo, but can give access if anyone is interested\n\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator.\n- Implemented logical error-rates and delays to interactions between DHT clients.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","content":"\n\n# Codex update August 11\n\n---\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504\n- Work on persisting/serializing Merkle Tree is underway, PR upcoming\n\n### Milestone: Block discovery and retrieval\n\n- Continued analysis of block discovery and retrieval - https://hackmd.io/_KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n - Reviewing papers on peers sampling and related topics\n - [Wormhole Peer Sampling paper](http://publicatio.bibl.u-szeged.hu/3895/1/p2p13.pdf)\n - [Smoothcache](https://dl.acm.org/doi/10.1145/2713168.2713182)\n- Starting work on simulations based on the above work\n\n### Milestone: Distributed Client Testing\n\n- Continuing working on log collection/analysis and monitoring\n - Details here 
https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n - More related issues/PRs:\n - https://github.com/codex-storage/infra-codex/pull/20\n - https://github.com/codex-storage/infra-codex/pull/20\n- Testing and debugging Condex in continuous testing environment\n - Debugging continuous tests [cs-codex-dist-tests/pull/44](https://github.com/codex-storage/cs-codex-dist-tests/pull/44)\n - pod labeling [cs-codex-dist-tests/issues/39](https://github.com/codex-storage/cs-codex-dist-tests/issues/39)\n\n---\n## Infra\n\n### Milestone: Kubernetes Configuration and Management\n- Move Dist-Tests cluster to OVH and define naming conventions\n- Configure Ingress Controller for Kibana/Grafana\n- **Create documentation for Kubernetes management**\n- **Configure Dist/Continuous-Tests Pods logs shipping**\n\n### Milestone: Continuous Testing and Labeling\n- Watch the Continuous tests demo\n- Implement and configure Dist-Tests labeling\n- Set up logs shipping based on labels\n- Improve Docker workflows and add 'latest' tag\n\n### Milestone: CI/CD and Synchronization\n- Set up synchronization by codex-storage\n- Configure Codex Storage and Demo CI/CD environments\n\n---\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - Done but merge is blocked by a few issues - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Marketplace Sales\n\n- Lots of cleanup and refactoring\n - Finished refactoring state machine PR [link](https://github.com/codex-storage/nim-codex/pull/469)\n - Added support for loading node's slots during Sale's module start [link](https://github.com/codex-storage/nim-codex/pull/510)\n\n---\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator - https://github.com/cortze/py-dht.\n\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","content":"\n**Logos Lab** 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\n**Milestone**: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\n\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\n\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\n\n**Next milestone**: group chat support\n\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nLink to Payggy design files:\nhttps://scene.zeplin.io/project/64ae9e965652632169060c7d\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/UtVHf2EU\n\n--- \n\n#### Conversation\n\n1. petty _—_ 07/15/2023 5:49 AM\n \n the `waku-objects` repo is empty. 
Where is the code storing that part vs the playground that is using them?\n \n2. petty\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n3. attila🍀 _—_ 07/15/2023 6:18 AM\n \n at the moment most of the code is in the `waku-objects-playground` repo later we may split it to several repos here is the link: [https://github.com/logos-innovation-lab/waku-objects-playground](https://github.com/logos-innovation-lab/waku-objects-playground \"https://github.com/logos-innovation-lab/waku-objects-playground\")","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","content":"\n**Logos Lab** 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. \n\nStill, a lot of progress were made and the team released the first version of a color system in the form of an npm package, which lets the users to choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\n\n**Milestone**: group chat support\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nGrayscale design:\nhttps://grayscale.design/\n\nLuminance package on npm:\nhttps://www.npmjs.com/package/@waku-objects/luminance\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/ZMU4yyWG\n\n--- \n\n### Conversation\n\n1. fryorcraken _—_ Yesterday at 10:58 PM\n \n \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n \n While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n\nAugust 3, 2023\n\n2. fryorcraken\n \n \u003e \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. 
While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n3. attila🍀 _—_ Today at 4:21 AM\n \n This is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the `status-js-api` project but that still uses whisper and the repo recommends to use `js-waku` instead ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg) [https://github.com/status-im/status-js-api](https://github.com/status-im/status-js-api \"https://github.com/status-im/status-js-api\") Also I also found the `56/STATUS-COMMUNITIES` spec: [https://rfc.vac.dev/spec/56/](https://rfc.vac.dev/spec/56/ \"https://rfc.vac.dev/spec/56/\") It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n \n4. fryorcraken _—_ Today at 5:32 AM\n \n The repo is status-im/status-web\n \n5. _[_5:33 AM_]_\n \n Spec is [https://rfc.vac.dev/spec/55/](https://rfc.vac.dev/spec/55/ \"https://rfc.vac.dev/spec/55/\")\n \n6. fryorcraken\n \n The repo is status-im/status-web\n \n7. attila🍀 _—_ Today at 6:05 AM\n \n As constructive feedback I can tell you that it is not trivial to find it and use it in other projects It is presented as a React component without documentation and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app It seems to be using this package: [https://www.npmjs.com/package/@status-im/js](https://www.npmjs.com/package/@status-im/js \"https://www.npmjs.com/package/@status-im/js\") Which also does not have documentation I assume that package is built from this: [https://github.com/status-im/status-web/tree/main/packages/status-js](https://github.com/status-im/status-web/tree/main/packages/status-js \"https://github.com/status-im/status-web/tree/main/packages/status-js\") This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what are the requirements and high level architecture (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. 
So maybe a lesson here is to put effort in the documentation and the presentation as well and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","content":"\n**Research**\n\n- Milestone 1: Understanding Data Availability (DA) Problem\n - High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\n - Explored the necessity and key challenges associated with DA.\n - In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\n - **Blocker:** The experimental tests for our specific EC scheme are pending, which is blocking progress to make final decision on KZG + commitments for our architecture.\n- Milestone 2: Privacy for Proof of Stake (PoS)\n - Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\n - Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.\n - Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\n**Development**\n\n- Milestone 1: Mixnet and Networking\n - Initiated integration of libp2p to be used as the full node's backend, planning to complete in the next phase.\n - Begun planning for the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\n- Milestone 2: Simulation Application\n - Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links:\n - Carnot Simulator pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/carnot_simulation_psuedocode.py)\n - Test pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/test_carnot_simulation.py)\n - Description of the simulation (https://www.notion.so/Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n - Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. 
The corresponding pull requests can be accessed here:\n - Simulation network fix (https://github.com/logos-co/nomos-node/pull/262)\n - Vote tally fix (https://github.com/logos-co/nomos-node/pull/268)\n - Increased run duration of integration tests (https://github.com/logos-co/nomos-node/pull/263)\n - Warding improvements (https://github.com/logos-co/nomos-node/pull/269)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","content":"\n**Nomos 31st July**\n\n[Network implementation and Mixnet]:\n\nResearch\n- Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\n- Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\n- Fixes were made on the Overlay interface.\n- Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\n- Link to libp2p PRs: https://github.com/logos-co/nomos-node/pull/278, https://github.com/logos-co/nomos-node/pull/279, https://github.com/logos-co/nomos-node/pull/280, https://github.com/logos-co/nomos-node/pull/281\n- Started working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\n\nResearch\n- Discussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\n- Reviews on the PPoS proposal were done.\n- A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\n- Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\n\nResearch\n- Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\n\n**Development**\n\n- Improved simulation application to meet test scale requirements (https://github.com/logos-co/nomos-node/pull/274).\n- Created a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\n\nResearch\n- Conducted an analysis of stored data \"degradation\" problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\n- Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","content":"\nNomos weekly report\n================\n\n### Network implementation and Mixnet:\n#### Research\n- Researched the Nym mixnet architecture in depth in order to design our prototype architecture.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1661386628)\n- Discussions about how to manage the mixnet topology.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1665101243)\n#### Development\n- Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n (Link: https://github.com/logos-co/nomos-node/pull/288)\n- Added support for libp2p in tests.\n (Link: https://github.com/logos-co/nomos-node/pull/287)\n- Added support for libp2p in nomos node.\n (Link: https://github.com/logos-co/nomos-node/pull/285)\n\n### Private PoS:\n#### Research\n- Worked 
on PPoS design and addressed potential metadata leakage due to staking and rewarding.\n- Focus on potential bribery attacks and privacy reasoning, but not much progress yet.\n- Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\n### Carnot:\n#### Research\n- Addressed two solutions for the bribery attack. Proposals pending.\n- Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\n- Modeled data decimation using a specific set of parameters and derived equations related to it.\n- Proposed solutions to address bribery attacks without compromising the protocol's scalability.\n\n### Data Availability Sampling (VID):\n#### Research\n- Analyzed data decimation in data availability problem.\n (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)\n- DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\n- Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn't fit our architecture.\n\n#### Testing and Node development:\n- Fixes and enhancements made to nomos-node.\n (Link: https://github.com/logos-co/nomos-node/pull/282)\n (Link: https://github.com/logos-co/nomos-node/pull/289)\n (Link: https://github.com/logos-co/nomos-node/pull/293)\n (Link: https://github.com/logos-co/nomos-node/pull/295)\n- Ran simulations with 10K nodes.\n- Updated integration tests in CI to use waku or libp2p network.\n (Link: https://github.com/logos-co/nomos-node/pull/290)\n- Fix for the node throughput during simulations.\n (Link: https://github.com/logos-co/nomos-node/pull/295)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","content":"\n\n# **Nomos weekly report 14th August**\n---\n\n## **Network Privacy and Mixnet**\n\n### Research\n- Mixnet architecture discussions. 
Potential agreement on architecture not very different from PoC\n- Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n### Development\n- Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]\n- Implementation of mixnode: a core module for implementing a mixnode binary\n- Implementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\n### **Private PoS**\n- No progress this week.\n\n---\n## **Data Availability**\n### Research\n- Continued analysis of node decay in data availability problem\n- Improved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]\n\n### Development\n- Library survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\n- RS \u0026 KZG benchmarking for our use case https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\n- Study documentation on Danksharding and set of questions for Leonardo [https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n---\n## **Testing, CI and Simulation App**\n\n### Development\n- Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]\n- Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]\n- CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]\n- Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]\n- Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]\n- Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","content":"- *vc::Deep Research*\n - refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Prepared Paris talks\n - Implemented perf protocol to compare the performances with other libp2ps https://github.com/status-im/nim-libp2p/pull/925\n- *vsu::Tokenomics*\n - Fixing bugs on the SNT staking contract;\n - Definition of the first formal verification tests for the SNT staking contract;\n - Slides for the Paris off-site\n- *vsu::Distributed Systems Testing*\n - Replicated message rate issue (still on it)\n - First mockup of offline data\n - Nomos consensus test working\n- *vip::zkVM*\n - hiring\n - onboarding new researcher\n - presentation on ECC during Logos Research Call (incl. 
preparation)\n - more research on nova, considering additional options\n - Identified 3 research questions to be taken into consideration for the ZKVM and the publication\n - Researched Poseidon implementation for Nova, Nova-Scotia, Circom\n- *vip::RLNP2P*\n - finished rln contract for waku product - https://github.com/waku-org/rln-contract\n - fixed homebrew issue that prevented zerokit from building - https://github.com/vacp2p/zerokit/commit/8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\n - rln-relay: verify proofs based upon bandwidth usage - https://github.com/waku-org/nwaku/commit/3fe4522a7e9e48a3196c10973975d924269d872a\n - RLN contract audit cont' https://hackmd.io/@blockdev/B195lgIth\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","content":"\n**Last week**\n- *vc*\n - Vac day in Paris (13th)\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Paris offsite Paris (all CCs)\n- *vsu::Tokenomics*\n - Bugs found and solved in the SNT staking contract\n - attend events in Paris\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - QoS on all four infras\n - Continue work on theoretical gossipsub analysis (varying regular graph sizes)\n - Peer extraction using WLS (almost finished)\n - Discv5 testing\n - Wakurtosis CI improvements\n - Provide offline data\n- *vip::zkVM*\n - onboarding new researcher\n - Prepared and presented ZKVM work during VAC offsite\n - Deep research on Nova vs Stark in terms of performance and related open questions\n - researching Sangria\n - Worked on NEscience document (https://www.notion.so/Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\n - zerokit:\n - worked on PR for arc-circom\n- *vip::RLNP2P*\n - offsite Paris\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - EthCC \u0026 Logos event Paris (all CCs)\n- *vsu::Tokenomics*\n - Attend EthCC and side events in Paris\n - Integrate staking contracts with radCAD model\n - Work on a new approach for Codex collateral problem\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\n - Restructure the Analysis script and start modelling Status control messages\n - Split Wakurtosis analysis module into separate repository (delayed)\n - Deliver simulation results (incl fixing discv5 error with new Kurtosis version)\n - Second iteration Nomos CI\n- *vip::zkVM*\n - Continue researching on Nova open questions and Sangria\n - Draft the benchmark document (by the end of the week)\n - research hardware for benchmarks\n - research Halo2 cont'\n - zerokit:\n - merge a PR for deployment of arc-circom\n - deal with arc-circom master fail\n- *vip::RLNP2P*\n - offsite paris\n- *blockers*\n - *vip::zkVM:zerokit*: ark-circom deployment to crates io; contact to ark-circom team","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","content":"\nNOTE: This is a first experimental version moving towards the new 
reporting structure:\n\n**Last week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - related work section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - basic torpush encode/decode ( https://github.com/vacp2p/nim-libp2p-experimental/pull/1 )\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - (focus on Tor-push PoC)\n- *vsu::P2P*\n - admin/misc\n - EthCC (all CCs)\n- *vsu::Tokenomics*\n - admin/misc\n - Attended EthCC and side events in Paris\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - Kicked off a new approach for Codex collateral problem\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - Integrated SNT staking contracts with Python\n - milestone (50%, 2023/07/14) SNT litepaper\n - (delayed)\n - milestone(30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - milestone (95%, 2023/07/31) Wakurtosis Waku Report\n - Add timout to injection async call in WLS to avoid further issues (PR #139 https://github.com/vacp2p/wakurtosis/pull/139)\n - Plotting \u0026 analyse 100 msg/s off line Prometehus data\n - milestone (90%, 2023/07/31) Nomos CI testing\n - fixed errors in Nomos consensus simulation\n - milestone (30%, ...) gossipsub model analysis\n - add config options to script, allowing to load configs that can be directly compared to Wakurtosis results\n - added support for small world networks\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - (write ups will be available here: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Solved the open questions on Nova adn completed the document (will update the page)\n - Reviewed Nescience and working on a document\n - Reviewed partly the write up on FHE\n - writeup for Nova and Sangria; research on super nova\n - reading a new paper revisiting Nova (https://eprint.iacr.org/2023/969)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - zkvm\n - Researching Nova to understand the folding technique for ZKVM adaptation\n - zerokit\n - Rostyslav became circom-compat maintainer\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n - completed\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - admin/misc\n - EthCC + offsite\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - working on contributions section, based on https://hackmd.io/X1DoBHtYTtuGqYg0qK4zJw\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - working on establishing a connection via nim-libp2p tor-transport\n - setting up goerli test node (cont')\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - continue working on paper\n- *vsu::P2P*\n - milestone (...)\n - Implement ChokeMessage for GossipSub\n - Continue \"limited flood publishing\" (https://github.com/status-im/nim-libp2p/pull/911)\n- *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - Catch up with EthCC talks that we couldn't attend (schedule conflicts)\n - milestone (50%, 2023/07/14) SNT litepaper\n - Start building the SNT agent-based simulation\n- *vsu::Distributed Systems Testing*\n - milestone (100%, 2023/07/31) Wakurtosis Waku Report\n - 
finalize simulations\n - finalize report\n - milestone (100%, 2023/07/31) Nomos CI testing\n - finalize milestone\n - milestone (30%, ...) gossipsub model analysis\n - Incorporate Status control messages\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - Refine the Nescience WIP and FHE documents\n - research HyperNova\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n - zkvm\n - zerokit\n - circom: reach an agreement with other maintainers on master branch situation\n- *vip::RLNP2P*\n - maintenance\n - investigate why docker builds of nwaku are failing [zerokit dependency related]\n - documentation on how to use rln for projects interested (https://discord.com/channels/864066763682218004/1131734908474236968/1131735766163267695)(https://ci.infra.status.im/job/nim-waku/job/manual/45/console)\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - revert rln bandwidth reduction based on offsite discussion, move to different validator\n- *blockers*","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","content":"\n- *vc::Deep Research*\n - milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n - proposed solution section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - establishing torswitch and testing code\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - addressed feedback on current version of paper\n- *vsu::P2P*\n - nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH's EIP-4844\n - Merged IDontWant (https://github.com/status-im/nim-libp2p/pull/934) \u0026 Limit flood publishing (https://github.com/status-im/nim-libp2p/pull/911) 𝕏\n - This wraps up the \"mandatory\" optimizations for 4844. 
We will continue working on stagger sending and other optimizations\n - nim-libp2p: (70%, 2023/07/31) WebRTC transport\n- *vsu::Tokenomics*\n - admin/misc\n - 2 CCs off for the week\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - milestone (50%, 2023/07/14) SNT litepaper\n - milestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - admin/misc\n - Analysis module extracted from wakurtosis repo (https://github.com/vacp2p/wakurtosis/pull/142, https://github.com/vacp2p/DST-Analysis)\n - hiring\n - milestone (99%, 2023/07/31) Wakurtosis Waku Report\n - Re-run simulations\n - merge Discv5 PR (https://github.com/vacp2p/wakurtosis/pull/129).\n - finalize Wakurtosis Tech Report v2\n - milestone (100%, 2023/07/31) Nomos CI testing\n - delivered first version of Nomos CI integration (https://github.com/vacp2p/wakurtosis/pull/141)\n - milestone (30%, 2023/08/31 gossipsub model: Status control messages\n - Waku model is updated to model topics/content-topics\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - achievment :: nova questions answered (see document in Project: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Nescience WIP done (to be delivered next week, priority)\n - FHE review (lower prio)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Working on discoveries about other benchmarks done on plonky2, starky, and halo2\n - zkvm\n - zerokit\n - fixed ark-circom master \n - achievment :: publish ark-circom https://crates.io/crates/ark-circom\n - achievment :: publish zerokit_utils https://crates.io/crates/zerokit_utils\n - achievment :: publish rln https://crates.io/crates/rln (𝕏 jointly with RLNP2P)\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n - Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - https://github.com/vacp2p/rln-contract and http://github.com/waku-org/waku-rln-contract\n - Deployed to sepolia\n - Fixed rln enabled docker image building in nwaku - https://github.com/waku-org/nwaku/pull/1853\n - zerokit:\n - achievement :: zerokit v0.3.0 release done - https://github.com/vacp2p/zerokit/releases/tag/v0.3.0 (𝕏 jointly with zkVM)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","content":"\n\nMore info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week):\nhttps://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n**Vac week 32** August 7th\n- *vsu::P2P*\n - `vac:p2p:nim-libp2p:vac:maintenance`\n - Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n - `vac:p2p:nim-chronos:vac:maintenance`\n - Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n - Investigate flaky test using REUSE_PORT\n- *vsu::Tokenomics*\n - (...)\n- *vsu::Distributed Systems Testing*\n - `vac:dst:wakurtosis:waku:techreport`\n - delivered: Wakurtosis Tech Report v2 (https://docs.google.com/document/d/1U3bzlbk_Z3ZxN9tPAnORfYdPRWyskMuShXbdxCj4xOM/edit?usp=sharing)\n - `vac:dst:wakurtosis:vac:rlog`\n - working on research log post on Waku Wakurtosis simulations\n - 
`vac:dst:gsub-model:status:control-messages`\n - delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n - `vac:dst:gsub-model:vac:refactoring`\n - Refactoring and bug fixes\n - introduced and tested 2 new analytical models\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - delivered: extracted into separate module, independent of wls message\n - `vac:dst:wakurtosis:nomos:ci-integration_02`\n - planning\n - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n - planning; check usage of new codex simulator tool (https://github.com/codex-storage/cs-codex-dist-tests)\n- *vip::zkVM*\n - `vac:zkvm::vac:research-existing-proof-systems`\n - 90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me\n - 50% FHE review - needs to be refined and summarized\n - finished SuperNova writeup ( https://www.notion.so/SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\n - researched starky\n - 80% Halo2 notes ( https://www.notion.so/halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n - `vac:zkvm::vac:proof-system-benchmarks`\n - More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\n - Viewed some circuits on Nova and Poseidon\n - Read through Halo2 code (and Poseidon code) from Axiom\n- *vip::RLNP2P*\n - `vac:acz:rlnp2p:waku:production-readiness`\n - Waku rln contract registry - https://github.com/waku-org/waku-rln-contract/pull/3\n - mark duplicated messages as spam - https://github.com/waku-org/nwaku/pull/1867\n - use waku-org/waku-rln-contract as a submodule in nwaku - https://github.com/waku-org/nwaku/pull/1884\n - `vac:acz:zerokit:vac:maintenance`\n - Fixed atomic_operation ffi edge case error - https://github.com/vacp2p/zerokit/pull/195\n - docs cleanup - https://github.com/vacp2p/zerokit/pull/196\n - fixed version tags - https://github.com/vacp2p/zerokit/pull/194\n - released zerokit v0.3.1 - https://github.com/vacp2p/zerokit/pull/198\n - marked all functions as virtual in rln-contract for inheritors - https://github.com/vacp2p/rln-contract/commit/a092b934a6293203abbd4b9e3412db23ff59877e\n - make nwaku use zerokit v0.3.1 - https://github.com/waku-org/nwaku/pull/1886\n - rlnp2p implementers draft - https://hackmd.io/@rymnc/rln-impl-w-waku\n - `vac:acz:zerokit:vac:zerokit-v0.4`\n - zerokit v0.4.0 release planning - https://github.com/vacp2p/zerokit/issues/197\n- *vc::Deep Research*\n - `vac:dr:valpriv:vac:tor-push-poc`\n - redesigned the torpush integration in nimbus https://github.com/vacp2p/nimbus-eth2-experimental/pull/2\n - `vac:dr:valpriv:vac:tor-push-relwork`\n - Addressed further comments in paper, improved intro, added source level variation approach\n - `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - cont' work on the document","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-14":{"title":"2023-08-17 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n# Vac week 33 August 14th\n\n---\n## *vsu::P2P*\n### `vac:p2p:nim-libp2p:vac:maintenance`\n- Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n- delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925\n- delivered: Test-plans for the perf protocol https://github.com/lchenut/test-plans/tree/perf-nim\n- Bandwidth estimate as a parameter (waiting for final review) 
https://github.com/status-im/nim-libp2p/pull/941\n### `vac:p2p:nim-chronos:vac:maintenance`\n- delivered: Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n- delivered: fixed flaky test using REUSE_PORT https://github.com/status-im/nim-chronos/pull/438\n\n---\n## *vsu::Tokenomics*\n - admin/misc:\n - (5 CC days off)\n### `vac:tke::codex:economic-analysis`\n- Filecoin economic structure and Codex token requirements\n### `vac:tke::status:SNT-staking`\n- tests with the contracts\n### `vac:tke::nomos:economic-analysis`\n- resume discussions with Nomos team\n\n---\n## *vsu::Distributed Systems Testing (DST)*\n### `vac:dst:wakurtosis:waku:techreport`\n- 1st Draft of Wakurtosis Research Blog (https://github.com/vacp2p/vac.dev/pull/123)\n- Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n### `vac:dst:shadow:vac:basic-shadow-simulation`\n- Basic Shadow Simulation of a gossipsub node (Setup, 5nodes)\n### `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n- Try and plan on how to refactor/generalize testing tool from Codex.\n- Learn more about Kubernetes\n### `vac:dst:wakurtosis:nomos:ci-integration_02`\n- Enable subnetworks\n- Plan how to use wakurtosis with fixed version\n### `vac:dst:eng:vac:bundle-simulation-data`\n- Run requested simulations\n\n---\n## *vsu:Smart Contracts (SC)*\n### `vac:sc::vac:secureum-upskilling`\n - Learned about \n - cold vs warm storage reads and their gas implications\n - UTXO vs account models\n - `DELEGATECALL` vs `CALLCODE` opcodes, `CREATE` vs `CREATE2` opcodes; Yul Assembly\n - Unstructured proxies https://eips.ethereum.org/EIPS/eip-1967\n - C3 Linearization https://forum.openzeppelin.com/t/solidity-diamond-inheritance/2694 (Diamond inheritance and resolution)\n - Uniswap deep dive\n - Finished Secureum slot 2 and 3\n### `vac:sc::vac:maintainance/misc`\n - Introduced Vac's own `foundry-template` for smart contract projects\n - Goal is to have the same project structure across projects\n - Github repository: https://github.com/vacp2p/foundry-template\n\n---\n## *vsu:Applied Cryptography \u0026 ZK (ACZ)*\n - `vac:acz:zerokit:vac:maintenance`\n - PR reviews https://github.com/vacp2p/zerokit/pull/200, https://github.com/vacp2p/zerokit/pull/201\n\n---\n## *vip::zkVM*\n### `vac:zkvm::vac:research-existing-proof-systems`\n- delivered Nescience WIP doc\n- delivered FHE review\n- delivered Nova vs Sangria done - Some discussions during the meeting\n- started HyperNova writeup\n- started writing a trimmed version of FHE writeup\n- researched CCS (for HyperNova)\n- Research Protogalaxy https://eprint.iacr.org/2023/1106 and Protostar https://eprint.iacr.org/2023/620.\n### `vac:zkvm::vac:proof-system-benchmarks`\n- More work on benchmarks is ongoing\n- Putting down a document that explains the differences\n\n---\n## *vc::Deep Research*\n### `vac:dr:valpriv:vac:tor-push-poc`\n- revised the code for PR\n### `vac:dr:valpriv:vac:tor-push-relwork`\n- added section for mixnet, non-Tor/non-onion routing-based anonymity network\n### `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n- Used shadow simulator to run first GossipSub simulation\n### `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n- Finalized 1st draft of the GossipSub scaling article","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","content":"\n```mermaid\n%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': 
'#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n```\n\n## Completion Deliverable\nTBD\n\n## Epics\n- [Github Issue Tracker](https://github.com/waku-org/pm/issues/12)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","content":"\n- 90% - [Waku Network support for 10k users](roadmap/waku/milestone-waku-10-users.md)\n- 80% - Waku Network support for 1MM users\n- 65% - Restricted-run (light node) protocols are production ready\n- 60% - Peer management strategy for relay and light nodes are defined and implemented\n- 10% - Quality processes are implemented for `nwaku` and `go-waku`\n- 80% - Define and track network and community metrics for continuous monitoring improvement\n- 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n- 15% - Dogfooding of RLN by platforms has started\n- 06% - First protocol to incentivize operators has been defined","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","content":"\nDisclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.\n\n---\n\n## Docs\n\n### **Milestone**: Foundation for Waku docs (done)\n\n#### _achieved_:\n- overall layout\n- concept docs\n- community/showcase pages\n\n### **Milestone**: Foundation for node operator docs (done)\n#### _achieved_:\n- nodes overview page\n- guide for running nwaku (binaries, source, docker)\n- peer discovery config guide\n- reference docs for config methods and options\n\n### **Milestone**: Foundation for js-waku docs\n#### _achieved_:\n- js-waku overview + installation guide\n- lightpush + filter guide\n- store guide\n- @waku/create-app guide\n\n#### _next:_\n- improve @waku/react guide\n\n#### _blocker:_\n- polyfills issue with [js-waku](https://github.com/waku-org/js-waku/issues/1415)\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n### **Milestone**: Running nwaku in the cloud\n### **Milestone**: Add Waku guide to learnweb3.io\n### **Milestone**: Encryption docs for js-waku\n### **Milestone**: Advanced node operator doc (postgres, WSS, monitoring, common config)\n### **Milestone**: Foundation for go-waku docs\n### **Milestone**: Foundation for rust-waku-bindings docs\n### **Milestone**: Waku architecture docs\n### **Milestone**: Waku detailed roadmap and milestones\n### **Milestone**: Explain RLN\n\n---\n\n## Eco Dev (WIP)\n\n### **Milestone**: EthCC Logos side event organisation (done)\n### **Milestone**: Community Growth\n#### _achieved_: \n- Wrote several bounties, improved template; setup onboarding flow in Discord.\n\n#### _next_: \n- Review template, publish on GitHub\n\n### **Milestone**: Business Development (continuous)\n#### _achieved_: \n- Discussions with various leads in EthCC\n#### _next_: \n- Booking calls with said leads\n\n### **Milestone**: Setting Up Content Strategy for Waku\n\n#### _achieved_: \n- Discussions with Comms Hubs re Waku Blog \n- expressed needs and intent around future blog post and needed amplification\n- discuss strategies to onboard/involve non-dev and potential CTAs.\n\n### **Milestone**: Web3Conf (dates)\n### **Milestone**: DeCompute 
conf\n\n---\n\n## Research (WIP)\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- rendezvous hashing \n- weighting function \n- updated LIGHTPUSH to handle autosharding\n\n#### _next:_\n- update FILTER \u0026 STORE for autosharding\n\n---\n\n## nwaku (WIP)\n\n### **Milestone**: Postgres integration.\n#### _achieved:_\n- nwaku can store messages in a Postgres database\n- we started to perform stress tests\n\n#### _next:_\n- Analyse why some messages are not stored during stress tests; this happened with both SQLite and Postgres, so the issue may not be directly related to _store_.\n\n### **Milestone**: nwaku as a library (C-bindings)\n#### _achieved:_\n- The integration is in progress through N-API framework\n\n#### _next:_\n- Make the Node.js integration work properly by running the _nwaku_ node in a separate thread.\n\n---\n\n## go-waku (WIP)\n\n\n---\n\n## js-waku (WIP)\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- spec test for connection manager\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n### **Milestone**: Static Sharding\n#### _next_: \n- start implementation of static sharding in js-waku\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- js-libp2p upgrade to remove usage of polyfills (draft PR)\n\n#### _next_: \n- merge and release js-libp2p upgrade\n\n### **Milestone**: Waku Relay in the Browser\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","content":"\n## Docs\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n#### _next:_ \n- rewrite docs in British English\n### **Milestone**: Running nwaku in the cloud\n#### _next:_ \n- publish guides for Digital Ocean, Oracle, Fly.io\n\n---\n## Eco Dev (WIP)\n\n---\n## Research\n\n### **Milestone**: Detailed network requirements and task breakdown\n#### _achieved:_ \n- gathering rough network requirements\n#### _next:_ \n- detailed task breakdown per milestone and effort allocation\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- update FILTER \u0026 STORE for autosharding\n#### _next:_ \n- RFC review \u0026 updates \n- code review \u0026 updates\n\n---\n## nwaku\n\n### **Milestone**: nwaku release process automation\n#### _next_:\n- setup automation to test/simulate current `master` to prevent/limit regressions\n- expand target architectures and platforms for release artifacts (e.g. arm64, Win...)\n### **Milestone**: HTTP Rest API for protocols\n#### _next:_ \n- Filter API added \n- tests to complete.\n\n---\n## go-waku\n\n### **Milestone**: Increase Maintainability Score. Refer to [CodeClimate report](https://codeclimate.com/github/waku-org/go-waku)\n#### _next:_ \n- define scope on which issues reported by CodeClimate should be fixed. 
Initially it should be limited to reduce code complexity and duplication.\n\n### **Milestone**: RLN updates, refer [issue](https://github.com/waku-org/go-waku/issues/608).\n_achieved_:\n- expose `set_tree`, `key_gen`, `seeded_key_gen`, `extended_seeded_keygen`, `recover_id_secret`, `set_leaf`, `init_tree_with_leaves`, `set_metadata`, `get_metadata` and `get_leaf` \n- created an example on how to use RLN with go-waku\n- service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n#### _next_: \n- merkle tree batch operations (in progress) \n- usage of persisted merkle tree db\n\n### **Milestone**: Improve test coverage for functional tests of all protocols. Refer to [CodeClimate report]\n#### _next_: \n- define scope on which code sections should be covered by tests\n\n### **Milestone**: C-Bindings\n#### _next_: \n- update API to match nwaku's (by using callbacks instead of strings that require freeing)\n\n---\n## js-waku\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n#### _next_: \n- fallback improvement for peer connect rejection\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n#### _next_: \n- more robust support around peer-exchange for examples\n### **Milestone**: Static Sharding\n#### _achieved_: \n- WIP implementation of static sharding in js-waku\n#### _next_: \n- investigation around gauging connection loss;\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- improve \u0026 update @waku/react \n- merge and release js-libp2p upgrade\n\n#### _next:_\n- update examples to latest release + make sure no old/unused packages there\n\n### **Milestone**: Maintenance\n#### _achieved_: \n- update to libp2p@0.46.0\n#### _next_:\n- suite of optional tests in pipeline\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","content":"\nMilestones for current work are created and used. 
Next steps are:\n1) Refine scope of [research work](https://github.com/waku-org/research/issues/3) for rest of the year and create matching milestones for research and waku clients\n2) Review work not coming from research and setting dates\nNote that format matches the Notion page but can be changed easily as it's scripted\n\n\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n- _blocker_: \n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Docker compose with `nwaku` + `postgres` + `prometheus` + `grafana` + `postgres_exporter` https://github.com/alrevuelta/nwaku-compose/pull/3\n- _next_: Carry on with stress testing\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: feedback/update cycles for FILTER \u0026 LIGHTPUSH\n- _next_: New fleet, updating ENR from live subscriptions and merging\n- _blocker_: Architecturally it seams difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\n**[Move Waku v1 and Waku-Bridge to new repos](https://github.com/waku-org/nwaku/issues/1767)** {E:2023-qa}\n\n- _achieved_: Removed v1 and wakubridge code from nwaku repo\n- _next_: Remove references to `v2` from nwaku directory structure and documents\n\n**[nwaku c-bindings](https://github.com/waku-org/nwaku/issues/1332)** {E:2023-many-platforms}\n\n- _achieved_:\n - Moved the Waku execution into a secondary working thread. Essential for NodeJs.\n - Adapted the NodeJs example to use the `libwaku` with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing. \n- _next_: start applying the thread-safety recommendations https://github.com/waku-org/nwaku/issues/1878\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. 
Get v2 filter service supported from node.\n\n---\n## js-waku\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\n- _next_: saving successfully connected PX peers to local storage for easier connections on reload\n\n**[Waku Relay scalability in the Browser](https://github.com/waku-org/js-waku/issues/905)** {NO EPIC}\n\n- _achieved_: draft of direct browser-browser RTC example https://github.com/waku-org/js-waku-examples/pull/260 \n- _next_: improve the example (connection re-usage), work on contentTopic based RTC example\n\n---\n## go-waku\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: updated c-bindings to use callbacks\n- _next_: refactor v1 encoding functions and update RFC\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Enabled -race flag and ran all unit tests to identify data races.\n- _next_: Fix issues reported by the data race detector tool\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\n- _next_: resume onchain sync from persisted tree db\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: Basic peer management to ensure standard in/out ratio for relay peers.\n- _next_: add service slots to peer manager\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: production of swags and marketing collaterals for web3conf completed\n- _next_: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)** {E:2023-eco-growth}\n\n- _next_: create guide on `@waku/react` and debugging js-waku web apps\n\n**[Docs general improvement/incorporating feedback (2023)](https://github.com/waku-org/docs.waku.org/issues/102)** {E:2023-eco-growth}\n\n- _achieved_: rewrote the docs in UK English\n- _next_: update docs terms, announce js-waku docs\n\n**[Foundation of js-waku docs](https://github.com/waku-org/docs.waku.org/issues/101)** {E:2023-eco-growth}\n\n_achieved_: added guide on js-waku bootstrapping\n\n---\n## Research\n\n**[1.1 Network requirements and task breakdown](https://github.com/waku-org/research/issues/6)** {E:2023-1mil-users}\n\n- _achieved_: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\n- _next_: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","content":"\n\n# 2023-08-14 Waku weekly\n---\n## Epics\n\n**[Waku Network Can Support 10K Users](https://github.com/waku-org/pm/issues/12)** {E:2023-10k-users}\n\nAll software has been delivered. 
Pending items are:\n- Running stress testing on PostgreSQL to confirm performance gain https://github.com/waku-org/nwaku/issues/1894\n- Setting up a staging fleet for Status to try static sharding\n- Running simulations for Store protocol: [Will confirm with Vac/DST on dates/commitment](https://github.com/vacp2p/research/issues/191#issuecomment-1672542165) and probably move this to [1mil epic](https://github.com/waku-org/pm/issues/31)\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\n- _next_: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\n- _blocker_: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)**\n\n- _next_: document notes/recommendations for NodeJS, begin docs on `js-waku` encryption\n\n---\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: minor CI fixes and improvements\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Learned that the insertion rate is constrained by the `relay` protocol. i.e. the maximum insert rate is limited by `relay` so I couldn't push the \"insert\" operation to a limit from a _Postgres_ point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the _relay_ protocol doesn't process all of them.\n- _next_: Carry on with stress testing. Analyze the performance differences between _Postgres_ and _SQLite_ regarding the _read_ operations.\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: many feedback/update cycles for FILTER, LIGHTPUSH, STORE \u0026 RFC\n- _next_: updating ENR for live subscriptions\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. 
Add more tests.\n\n---\n## js-waku\n\n**[Maintenance](https://github.com/waku-org/js-waku/issues/1455)** {E:2023-qa}\n\n- achieved: upgrade libp2p \u0026 chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict \n\n**[Developer Experience (2023)](https://github.com/waku-org/js-waku/issues/1453)** {E:2023-eco-growth}\n\n- _achieved_: non blocking pipeline step (https://github.com/waku-org/js-waku/issues/1411)\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: close the \"fallback mechanism for peer rejections\", refactor peer-exchange compliance test\n- _next_: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n---\n## go-waku\n\n**[Maintenance](https://github.com/waku-org/go-waku/issues/634)** {E:2023-qa}\n\n- _achieved_: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Fixed issues reported by the data race detector tool.\n- _next_: identify areas where test coverage needs improvement.\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\n- _next_: interop with nwaku\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: add service slots to peer manager.\n- _next_: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]}} \ No newline at end of file diff --git a/indices/contentIndex.815a9f600820229dfebe874a95dab3f8.min.json b/indices/contentIndex.815a9f600820229dfebe874a95dab3f8.min.json deleted file mode 100644 index 03a341d85..000000000 --- a/indices/contentIndex.815a9f600820229dfebe874a95dab3f8.min.json +++ /dev/null @@ -1 +0,0 @@ -{"/":{"title":"Logos Technical Roadmap and Activity","content":"This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. 
To learn more about the motivation, please visit the [Logos Collective Site](https://logos.co).\n\n## Navigation\n\n### Waku\n- [Milestones](roadmap/waku/milestones-overview.md)\n- [weekly updates](tags/waku-updates)\n\n### Codex\n- [Milestones](roadmap/codex/milestones-overview.md)\n- [weekly updates](tags/codex-updates)\n\n### Nomos\n- [Milestones](roadmap/nomos/milestones-overview.md)\n- [weekly updates](tags/nomos-updates)\n\n### Vac\n- [Milestones](roadmap/vac/milestones-overview.md)\n- [weekly updates](tags/vac-updates)\n\n### Innovation Lab\n- [Milestones](roadmap/innovation_lab/milestones-overview.md)\n- [weekly updates](tags/ilab-updates)\n### Comms (Acid Info)\n- [Milestones](roadmap/acid/milestones-overview.md)\n- [weekly updates](tags/acid-updates)\n","lastmodified":"2023-08-17T20:36:02.487556006Z","tags":[]},"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":{"title":"CJK + Latex Support (测试)","content":"\n## Chinese, Japanese, Korean Support\n几乎在我们意识到之前,我们已经离开了地面。\n\n우리가 그것을 알기도 전에 우리는 땅을 떠났습니다.\n\n私たちがそれを知るほぼ前に、私たちは地面を離れていました。\n\n## Latex\n\nBlock math works with two dollar signs `$$...$$`\n\n$$f(x) = \\int_{-\\infty}^\\infty\n f\\hat(\\xi),e^{2 \\pi i \\xi x}\n \\,d\\xi$$\n\t\nInline math also works with single dollar signs `$...$`. For example, Euler's identity but inline: $e^{i\\pi} = 0$\n\nAligned equations work quite well:\n\n$$\n\\begin{aligned}\na \u0026= b + c \\\\ \u0026= e + f \\\\\n\\end{aligned}\n$$\n\nAnd matrices\n\n$$\n\\begin{bmatrix}\n1 \u0026 2 \u0026 3 \\\\\na \u0026 b \u0026 c\n\\end{bmatrix}\n$$\n\n## RTL\nMore information on configuring RTL languages like Arabic in the [config](config.md) page.\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/callouts":{"title":"Callouts","content":"\n## Callout support\n\nQuartz supports the same Admonition-callout syntax as Obsidian.\n\nThis includes\n- 12 Distinct callout types (each with several aliases)\n- Collapsable callouts\n\nSee [documentation on supported types and syntax here](https://help.obsidian.md/How+to/Use+callouts#Types).\n\n## Showcase\n\n\u003e [!EXAMPLE] Examples\n\u003e\n\u003e Aliases: example\n\n\u003e [!note] Notes\n\u003e\n\u003e Aliases: note\n\n\u003e [!abstract] Summaries \n\u003e\n\u003e Aliases: abstract, summary, tldr\n\n\u003e [!info] Info \n\u003e\n\u003e Aliases: info, todo\n\n\u003e [!tip] Hint \n\u003e\n\u003e Aliases: tip, hint, important\n\n\u003e [!success] Success \n\u003e\n\u003e Aliases: success, check, done\n\n\u003e [!question] Question \n\u003e\n\u003e Aliases: question, help, faq\n\n\u003e [!warning] Warning \n\u003e\n\u003e Aliases: warning, caution, attention\n\n\u003e [!failure] Failure \n\u003e\n\u003e Aliases: failure, fail, missing\n\n\u003e [!danger] Error\n\u003e\n\u003e Aliases: danger, error\n\n\u003e [!bug] Bug\n\u003e\n\u003e Aliases: bug\n\n\u003e [!quote] Quote\n\u003e\n\u003e Aliases: quote, cite\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/config":{"title":"Configuration","content":"\n## Configuration\nQuartz is designed to be extremely configurable. You can find the bulk of the configuration scattered throughout the repository depending on how in-depth you'd like to get.\n\nThe majority of configuration can be found under `data/config.yaml`. 
An annotated example configuration is shown below.\n\n```yaml {title=\"data/config.yaml\"}\n# The name to display in the footer\nname: Jacky Zhao\n\n# whether to globally show the table of contents on each page\n# this can be turned off on a per-page basis by adding this to the\n# front-matter of that note\nenableToc: true\n\n# whether to by-default open or close the table of contents on each page\nopenToc: false\n\n# whether to display on-hover link preview cards\nenableLinkPreview: true\n\n# whether to render titles for code blocks\nenableCodeBlockTitle: true \n\n# whether to render copy buttons for code blocks\nenableCodeBlockCopy: true \n\n# whether to render callouts\nenableCallouts: true\n\n# whether to try to process Latex\nenableLatex: true\n\n# whether to enable single-page-app style rendering\n# this prevents flashes of unstyled content and improves\n# smoothness of Quartz. More info in issue #109 on GitHub\nenableSPA: true\n\n# whether to render a footer\nenableFooter: true\n\n# whether backlinks of pages should show the context in which\n# they were mentioned\nenableContextualBacklinks: true\n\n# whether to show a section of recent notes on the home page\nenableRecentNotes: false\n\n# whether to display an 'edit' button next to the last edited field\n# that links to github\nenableGitHubEdit: true\nGitHubLink: https://github.com/jackyzha0/quartz/tree/hugo/content\n\n# whether to use Operand to power semantic search\n# IMPORTANT: replace this API key with your own if you plan on using\n# Operand search!\nenableSemanticSearch: false\noperandApiKey: \"REPLACE-WITH-YOUR-OPERAND-API-KEY\"\n\n# page description used for SEO\ndescription:\n Host your second brain and digital garden for free. Quartz features extremely fast full-text search,\n Wikilink support, backlinks, local graph, tags, and link previews.\n\n# title of the home page (also for SEO)\npage_title:\n \"🪴 Quartz 3.2\"\n\n# links to show in the footer\nlinks:\n - link_name: Twitter\n link: https://twitter.com/_jzhao\n - link_name: Github\n link: https://github.com/jackyzha0\n```\n\n### Code Block Titles\nTo add code block titles with Quartz:\n\n1. Ensure that code block titles are enabled in Quartz's configuration:\n\n ```yaml {title=\"data/config.yaml\", linenos=false}\n enableCodeBlockTitle: true\n ```\n\n2. Add the `title` attribute to the desired [code block\n fence](https://gohugo.io/content-management/syntax-highlighting/#highlighting-in-code-fences):\n\n ```markdown {linenos=false}\n ```yaml {title=\"data/config.yaml\"}\n enableCodeBlockTitle: true # example from step 1\n ```\n ```\n\n**Note** that if `{title=\u003cmy-title\u003e}` is included, and code block titles are not\nenabled, no errors will occur, and the title attribute will be ignored.\n\n### HTML Favicons\nIf you would like to customize the favicons of your Quartz-based website, you \ncan add them to the `data/config.yaml` file. The **default** without any set \n`favicon` key is:\n\n```html {title=\"layouts/partials/head.html\", linenostart=15}\n\u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n```\n\nThe default can be overridden by defining a value to the `favicon` key in your \n`data/config.yaml` file. For example, here is a `List[Dictionary]` example format, which is\nequivalent to the default:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon:\n - { rel: \"shortcut icon\", href: \"icon.png\", type: \"image/png\" }\n# - { ... 
} # Repeat for each additional favicon you want to add\n```\n\nIn this format, the keys are identical to their HTML representations.\n\nIf you plan to add multiple favicons generated by a website (see list below), it\nmay be easier to define it as HTML. Here is an example which appends the \n**Apple touch icon** to Quartz's default favicon:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon: |\n \u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n \u003clink rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\"\u003e\n```\n\nThis second favicon will now be used as a web page icon when someone adds your \nwebpage to the home screen of their Apple device. If you are interested in more \ninformation about the current and past standards of favicons, you can read \n[this article](https://www.emergeinteractive.com/insights/detail/the-essentials-of-favicons/).\n\n**Note** that all generated favicon paths, defined by the `href` \nattribute, are relative to the `static/` directory.\n\n### Graph View\nTo customize the Interactive Graph view, you can poke around `data/graphConfig.yaml`.\n\n```yaml {title=\"data/graphConfig.yaml\"}\n# if true, a Global Graph will be shown on home page with full width, no backlink.\n# A different set of Local Graphs will be shown on sub pages.\n# if false, Local Graph will be default on every page as usual\nenableGlobalGraph: false\n\n### Local Graph ###\nlocalGraph:\n # whether automatically generate a legend\n enableLegend: false\n \n # whether to allow dragging nodes in the graph\n enableDrag: true\n \n # whether to allow zooming and panning the graph\n enableZoom: true\n \n # how many neighbours of the current node to show (-1 is all nodes)\n depth: 1\n \n # initial zoom factor of the graph\n scale: 1.2\n \n # how strongly nodes should repel each other\n repelForce: 2\n\n # how strongly should nodes be attracted to the center of gravity\n centerForce: 1\n\n # what the default link length should be\n linkDistance: 1\n \n # how big the node labels should be\n fontSize: 0.6\n \n # scale at which to start fading the labes on nodes\n opacityScale: 3\n\n### Global Graph ###\nglobalGraph:\n\t# same settings as above\n\n### For all graphs ###\n# colour specific nodes path off of their path\npaths:\n - /moc: \"#4388cc\"\n```\n\n\n## Styling\nWant to go even more in-depth? You can add custom CSS styling and change existing colours through editing `assets/styles/custom.scss`. If you'd like to target specific parts of the site, you can add ids and classes to the HTML partials in `/layouts/partials`. \n\n### Partials\nPartials are what dictate what gets rendered to the page. Want to change how pages are styled and structured? You can edit the appropriate layout in `/layouts`.\n\nFor example, the structure of the home page can be edited through `/layouts/index.html`. To customize the footer, you can edit `/layouts/partials/footer.html`\n\nMore info about partials on [Hugo's website.](https://gohugo.io/templates/partials/)\n\nStill having problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n\n## Language Support\n[CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) comes out of the box with Quartz.\n\nWant to support languages that read from right-to-left (like Arabic)? 
Hugo (and by proxy, Quartz) supports this natively.\n\nFollow the steps [Hugo provides here](https://gohugo.io/content-management/multilingual/#configure-languages) and modify your `config.toml`\n\nFor example:\n\n```toml\ndefaultContentLanguage = 'ar'\n[languages]\n [languages.ar]\n languagedirection = 'rtl'\n title = 'مدونتي'\n weight = 1\n```\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/custom-Domain":{"title":"Custom Domain","content":"\n### Registrar\nThis step is only applicable if you are using a **custom domain**! If you are using a `\u003cYOUR-USERNAME\u003e.github.io` domain, you can skip this step.\n\nFor this last bit to take effect, you also need to create a CNAME record with the DNS provider you register your domain with (i.e. NameCheap, Google Domains).\n\nGitHub has some [documentation on this](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site), but the tldr; is to\n\n1. Go to your forked repository (`github.com/\u003cYOUR-GITHUB-USERNAME\u003e/quartz`) settings page and go to the Pages tab. Under \"Custom domain\", type your custom domain, then click **Save**.\n2. Go to your DNS Provider and create a CNAME record that points from your domain to `\u003cYOUR-GITHUB-USERNAME.github.io.` (yes, with the trailing period).\n\n\t![Example Configuration for Quartz](google-domains.png)*Example Configuration for Quartz*\n3. Wait 30 minutes to an hour for the network changes to kick in.\n4. Done!","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/editing":{"title":"Editing Content in Quartz","content":"\n## Editing \nQuartz runs on top of [Hugo](https://gohugo.io/) so all notes are written in [Markdown](https://www.markdownguide.org/getting-started/).\n\n### Folder Structure\nHere's a rough overview of what's what.\n\n**All content in your garden can found in the `/content` folder.** To make edits, you can open any of the files and make changes directly and save it. You can organize content into any folder you'd like.\n\n**To edit the main home page, open `/content/_index.md`.**\n\nTo create a link between notes in your garden, just create a normal link using Markdown pointing to the document in question. Please note that **all links should be relative to the root `/content` path**. \n\n```markdown\nFor example, I want to link this current document to `notes/config.md`.\n[A link to the config page](notes/config.md)\n```\n\nSimilarly, you can put local images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\nYou can also use wikilinks if that is what you are more comfortable with!\n\n### Front Matter\nHugo is picky when it comes to metadata for files. Make sure that your title is double-quoted and that you have a title defined at the top of your file like so. You can also add tags here as well.\n\n```yaml\n---\ntitle: \"Example Title\"\ntags:\n- example-tag\n---\n\nRest of your content here...\n```\n\n### Obsidian\nI recommend using [Obsidian](http://obsidian.md/) as a way to edit and grow your digital garden. 
It comes with a really nice editor and graphical interface to preview all of your local files.\n\nThis step is **highly recommended**.\n\n\u003e 🔗 Step 3: [How to setup your Obsidian Vault to work with Quartz](obsidian.md)\n\n## Previewing Changes\nThis step is purely optional and mostly for those who want to see the published version of their digital garden locally before opening it up to the internet. This is *highly recommended* but not required.\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)\n\nFor those who like to live life more on the edge, viewing the garden through Obsidian gets you pretty close to the real thing.\n\n## Publishing Changes\nNow that you know the basics of managing your digital garden using Quartz, you can publish it to the internet!\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/hosting":{"title":"Deploying Quartz to the Web","content":"\n## Hosting on GitHub Pages\nQuartz is designed to be effortless to deploy. If you forked and cloned Quartz directly from the repository, everything should already be good to go! Follow the steps below.\n\n### Enable GitHub Actions\nBy default, GitHub disables workflows from running automatically on Forked Repostories. Head to the 'Actions' tab of your forked repository and Enable Workflows to setup deploying your Quartz site!\n\n![Enable GitHub Actions](github-actions.png)*Enable GitHub Actions*\n\n### Enable GitHub Pages\n\nHead to the 'Settings' tab of your forked repository and go to the 'Pages' tab.\n\n1. (IMPORTANT) Set the source to deploy from `master` (and not `hugo`) using `/ (root)`\n2. Set a custom domain here if you have one!\n\n![Enable GitHub Pages](github-pages.png)*Enable GitHub Pages*\n\n### Pushing Changes\nTo see your changes on the internet, we need to push it them to GitHub. Quartz is a `git` repository so updating it is the same workflow as you would follow as if it were just a regular software project.\n\n```shell\n# Navigate to Quartz folder\ncd \u003cpath-to-quartz\u003e\n\n# Commit all changes\ngit add .\ngit commit -m \"message describing changes\"\n\n# Push to GitHub to update site\ngit push origin hugo\n```\n\nNote: we specifically push to the `hugo` branch here. Our GitHub action automatically runs everytime a push to is detected to that branch and then updates the `master` branch for redeployment.\n\n### Setting up the Site\nNow let's get this site up and running. Never hosted a site before? No problem. Have a fancy custom domain you already own or want to subdomain your Quartz? That's easy too.\n\nHere, we take advantage of GitHub's free page hosting to deploy our site. Change `baseURL` in `/config.toml`. \n\nMake sure that your `baseURL` has a trailing `/`!\n\n[Reference `config.toml` here](https://github.com/jackyzha0/quartz/blob/hugo/config.toml)\n\n```toml\nbaseURL = \"https://\u003cYOUR-DOMAIN\u003e/\"\n```\n\nIf you are using this under a subdomain (e.g. `\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz`), include the trailing `/`. **You need to do this especially if you are using GitHub!**\n\n```toml\nbaseURL = \"https://\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz/\"\n```\n\nChange `cname` in `/.github/workflows/deploy.yaml`. 
Again, if you don't have a custom domain to use, you can use `\u003cYOUR-USERNAME\u003e.github.io`.\n\nPlease note that the `cname` field should *not* have any path `e.g. end with /quartz` or have a trailing `/`.\n\n[Reference `deploy.yaml` here](https://github.com/jackyzha0/quartz/blob/hugo/.github/workflows/deploy.yaml)\n\n```yaml {title=\".github/workflows/deploy.yaml\"}\n- name: Deploy \n uses: peaceiris/actions-gh-pages@v3 \n with: \n\tgithub_token: ${{ secrets.GITHUB_TOKEN }} # this can stay as is, GitHub fills this in for us!\n\tpublish_dir: ./public \n\tpublish_branch: master\n\tcname: \u003cYOUR-DOMAIN\u003e\n```\n\nHave a custom domain? [Learn how to set it up with Quartz ](custom%20Domain.md).\n\n### Ignoring Files\nOnly want to publish a subset of all of your notes? Don't worry, Quartz makes this a simple two-step process.\n\n❌ [Excluding pages from being published](ignore%20notes.md)\n\n---\n\nNow that your Quartz is live, let's figure out how to make Quartz really *yours*!\n\n\u003e Step 6: 🎨 [Customizing Quartz](config.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/ignore-notes":{"title":"Ignoring Notes","content":"\n### Quartz Ignore\nEdit `ignoreFiles` in `config.toml` to include paths you'd like to exclude from being rendered.\n\n```toml\n...\nignoreFiles = [ \n \"/content/templates/*\", \n \"/content/private/*\", \n \"\u003cyour path here\u003e\"\n]\n```\n\n`ignoreFiles` supports the use of Regular Expressions (RegEx) so you can ignore patterns as well (e.g. ignoring all `.png`s by doing `\\\\.png$`).\nTo ignore a specific file, you can also add the tag `draft: true` to the frontmatter of a note.\n\n```markdown\n---\ntitle: Some Private Note\ndraft: true\n---\n...\n```\n\nMore details in [Hugo's documentation](https://gohugo.io/getting-started/configuration/#ignore-content-and-data-files-when-rendering).\n\n### Global Ignore\nHowever, just adding to the `ignoreFiles` will only prevent the page from being access through Quartz. If you want to prevent the file from being pushed to GitHub (for example if you have a public repository), you need to also add the path to the `.gitignore` file at the root of the repository.","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/obsidian":{"title":"Obsidian Vault Integration","content":"\n## Setup\nObsidian is the preferred way to use Quartz. You can either create a new Obsidian Vault or link one that your already have.\n\n### New Vault\nIf you don't have an existing Vault, [download Obsidian](https://obsidian.md/) and create a new Vault in the `/content` folder that you created and cloned during the [setup](setup.md) step.\n\n### Linking an existing Vault\nThe easiest way to use an existing Vault is to copy all of your files (directory and hierarchies intact) into the `/content` folder.\n\n## Settings\nGreat, now that you have your Obsidian linked to your Quartz, let's fix some settings so that they play well.\n\n1. Under Options \u003e Files and Links, set the New link format to always use Absolute Path in Vault.\n2. Go to Settings \u003e Files \u0026 Links \u003e Turn \"on\" automatically update internal links.\n\n![Obsidian Settings](obsidian-settings.png)*Obsidian Settings*\n\n## Templates\nInserting front matter everytime you want to create a new Note gets annoying really quickly. 
Luckily, Obsidian supports templates which makes inserting new content really easily.\n\n**If you decide to overwrite the `/content` folder completely, don't remove the `/content/templates` folder!**\n\nHead over to Options \u003e Core Plugins and enable the Templates plugin. Then go to Options \u003e Hotkeys and set a hotkey for 'Insert Template' (I recommend `[cmd]+T`). That way, when you create a new note, you can just press the hotkey for a new template and be ready to go!\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/philosophy":{"title":"Quartz Philosophy","content":"\n\u003e “[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” — Richard Hamming\n\n## Why Quartz?\nHosting a public digital garden isn't easy. There are an overwhelming number of tutorials, resources, and guides for tools like [Notion](https://www.notion.so/), [Roam](https://roamresearch.com/), and [Obsidian](https://obsidian.md/), yet none of them have super easy to use *free* tools to publish that garden to the world.\n\nI've personally found that\n1. It's nice to access notes from anywhere\n2. Having a public digital garden invites open conversations\n3. It makes keeping personal notes and knowledge *playful and fun*\n\nI was really inspired by [Bianca](https://garden.bianca.digital/) and [Joel](https://joelhooks.com/digital-garden)'s digital gardens and wanted to try making my own.\n\n**The goal of Quartz is to make hosting your own public digital garden free and simple.** You don't even need your own website. Quartz does all of that for you and gives your own little corner of the internet.\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/preview-changes":{"title":"Preview Changes","content":"\nIf you'd like to preview what your Quartz site looks like before deploying it to the internet, here's exactly how to do that!\n\nNote that both of these steps need to be completed.\n\n## Install `hugo-obsidian`\nThis step will generate the list of backlinks for Hugo to parse. Ensure you have [Go](https://golang.org/doc/install) (\u003e= 1.16) installed.\n\n```bash\n# Install and link `hugo-obsidian` locally\ngo install github.com/jackyzha0/hugo-obsidian@latest\n```\n\nIf you are running into an error saying that `command not found: hugo-obsidian`, make sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize hugo-obsidian as an executable.\n\nAfterwards, start the Hugo server as shown above and your local backlinks and interactive graph should be populated!\n\n## Installing Hugo\nHugo is the static site generator that powers Quartz. [Install Hugo with \"extended\" Sass/SCSS version](https://gohugo.io/getting-started/installing/) first. Then,\n\n```bash\n# Navigate to your local Quartz folder\ncd \u003clocation-of-your-local-quartz\u003e\n\n# Start local server\nmake serve\n\n# View your site in a browser at http://localhost:1313/\n```\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/search":{"title":"Search","content":"\nQuartz supports two modes of searching through content.\n\n## Full-text\nFull-text search is the default in Quartz. It produces results that *exactly* match the search query. 
This is easier to setup but usually produces lower quality matches.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: false\n```\n\n## Natural Language\nNatural language search is powered by [Operand](https://operand.ai/). It understands language like a person does and finds results that best match user intent. In this sense, it is closer to how Google Search works.\n\nNatural language search tends to produce higher quality results than full-text search.\n\nHere's how to set it up.\n\n1. Create an Operand Account on [their website](https://operand.ai/).\n2. Go to Dashboard \u003e Settings \u003e Integrations.\n3. Follow the steps to setup the GitHub integration. Operand needs access to GitHub in order to index your digital garden properly!\n4. Head over to Dashboard \u003e Objects and press `(Cmd + K)` to open the omnibar and select 'Create Collection'.\n\t1. Set the 'Collection Label' to something that will help you remember it.\n\t2. You can leave the 'Parent Collection' field empty.\n5. Click into your newly made Collection.\n\t1. Press the 'share' button that looks like three dots connected by lines.\n\t2. Set the 'Interface Type' to `object-search` and click 'Create'.\n\t3. This will bring you to a new page with a search bar. Ignore this for now.\n6. Go back to Dashboard \u003e Settings \u003e API Keys and find your Quartz-specific Operand API key under 'Other keys'.\n\t1. Copy the key (which looks something like `0e733a7f-9b9c-48c6-9691-b54fa1c8b910`).\n\t2. Open `data/config.yaml`. Set `enableSemanticSearch` to `true` and `operandApiKey` to your copied key.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: true\noperandApiKey: \"0e733a7f-9b9c-48c6-9691-b54fa1c8b910\"\n```\n7. Make a commit and push your changes to GitHub. See the [[hosting|hosting]] page if you haven't done this already.\n\t1. This step is *required* for Operand to be able to properly index your content. \n\t2. Head over to Dashboard \u003e Objects and select the collection that you made earlier\n8. Press `(Cmd + K)` to open the omnibar again and select 'Create GitHub Repo'\n\t1. Set the 'Repository Label' to `Quartz`\n\t2. Set the 'Repository Owner' to your GitHub username\n\t3. Set the 'Repository Ref' to `master`\n\t4. Set the 'Repository Name' to the name of your repository (usually just `quartz` if you forked the repository without changing the name)\n\t5. Leave 'Root Path' and 'Root URL' empty\n9. Wait for your repository to index and enjoy natural language search in Quartz! Operand refreshes the index every 2h so all you need to do is just push to GitHub to update the contents in the search.","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/setup":{"title":"Setup","content":"\n## Making your own Quartz\nSetting up Quartz requires a basic understanding of `git`. If you are unfamiliar, [this resource](https://resources.nwplus.io/2-beginner/how-to-git-github.html) is a great place to start!\n\n### Forking\n\u003e A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.\n\nNavigate to the GitHub repository for the Quartz project:\n\n📁 [Quartz Repository](https://github.com/jackyzha0/quartz)\n\nThen, Fork the repository into your own GitHub account. If you don't have an account, you can make on for free [here](https://github.com/join). 
More details about forking a repo can be found on [GitHub's documentation](https://docs.github.com/en/get-started/quickstart/fork-a-repo).\n\n### Cloning\nAfter you've made a fork of the repository, you need to download the files locally onto your machine. Ensure you have `git`, then type the following command replacing `YOUR-USERNAME` with your GitHub username.\n\n```shell\ngit clone https://github.com/YOUR-USERNAME/quartz\n```\n\n## Editing\nGreat! Now you have everything you need to start editing and growing your digital garden. If you're ready to start writing content already, check out the recommended flow for editing notes in Quartz.\n\n\u003e ✏️ Step 2: [Editing Notes in Quartz](editing.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/showcase":{"title":"Showcase","content":"\nWant to see what Quartz can do? Here are some cool community gardens :)\n\n- [Quartz Documentation (this site!)](https://quartz.jzhao.xyz/)\n- [Jacky Zhao's Garden](https://jzhao.xyz/)\n- [Scaling Synthesis - A hypertext research notebook](https://scalingsynthesis.com/)\n- [AWAGMI Intern Notes](https://notes.awagmi.xyz/)\n- [Shihyu's PKM](https://shihyuho.github.io/pkm/)\n- [Chloe's Garden](https://garden.chloeabrasada.online/)\n- [SlRvb's Site](https://slrvb.github.io/Site/)\n- [Course notes for Information Technology Advanced Theory](https://a2itnotes.github.io/quartz/)\n- [Brandon Boswell's Garden](https://brandonkboswell.com)\n- [Siyang's Courtyard](https://siyangsun.github.io/courtyard/)\n\nIf you want to see your own on here, submit a [Pull Request adding yourself to this file](https://github.com/jackyzha0/quartz/blob/hugo/content/notes/showcase.md)!\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/troubleshooting":{"title":"Troubleshooting and FAQ","content":"\nStill having trouble? Here are a list of common questions and problems people encounter when installing Quartz.\n\nWhile you're here, join our [Discord](https://discord.gg/cRFFHYye7t) :)\n\n### Does Quartz have Latex support?\nYes! See [CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) for a brief demo.\n\n### Can I use \\\u003cObsidian Plugin\\\u003e in Quartz?\nUnless it produces direct Markdown output in the file, no. There currently is no way to bundle plugin code with Quartz.\n\nThe easiest way would be to add your own HTML partial that supports the functionality you are looking for.\n\n### My GitHub pages is just showing the README and not Quartz\nMake sure you set the source to deploy from `master` (and not `hugo`) using `/ (root)`! See more in the [hosting](hosting.md) guide\n\n### Some of my pages have 'January 1, 0001' as the last modified date\nThis is a problem caused by `git` treating files as case-insensitive by default and some of your posts probably have capitalized file names. You can turn this off in your Quartz by running this command.\n\n```shell\n# in the root of your Quartz (same folder as config.toml)\ngit config core.ignorecase true\n\n# or globally (not recommended)\ngit config --global core.ignorecase true\n```\n\n### Can I publish only a subset of my pages?\nYes! Quartz makes selective publishing really easy. Heres a guide on [excluding pages from being published](ignore%20notes.md).\n\n### Can I host this myself and not on GitHub Pages?\nYes! All built files can be found under `/public` in the `master` branch. 
More details under [hosting](hosting.md).\n\n### `command not found: hugo-obsidian`\nMake sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize `hugo-obsidian` as an executable.\n\n```shell\n# Add the following 2 lines to your ~/.bash_profile\nexport GOPATH=/Users/$USER/go\nexport PATH=$GOPATH/bin:$PATH\n\n# In your current terminal, to reload the session\nsource ~/.bash_profile\n```\n\n### How come my notes aren't being rendered?\nYou probably forgot to include front matter in your Markdown files. You can either setup [Obsidian](obsidian.md) to do this for you or you need to manually define it. More details in [the 'how to edit' guide](editing.md).\n\n### My custom domain isn't working!\nWalk through the steps in [the hosting guide](hosting.md) again. Make sure you wait 30 min to 1 hour for changes to take effect.\n\n### How do I setup Google Analytics?\nYou can edit it in `config.toml` and either use a V3 (UA-) or V4 (G-) tag.\n\n### How do I change the content on the home page?\nTo edit the main home page, open `/content/_index.md`.\n\n### How do I change the colours?\nYou can change the theme by editing `assets/custom.scss`. More details on customization and themeing can be found in the [customization guide](config.md).\n\n### How do I add images?\nYou can put images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\n### My Interactive Graph and Backlinks aren't up to date\nBy default, the `linkIndex.json` (which Quartz needs to generate the Interactive Graph and Backlinks) are not regenerated locally. To set that up, see the guide on [local editing](editing.md)\n\n### Can I use React/Vue/some other framework?\nNot out of the box. You could probably make it work by editing `/layouts/_default/single.html` but that's not what Quartz is designed to work with. 99% of things you are trying to do with those frameworks you can accomplish perfectly fine using just vanilla HTML/CSS/JS.\n\n## Still Stuck?\nQuartz isn't perfect! If you're still having troubles, file an issue in the GitHub repo with as much information as you can reasonably provide. Alternatively, you can message me on [Twitter](https://twitter.com/_jzhao) and I'll try to get back to you as soon as I can.\n\n🐛 [Submit an Issue](https://github.com/jackyzha0/quartz/issues)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/updating":{"title":"Updating","content":"\nHaven't updated Quartz in a while and want all the cool new optimizations? On Unix/Mac systems you can run the following command for a one-line update! This command will show you a log summary of all commits since you last updated, press `q` to acknowledge this. Then, it will show you each change in turn and press `y` to accept the patch or `n` to reject it. Usually you should press `y` for most of these unless it conflicts with existing changes you've made! 
\n\n```shell\nmake update\n```\n\nOr, if you don't want the interactive parts and just want to force update your local garden (this assumes that you are okay with some of your personalizations being overridden!)\n\n```shell\nmake update-force\n```\n\nOr, manually check out the changes yourself.\n\n\u003e [!warning] Warning!\n\u003e\n\u003e If you customized the files in `data/`, or anything inside `layouts/`, your customization may be overwritten!\n\u003e Make sure you have a copy of these changes if you don't want to lose them.\n\n\n```shell\n# add Quartz as a remote host\ngit remote add upstream git@github.com:jackyzha0/quartz.git\n\n# index and fetch changes\ngit fetch upstream\ngit checkout -p upstream/hugo -- layouts .github Makefile assets/js assets/styles/base.scss assets/styles/darkmode.scss config.toml data \n```\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/requirements/overview":{"title":"Logos Network Requirements Overview","content":"\nThis document describes the requirements of the Logos Network.\n\n\u003e Network sovereignty is an extension of the collective sovereignty of the individuals within. \n\n\u003e Meaningful participation in the network should be achievable with affordable and accessible consumer-grade hardware.\n\n\u003e Privacy by default. \n\n\u003e A given CiC should have the option to gracefully exit the network and operate on its own.\n\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["requirements"]},"/private/roadmap/consensus/candidates/carnot/FAQ":{"title":"Frequently Asked Questions","content":"\n## Network Requirements and Assumptions\n\n### What assumptions do we need Waku to fulfill? - Corey\n\u003e `Moh:` Waku needs to fulfill the following requirements, taken from the Carnot paper:\n\n\u003e **Definition 3** (Probabilistic Reliable Dissemination). _After the GST, and when the leader is correct, all the correct nodes deliver the proposal sent by the leader (w.h.p)._\n\n\u003e **Definition 4** (Probabilistic Fulfillment). _After the GST, and when the current and previous leaders are correct, the number of votes collected by the current leader is $2c+1$ (w.h.p)._\n\n## Tradeoffs\n\n### I think the main clear disadvantage of such a scheme is the added latency of the multiple layers. - Alvaro\n\n\u003e `Moh:` The added latency will be O(log(n/C)), where C is the committee size. But I guess it will be hard to avoid it. Though it also depends on how fast the network layer (potentially Waku) propagates messages, and also on the execution time of the transaction.\n\n\u003e `Alvaro:` Well IIUC the only latency we are introducing is directly proportional to the levels of subcommittee nesting (i.e. the log(n/C)), which is understandably the price to pay. We have to make sure though that what we gain by introducing this is really worth the extra cost vs the typical committee formation via randao or perhaps VDFs.\n\n\u003e `Moh:` Again, the typical committee formation with randao can reduce their wait time value to match our latency, but then it becomes vulnerable and fails if the network latency becomes greater than their slot interval. If they keep it too large it may not fail but becomes slow. We won't have that problem. If an adversary has the power to slow down the network then their liveness will fail, whereas we won't have that issue.\n\n## How would you compare Aptos and Carnot? - Alvaro\n\n\u003e `Moh:` Aptos is a variant of DiemBFT, and Sui is based on Narwhal; both cannot scale to more than a few hundred nodes. 
That is why they achieve that low latency.\n\n\u003e `Alvaro:` Yes, so they need to select a committee of that size in order to operate at that latency What's wrong with selecting a committee vs Carnot's solution? This I'm asking genuinely to understand and because everyone will ask this question when we release.\n\n\u003e `Moh:` When you select a committee you have to wait for a time slot to make sure the result of consensus has propagated. Again strong synchrony assumption (slot time), formation of forks, increase in PoS attack vector come into play\nWithin committee the protocol does not need a wait time but for its results to get propagated if scalability is to be achieved, then wait time has to be added or signatures have to be collected from thousands of nodes.\n\n\u003e `Alvaro:` Can you elaborate?\n\n\u003e `Moh:` Ethereum (and any other protocol who runs the consensus in a single committee selected from a large group on nodes) has wait time so that the output of the consenus propagates to all honest nodes before the next committee is selected. Else the next committee will fail or only forks will be formed and chain length won't increase. But since this wait time as stated, increases latency, makes the protocol vulnerable, Ethereum wants to avoid it to achieve responsivess. To avoid wait time (add responsiveness) a protocol has to collect attestation signatures from 2/3rd of all nodes (not a single committee) to move to the second round (Carnot is already responsive). But aggregating and verifying signatures thousands of signatures is expensive and time consuming. This is why they are working to improve BLS signatures. Instead we have changed the consensus protocol in such a way that a small number of signatures need to be aggregated and verified to achieve responsiveness and fast finality. We can further improve performance by using the improved BLS signatures.\n\n\u003e One cannot achieve fast finality while running the consensus in a small committee. Because attestation of a Block within the single committee is not enough. This block can be averted if the leader of the next committee has not seen it. Therefore, there should be enough delay so that all honest nodes can see it. This is why we have this wait/slot time. Another issue can be a malicious leader from the next chosen committee can also avert a block of honest leader and hence preventing honest leaders from getting rewards. If blocks of honest leaders are averted for long time, stake of malicious leaders will increase. Moreover, malicious leaders can delay blocks of honest nodes by making fork and averting them. Addressing these issues will further make the protocol complex, while still laking fast finality.\n\n## Data Distribution\n\n### How much failure rate of erasure code transmission are we expecting. Basically, what are the EC coding parameters that we expect to be sending such that we have some failure rate of transmission? Has that been looked into? - Dmitriy\n\u003e `Moh:` This is a great question and it points to the tension between the failure rate vs overhead. We have briefly looked into this (today me and Marcin @madxor discussed such cases), but we haven’t thoroughly analyzed this. In our case, the rate of failure also depends on committee size. We look into $10^{-3}$ to $10^{-6}$ probability of failure. And in this case, the coding overhead can be somewhere between 200%-500% approximately. 
This means for a committee size of 500 (while expecting receipt of messages from 251 correct nodes), for a failure rate of $10^{-6}$ a single node has to send \u003e 6Mb of data for a 1Mb of actual data. Though 5x overhead is large, it still prevent us from sending/receiving 500 Mb of data in return for a failure probability of 1 proposal out of 1 million. From the protocol perspective, we can address EC failures in multiple ways: a: Since the root committee only forwards the coded chunks only when they have successfully rebuilt the block. This means the root committee can be contacted to download additional coded chunks to decode the block. b: We allow this failure and let the leader be replaced but since there is proof that the failure is due to the reason that a decoder failed to reconstruct the block, therefore, the leader cannot be punished (if we chose to employ punishment in PoS). \n\n### How much data should a given block be. Are there limits on this and if so, what are they and what do they depend on? - Dmitriy\n\u003e `Moh:` This question can be answered during simulations and experiments over links of different bandwidths and latencies. We will test the protocol performances with different block sizes. As we know increasing the block size results in increased throughput as well as latency. What is the most appropriate block size can be determined once we observe the tradeoff between throughput vs latency.\n\n## Signature Propagation\n\n### Who sends the signatures up from a given committee? Do that have any leadered power within the committee? - Tanguy\n\u003e `Moh:` Each node in a committee multicasts its vote to all members of the parent committee. Since the size of the vote is small the bit complexity will be low. Introducing a leader within each committee will create a single point of failure within each committee. This is why we avoid maintaining a leader within each committee\n\n## Network Scale\n\n### What is our expected minimum number of nodes within the network? - Dmitriy\n\u003e `Moh:` For a small number of nodes we can have just a single committee. But I am not sure how many nodes will join our network \n\n## Byzantine Behavior\n\n### Can we also consider a flavor that adds attestation/attribution to misbehaving nodes? That will come at a price but there might be a set of use cases which would like to have lower performance with strong attribution. Not saying that it must be part of the initial design, but can be think-through/added later. - Marcin\n\u003e `Moh:` Attestation to misbehaving nodes is part of this protocol. For example, if a node sends an incorrect vote or if a leader proposes an invalid transaction, then this proof will be shared with the network to punish the misbehaving nodes (Though currently this is not part of pseudocode). But it is not possible to reliably prove the attestation of not participation.\n\n\u003e `Marcin:` Great, and definitely, we cannot attest that a node was not participating - I was not suggesting that;). But we can also think about extending the attestation for lazy-participants case (if it’s not already part of the protocol).\n\n\u003e `Moh:` OK, thanks for the clarification 😁 . Of course we can have this feature to forward the proof of participation of successor committees. In the first version of Carnot we had this feature as a sliding window. One could choose the size of the window (in terms of tree levels) for which a node should forward the proof of participation. In the most recent version the size of sliding window is 0. 
And it is 1 for the root committee. It means root committee members have to forward the proof of participation of their child committee members. Since I was able to prove protocol correctness without forwarding the proofs so we avoid it. But it can be part of the protocol without any significant changes in the protocol\n\n\u003e If the proof scheme is efficient ( as the results you presented) in practice and the cost of creating and verifying proofs is not significant then actually adding proofs can be good. But not required.\n\n### Also, how do you reward online validators / punish offline ones if you can't prove at the block level that someone attested or not? - Tanguy\n\u003e `Moh:` This is very tricky and so far no one has done it right (to my knowledge). Current reward mechanism for attestation, favours fast nodes.This means if malicious nodes in the network are fast, they can increase their stake in the network faster than the honest nodes and eventually take control of the network. Or in the case of Ethereum a Byzantine leader can include signature of malicious nodes more frequently in the proof of attestation, hence malicious nodes will be rewarded more frequently. Also let me add that I don't have definite answer to your question currently, but I think by revising the protocol assumptions, incentive mechanism and using a game theoretical approach this problem can be resolved.\n\n\u003e An honest node should wait for a specific number of children votes (to make sure everyone is voting on the same proposal) before voting but does not need to provide any cryptographic proof. Though we build a threshold signature from root committee members and it’s children but not from the whole tree. As long as enough number of nodes follow the the protocol we should be fine. I am working on protocol proofs. Also I think bugs should be discovered during development and testing phase. Changing protocol to detect potential bug might not be a good practice.\n\n### doesn't having randomly distributed malicious nodes (say there is a 20%) increase the odds that over a third of a committee end up being from those malicious ones? It seems intuitive: since a 20% at the global scale is always \u003c1/3, but when randomly distributed there is always non-zero chance they end up in a single group, thus affecting liveness more and more the closer we get to that global 1/3. Consequently, if I'm understanding the algorithm correctly, it would have worse liveness guarantees that classical pBFT, say with a randomly-selected commitee from the total set. - Alvaro\n\n\u003e `Alexander:` We assume that fraction of malicious nodes is $1/4$ and given we chooses comm. sizes, which will depend on total number of nodes, appropriately this guarantees that with high probability we are below $1/3$ in each committee.\n\n\u003e `Alvaro:` ok, but then both the global guarantee is below that current \"standard\" of 1/3 of malicious nodes and even then we are talking about non-zero probabilities that a comm has the power to slow down consensus via requiring reformation of comms (is this right?)\n\n\u003e `Alexander:` This is the price we pay to improve scalability. Also these probabilities of failure can be very low.\n\n### What happens in Carnot when one committee is taken over by \u003e1/3 intra-comm byzantine nodes? - Alvaro\n\n\u003e `Moh:` When there is a failure the overlay is recalculated. 
By gradually increasing the fault tolerance by a small value, the probability of failure of a committee slightly increases, but upon recalculating the correct overlay, inactive nodes that caused the failure of the previous overlay (when no committee has more than 1/3 Byzantine nodes) will be slashed.\n\n\n\n## Synchronicity\n\n### How to guarantee synchronicity. In particular how to avoid that in a big network different nodes see a proposal with 2c+1 votes but different votes and thus different random seed - Giacomo\n\n\u003e `Moh:` The assumption is that there exists some known finite time bound Δ and a special event called GST (Global Stabilization Time) such that:\n\n\u003e The adversary must cause the GST event to eventually happen after some unknown finite time. Any message sent at time x must be delivered by time $\\Delta + \\text{max}(x,GST)$. In the partial synchrony model, the system behaves asynchronously till GST and synchronously after GST.\n\n\u003e Moreover, votes travel one level at a time from tree leaves to the tree root. We only need the proof of votes of the root+child committees to conclude with a high probability that the majority of nodes have voted.\n\n### That's a timeout? How does this work exactly without timing assumptions? Trying to find this in the document - Alvaro\n\n\u003e `Moh:` Each committee only verifies the votes of its child committees. Once it has verified 2/3rds of the votes of its child members, it then sends its vote to its parent. In this way each layer of the tree verifies (attests to) the votes of the layer below. Thus, a node does not have to collect and verify 2/3rds of all of the thousands of votes (as done in other responsive BFTs) but only those from its child nodes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["Carnot","consensus"]},"/private/roadmap/consensus/candidates/carnot/overview":{"title":"Carnot Overview","content":"\nCarnot (formerly LogosBFT) is a Byzantine Fault Tolerant (BFT) [consensus](roadmap/consensus/index.md) candidate for the Nomos Network that utilizes Fountain Codes and a committee tree structure to optimize message propagation in the presence of a large number of nodes, while maintaining high throughput and fast finality. More specifically, these are the research contributions of Carnot. To our knowledge, Carnot is the first consensus protocol that achieves all of these properties together:\n\n1. Scalability: Carnot is highly scalable, scaling to thousands of nodes.\n2. Responsiveness: The ability of a protocol to operate at the speed of the network rather than at a fixed maximum delay (block delay, slot time, etc.) is called responsiveness. Responsiveness reduces latency and helps Carnot achieve fast finality. Moreover, it improves Carnot's resilience against adversaries that can slow down network traffic. \n3. Fork avoidance: Carnot avoids the formation of forks in the happy path. Fork formation has the following adverse consequences, which Carnot avoids:\n 1. Wastage of resources on orphan blocks and reduced throughput with increased latency for transactions in orphan blocks\n 2. 
Increased attack vector on PoS as attackers can employ a strategy to force the network to accept their fork resulting in increased stake for adversaries.\n\n- [FAQ](FAQ.md): Here is a page that tracks various questions people have around Carnot.\n\n## Work Streams\n\n### Current State of the Art\nAn ongoing survey of the current state of the art around Consensus Mechanisms and their peripheral dependencies is being conducted by Tuanir, and can be found in the following WIP Overleaf document: \n- [WIP Consensus SoK](https://www.overleaf.com/project/633acc1acaa6ffe456d1ab1f)\n\n### Committee Tree Overlay\nThe basis of Carnot is dependent upon establishing an committee overlay tree structure for message distribution. \n\nAn overview video can be found in the following link: \n- [Carnot Overview by Moh during Offsite](https://drive.google.com/file/d/17L0JPgC0L1ejbjga7_6ZitBfHUe3VO11/view?usp=sharing)\n\nThe details of this are being worked on by Moh and Alexander and can be found in the following overleaf documents: \n- [Moh's draft](https://www.overleaf.com/project/6341fb4a3cf4f20f158afad3)\n- [Alexander's notes on the statistical properties of committees](https://www.overleaf.com/project/630c7e20e56998385e7d8416)\n- [Alexander's python code for computing committee sizes](https://github.com/AMozeika/committees)\n\nA simulation notebook is being worked on by Corey to investigate the properties of various tree overlay structures and estimate their practical performance:\n- [Corey's Overlay Jupyter Notebook](https://github.com/logos-co/scratch/tree/main/corpetty/committee_sim)\n\n#### Failure Recovery\nThere exists a timeout that triggers an overlay reconfiguration. Currently work is being done to calculate the probabilities of another failure based on a given percentage of byzantine nodes within the network. \n- [Recovery Failure Probabilities]() - LINK TO WORK HERE\n\n### Random Beacon\nA random beacon is required to choose a leader and establish a seed for defining the overlay tree. Marcin is working on the various avenues. His previous presentations can be found in the following presentation slides (in chronological order):\n- [Intro to Multiparty Random Beacons](https://cloud.logos.co/index.php/s/b39EmQrZRt5rrfL)\n- [Circles of Trust](https://cloud.logos.co/index.php/s/NXJZX8X8pHg6akw)\n- [Compact Certificates of Knowledge](https://cloud.logos.co/index.php/s/oSJ4ykR4A55QHkG)\n\n### Erasure Coding (LT Codes / Fountain Codes / Raptor Codes)\nIn order to reduce message complexity during propagation, we are investigating the use of Luby Transform (LT) codes, more specifically [Fountain Codes](https://en.wikipedia.org/wiki/Fountain_code), to break up the block to be propagated to validators and recombined by local peers within a committee. \n- [LT Code implementation in Rust](https://github.com/chrido/fountain) - unclear about legal status of LT or Raptor Codes, it is currently under investigation.\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","Carnot"]},"/private/roadmap/consensus/candidates/claro":{"title":"Claro: Consensus Candidate","content":"\n\n\n**Claro** (formerly Glacier) is a consensus candidate for the Logos network that aims to be an improvement to the Avalanche family of consensus protocols. \n\n\n### Implementations\nThe protocol has been implemented in multiple languages to facilitate learning and testing. 
The individual code repositories can be found in the following links:\n- Rust (reference)\n- Python\n- Common Lisp\n\n### Simulations/Experiments/Analysis\nIn order to test the performance of the protocol, and how it stacks up against the Avalanche family of protocols, we have performed a multitude of simulations and experiments under various assumptions. \n- [Alvaro's initial Python implementations and simulation code](https://github.com/status-im/consensus-models)\n\n### Specification\nCurrently the Claro consensus protocol is being drafted into a specification so that other implementations can be created. Its draft resides under [Vac](https://vac.dev) and can be tracked [here](https://github.com/vacp2p/rfc/pull/512/).\n\n### Additional Information\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","claro"]},"/private/roadmap/consensus/development/overview":{"title":"Development Work","content":"","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/development/prototypes":{"title":"Consensus Prototypes","content":"\nConsensus Prototypes is a collection of Rust implementations of the [Consensus Candidates](tags/candidates).\n\n## Tiny Node\n\n\n## Required Roles\n- Lead Developer (filled)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/overview":{"title":"Consensus Work","content":"\nConsensus is the foundation of the network. It is how a group of peer-to-peer nodes agrees on information in a distributed way, particularly in the presence of Byzantine actors. \n\n## Consensus Roadmap\n### Consensus Candidates\n- [Carnot](private/roadmap/consensus/candidates/carnot/overview.md) - Carnot is the current leading consensus candidate for the Nomos network. It is designed to maximize the efficiency of message dissemination while supporting hundreds of thousands of full validators. It gets its name from the thermodynamic concept of the [Carnot Cycle](https://en.wikipedia.org/wiki/Carnot_cycle), which defines the maximal efficiency of work from heat through iterative gas expansions and contractions. \n- [Claro](claro.md) - Claro is a variant of the Avalanche Snow family of protocols, designed to be more efficient in the decision-making process by leveraging the concept of \"confidence\" across peer responses. \n\n\n### Theoretical Analysis\n- [snow-family](snow-family.md)\n\n### Development\n- [prototypes](prototypes.md)\n\n## Open Roles\n- [distributed-systems-researcher](distributed-systems-researcher.md)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus"]},"/private/roadmap/consensus/theory/overview":{"title":"Consensus Theory Work","content":"\nThis track of work is dedicated to creating theoretical models of distributed consensus in order to evaluate them from a mathematical standpoint. 
\n\n## Navigation\n- [Snow Family Analysis](snow-family.md)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory"]},"/private/roadmap/consensus/theory/snow-family":{"title":"Theoretical Analysis of the Snow Family of Consensus Protocols","content":"\nIn order to evaluate the properties of the Avalanche family of consensus protocols more rigorously than the original [whitepapers](), we work to create an analytical framework to explore and better understand the theoretical boundaries of the underlying protocols, and under what parameterization they will break against a set of adversarial strategies.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory","snow"]},"/private/roadmap/networking/carnot-waku-specification":{"title":"A Specification proposal for using Waku for Carnot Consensus","content":"\n##### Definition Reference \n- $k$ - size of a given committee\n- $n_C$ - number of committees in the overlay, or nodes in the tree\n- $d$ - depth of the overlay tree\n- $n_d$ - number of committees at a given depth of the tree\n\n## Motivation\nIn #Carnot, an overlay is created to facilitate message distribution and voting aggregation. This document will focus on the differentiated channels of communication for message distribution. Whether or not voting aggregation and the subsequent traversal back up the tree can utilize the same channels will be investigated later. \n\nThe overlay is described as a binary tree of committees, where an individual in each committee propagates messages to an assigned node in their two child committees of the tree, until the leaf nodes have received enough information to reconstitute the proposal block. \n\nThis communication protocol will naturally form \"pools of information streams\" that people will need to listen to in order to do their assigned work:\n- inner committee communication\n- parent-child chain communication\n- initial leader distribution\n\n### **inner committee communication** \nAll members of a given committee will need to gossip with each other in order to reform the initial proposal block.\n- This results in $n_C$ communication pools, each of size $k$.\n\n### **parent-child chain communication** \nThe formation of the committee and the lifecycle of a chunk of erasure-coded data forms a number of \"parent-child\" chains. \n- If we completely minimize the communication between committees, then this results in $k$ communication pools, each of size $n_C$.\n- It is not clear if individual levels of the tree need to \"execute\" the message to their children, or if the root committee can broadcast to everyone within its assigned parent-chain communication pool at the same time.\n- It is also unclear if individual levels of the tree need to send independent messages to each of their children, or if a unified communication pool can be leveraged at the tree-level. This results in $d$ communication pools, each of size $n_d$. \n\n### **initial leader distribution**\nFor each proposal, a leader needs to distribute the erasure-coded proposal block to the root committee.\n- This results in a single communication pool of size $k(+1)$.\n- The $(+1)$ above is the leader, who could also be a part of the root committee. The leader changes with each block proposal, and we seek to minimize the time between leader selection and a round start. Thus, this results in a requirement that each node in the network must maintain a connection to every node in the root committee. 
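\n\nAs a rough back-of-the-envelope companion to the pool counts above, here is a hypothetical Python sketch (not existing project code; the function name, example parameters, and the full-binary-tree depth formula are assumptions for illustration) that tallies the three kinds of communication pools for a given node count and committee size:\n\n```python\nimport math\n\ndef overlay_pools(num_nodes: int, committee_size: int):\n    # n_C: number of committees needed to hold all nodes\n    n_committees = math.ceil(num_nodes / committee_size)\n    # d: depth of a full binary tree containing n_C committees (illustrative assumption)\n    depth = math.ceil(math.log2(n_committees + 1))\n    pools = {\n        # one pool per committee, each of roughly size k\n        \"inner committee\": (n_committees, committee_size),\n        # one parent-child chain per committee member, each spanning ~n_C committees\n        \"parent-child chain\": (committee_size, n_committees),\n        # a single leader-to-root-committee pool of size k (+1 for the leader)\n        \"initial leader distribution\": (1, committee_size + 1),\n    }\n    return pools, depth\n\npools, depth = overlay_pools(num_nodes=10_000, committee_size=500)\nprint(\"tree depth:\", depth)\nfor name, (count, size) in pools.items():\n    print(name, \":\", count, \"pool(s) of size ~\", size)\n```\n\nHow these pools might map onto pubsub topics or `contentTopics` is the subject of the proposal below. 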
\n\n## Proposal\nThis part of the document will attempt to propose using various aspects of Waku, to facilitate both the setup of the above-mentioned communication pools as well as encryption schemes to add a layer of privacy (and hopefully efficiency) to message distribution. \n\nWe seek to minimize the availability of data such that an individual has only the information to do his job and nothing more.\n\nWe also seek to minimize the amount of messages being passed such that eventually everyone can reconstruct the initial proposal block\n\n`???` for Waku-Relay, 6 connections is optimal, resulting in latency ???\n\n`???` Is it better to have multiple pubsub topics with a simple encryption scheme or a single one with a complex encryption scheme\n\nAs there seems to be a lot of dynamic change from one proposal to the next, I would expect [`noise`](https://vac.dev/wakuv2-noise) to be a quality candidate to facilitate the creation of secure ephemeral keys in the to-be proposed encryption scheme. \n\nIt is also of interest how [`contentTopics`](https://rfc.vac.dev/spec/23/) can be leveraged to optimize the communication pools. \n\n## Whiteboard diagram and notes\n![Whiteboard Diagram](images/Overlay-Communications-Brainstorm.png)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku","carnot","networking","consensus"]},"/private/roadmap/networking/overview":{"title":"P2P Networking Overview","content":"\nThis page summarizes the work around the P2P networking layer of the Nomos project.\n\n## Waku\n[Waku](https://waku.org) is an privacy-preserving, ephemeral, peer-to-peer (P2P) messaging suite of protocols which is developed under [Vac](https://vac.dev) and maintained/productionized by the [Logos Collective](https://logos.co). \n\nIt is hopeful that Nomos can leverage the work of the Waku project to provide the P2P networking layer and peripheral services associated with passing messages around the network. Below is a list of the associated work to investigate the use of Waku within the Nomos Project. \n\n### Scalability and Fault-Tolerance Studies\nCurrently, the amount of research and analysis of the scalability of Waku is not sufficient to give enough confidence that Waku can serve as the networking layer for the Nomos project. Thusly, it is our effort to push this analysis forward by investigating the various boundaries of scale for Waku. Below is a list of endeavors in this direction which we hope serves the broader community: \n- [Status' use of Waku study w/ Kurtosis](status-waku-kurtosis.md)\n- [Using Waku for Carnot Overlay](carnot-waku-specification.md)\n\n### Rust implementations\nWe have created and maintain a stop-gap solution to using Waku with the Rust programming language, which is wrapping the [go-waku](https://github.com/status-im/go-waku) library in Rust and publishing it as a crate. This library allows us to do tests with our [Tiny Node](roadmap/development/prototypes.md#Tiny-Node) implementation more quickly while also providing other projects in the ecosystem to leverage Waku within their Rust codebases more quickly. \n\nIt is desired that we implement a more robust and efficient Rust library for Waku, but this is a significant amount of work. 
\n\nLinks:\n- [Rust bindings to go-waku repo](https://github.com/waku-org/waku-rust-bindings)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","overview"]},"/private/roadmap/networking/status-network-agents":{"title":"Status Network Agents Breakdown","content":"\nThis page creates a model to describe the impact of the various clients within the Status ecosystem by describing their individual contribution to the messages within the Waku network they leverage. \n\nThis model will serve to create a realistic network topology while also informing the appropriate _dimensions of scale_ that are relevant to explore in the [Status Waku scalability study](status-waku-kurtosis.md).\n\nStatus has three main clients that users interface with (in increasing \"network weight\" order):\n- Status Web\n- Status Mobile\n- Status Desktop\n\nEach of these clients has differing (on average) resources available to it, and thus provides and consumes different Waku protocols and services within the Status network. Here we will detail their associated messaging impact on the network using the following model:\n\n```\nAgent\n - feature\n - protocol\n - contentTopic, messageType, payloadSize, frequency\n```\n\nBy describing all `Agents` and their associated feature list, we should be able to do the following:\n\n- Estimate how much impact per unit time an individual `Agent` has on the Status network\n- Create a realistic network topology and usage within a simulation framework (_e.g._ Kurtosis)\n- Facilitate a Status Specification of `Agents`\n- Set an example for future agent-based modeling and simulation work for the Waku protocol suite \n\n## Status Web\n\n## Status Mobile\n\n## Status Desktop\nStatus Desktop serves as the backbone for the Status Network, as the software runs on hardware that has more available resources, typically has more stable and robust network connections, and generally has a drastically lower churn (or none at all). This results in it running the most Waku protocols for longer periods of time, resulting in the heaviest usage of the Waku network w.r.t. messaging. \n\nHere is the model breakdown of its usage:\n```\nStatus Desktop\n - Prekey bundle broadcast\n - Account sync\n - Historical message delivery\n - Waku-Relay (answering message queries)\n - Message propagation\n - Waku-Relay\n - Waku-Lightpush (receiving)\n```","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["status","waku","scalability"]},"/private/roadmap/networking/status-waku-kurtosis":{"title":"Status' use of Waku - A Scalability Study","content":"\n[Status](https://status.im) is the largest consumer of the Waku protocol, leveraging it for their entire networking stack. Their upcoming release of Status Desktop and the associated Communities product will heavily push the limits of what Waku can do. As mentioned in the [Networking Overview](private/roadmap/networking/overview.md) page, rigorous scalability studies of Waku (v2) have yet to be conducted. \n\nWhile these studies most immediately benefit the Status product suite, it behooves the Nomos Project to assist, as the lessons learned immediately inform us of the limits of what the Waku protocol suite can handle, and how that fits within our [Technical Requirements](private/requirements/overview.md).\n\nThis work has been kicked off as a partnership with the [Kurtosis](https://kurtosis.com) distributed systems development platform. 
It is our hope that the experience and acumen gained during this partnership and study will serve us in the future with respect to Nomos development, and more broadly, all projects under the Logos Collective. \n\nAs such, here is an overview of the various resources towards this endeavor:\n- [Status Network Agent Breakdown](status-network-agents.md) - A document that describes the archetypal agents that participate in the Status Network and their associated Waku consumption.\n- [Wakurtosis repo](https://github.com/logos-co/wakurtosis) - A Kurtosis module to run scalability studies\n- [Waku Topology Test repo](https://github.com/logos-co/Waku-topology-test) - A Python script that facilitates setting up a reasonable network topology for the purpose of injecting the network configuration into the above Kurtosis repo\n- [Initial Vac forum post introducing this work](https://forum.vac.dev/t/waku-v2-scalability-studies/142)\n- [Waku Github Issue detailing work progression](https://github.com/waku-org/pm/issues/2)\n - this is also a place to maintain communications of progress\n- [Initial Waku V2 theoretical scalability study](https://vac.dev/waku-v1-v2-bandwidth-comparison)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","scalability","waku"]},"/private/roadmap/virtual-machines/overview":{"title":"overview","content":"\n## Motivation\nLogos seeks to use a privacy-first virtual machine for transaction execution. We believe this can only be achieved through zero-knowledge cryptography. The majority of current work in the field focuses more on the aggregation and subsequent verification of transactions. This leads us to explore the research and development of a privacy-first virtual machine. \n\nLINK TO APPROPRIATE NETWORK REQUIREMENTS HERE\n\n#### Educational Resources\n- Primer on Zero Knowledge Virtual Machines - [link](https://youtu.be/GRFPGJW0hic)\n\n### Implementations:\n- TinyRAM - link\n- CairoVM\n- zkSync\n- Hermes\n- [MIDEN](https://polygon.technology/solutions/polygon-miden/) (Polygon)\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t- negatives:\n\t\t- not optimized for, or compatible with, the EVM, where most tooling currently exists\n\n## General Building Blocks of a ZK-VM\n- CPU\n\t- modeled with \"execution trays\"\n- RAM\n\t- overhead to look out for\n\t\t- range checks\n\t\t- bitwise operations\n\t\t- hashing\n- Specialized circuits\n- Recursion\n\n## Approaches\n- zk-WASM\n- zk-EVM\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t\t- https://youtu.be/2MXHgUGEsHs - Why use the RISC Zero zkVM?\n\t- negatives:\n\t\t- not optimized for, or compatible with, the EVM, where most tooling currently exists\n\n## General workstreams\n- bytecode compiler\n- zero-knowledge circuit design\n- opcode architecture (???)\n- engineering\n- required proof system\n- control flow\n\t- MAST (as used in MIDEN)\n\n## Roles\n- [ZK Research Engineer](zero-knowledge-research-engineer.md)\n- Senior Rust Developer\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["virtual machines","zero knowledge"]},"/private/roles/distributed-systems-researcher":{"title":"Open Role: Distributed Systems Researcher","content":"\n\n## About Status\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. 
\n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. \n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception. Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n \n\n## Who are we?\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the Status Network. We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality\n\n## The job\n\n**Responsibilities:**\n- This role is dedicated to pure research\n- Primarily, ensuring that solutions are sound and diving deeper into their formal definition.\n- Additionally, he/she would be regularly going through papers, bringing new ideas and staying up-to-date.\n- Designing, specifying and verifying distributed systems by leveraging formal and experimental techniques.\n- Conducting theoretical and practical analysis of the performance of distributed systems.\n- Designing and analysing incentive systems.\n- Collaborating with both internal and external customers and the teams responsible for the actual implementation.\n- Researching new techniques for designing, analysing and implementing dependable distributed systems.\n- Publishing and presenting research results both internally and externally.\n\n \n**Ideally you will have:**\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]\n- Strong background in Computer Science and Math, or a related area.\n- Academic background (The ability to analyze, digest and improve the State of the Art in our fields of interest. 
Specifically, familiarity with formal proofs and/or the scientific method.)\n- Distributed Systems with a focus on Blockchain\n- Analysis of algorithms\n- Familiarity with Python and/or complex systems modeling software\n- Deep knowledge of algorithms (much more academic, such as have dealt with papers, moving from research to pragmatic implementation)\n- Experience in analysing the correctness and security of distributed systems.\n- Familiarity with the application of formal method techniques. \n- Comfortable with “reverse engineering” code in a number of languages including Java, Go, Rust, etc. Even if no experience in these languages, the ability to read and \"reverse engineer\" code of other projects is important.\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Capable of deep and creative thinking.\n- Passionate about blockchain technology in general.\n- Able to manage the uncertainties and ambiguities associated with working in a remote-first, distributed, decentralised environment.\n- A strong alignment to our principles: https://status.im/about/#our-principles\n\n\n**Bonus points:**\n- Experience working remotely. \n- Experience working for an open source organization. \n- TLA+/PRISM would be desirable.\n- PhD in Computer Science, Mathematics, or a related area. \n- Experience Multi-Party Computation and Zero-Knowledge Proofs\n- Track record of scientific publications.\n- Previous experience in remote or globally distributed teams.\n\n## Hiring process\n\nThe hiring process for this role will be:\n- Interview with our People Ops team\n- Interview with Alvaro (Team Lead)\n- Interview with Corey (Chief Security Officer)\n- Interview with Jarrad (Cofounder) or Daniel \n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n \n\n## Compensation\n\nWe are happy to pay salaries in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: https://people-ops.status.im/tag/perks/\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role"]},"/private/roles/rust-developer":{"title":"Rust Developer","content":"\n# Role: Rust Developer\nat Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is an organization building the tools and infrastructure for the advancement of a secure, private, and open web3. We have been completely distributed since inception. Our team is currently 100+ core contributors strong and welcomes a growing number of community members from all walks of life, scattered all around the globe. We care deeply about open source, and our organizational structure has a minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**About Logos**\n\nA group of Status Contributors is also involved in a new community lead project, called Logos, and this particular role will enable you to also focus on this project. Logos is a grassroots movement to provide trust-minimized, corruption-resistant governing services and social institutions to underserved citizens. 
\n\nLogos’ infrastructure will provide a base for the provisioning of the next-generation of governing services and social institutions - paving the way to economic opportunities for those who need them most, whilst respecting basic human rights through the network’s design.You can read more about Logos here: [in this small handbook](https://github.com/acid-info/public-assets/blob/master/logos-manual.pdf) for mindful readers like yourself.\n\n**Who are we?**\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the [Status Network](https://statusnetwork.com/). We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality.\n\n**Responsibilities:**\n\n- Develop and maintenance of internal rust libraries\n- 1st month: comfortable with dev framework, simulation app. Improve python lib?\n- 2th-3th month: Start dev of prototype node services\n\n**Ideally you will have:**\n\n- “Extensive” Rust experience (Async programming is a must) \n Ideally they have some GitHub projects to show\n- Experience with Python\n- Strong competency in developing and maintaining complex libraries or applications\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles) \n \n\n**Bonus points if**\n\n-  E.g. Comfortable working remotely and asynchronously\n-  Experience working for an open source organization.  \n-  Peer-to-peer or networking experience\n\n_[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]_\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)\n\n**Hiring Process** \n\nThe hiring process for this role will be:\n\n1. Interview with Maya (People Ops team)\n2. Interview with Corey (Logos Program Owner)\n3. Interview with Daniel (Engineering Lead)\n4. Interview with Jarrad (Cofounder)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role","engineering","rust"]},"/private/roles/zero-knowledge-research-engineer":{"title":"Zero Knowledge Research Engineer","content":"at Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. \n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 
\n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception.  Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**Who are we**\n\n[Vac](http://vac.dev/) **builds** [public good](https://en.wikipedia.org/wiki/Public_good) protocols for the decentralized web.\n\nWe do applied research based on which we build protocols, libraries and publications. Custodians of protocols that reflect [a set of principles](http://vac.dev/principles) - liberty, privacy, etc.\n\nYou can see a sample of some of our work here: [Vac, Waku v2 and Ethereum Messaging](https://vac.dev/waku-v2-ethereum-messaging), [Privacy-preserving p2p economic spam protection in Waku v2](https://vac.dev/rln-relay), [Waku v2 RFC](https://rfc.vac.dev/spec/10/). Our attitude towards ZK: [Vac \u003c3 ZK](https://forum.vac.dev/t/vac-3-zk/97).\n\n**The role**\n\nThis role will be part of a new team that will make a provable and private WASM engine that runs everywhere. As a research engineer, you will be responsible for researching, designing, analyzing and implementing circuits that allow for proving private computation of execution in WASM. This includes having a deep understanding of relevant ZK proof systems and tooling (zk-SNARK, Circom, Plonk/Halo 2, zk-STARK, etc), as well as different architectures (zk-EVM Community Effort, Polygon Hermez and similar) and their trade-offs. You will collaborate with the Vac Research team, and work with requirements from our new Logos program. As one of the first hires of a greenfield project, you are expected to take on significant responsibility,  while collaborating with other research engineers, including compiler engineers and senior Rust engineers. 
\n \n\n**Key responsibilities** \n\n- Research, analyze and design proof systems and architectures for private computation\n- Be familiar and adapt to research needs zero-knowledge circuits written in Rust Design and implement zero-knowledge circuits in Rust\n- Write specifications and communicate research findings through write-ups\n- Break down complex problems, and know what can and what can’t be dealt with later\n- Perform security analysis, measure performance of and debug circuits\n\n**You ideally will have**\n\n- Very strong academic or engineering background (PhD-level or equivalent in industry); relevant research experience\n- Experience with low level/strongly typed languages (C/C++/Go/Rust or Java/C#)\n- Experience with Open Source software\n- Deep understanding of Zero-Knowledge proof systems (zk-SNARK, circom, Plonk/Halo2, zk-STARK), elliptic curve cryptography, and circuit design\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles)\n\n**Bonus points if** \n\n- Experience in provable and/or private computation (zkEVM, other ZK VM)\n- Rust Zero Knowledge tooling\n- Experience with WebAssemblyWASM\n\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter].\n\n**Hiring process** \n\nThe hiring process for this role will be:\n\n1. Interview with Angel/Maya from our Talent team\n2. Interview with team member from the Vac team\n3. Pair programming task with the Vac team\n4. Interview with Oskar, the Vac team lead\n5. Interview with Jacek, Program lead\n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["engineering","role","zero knowledge"]},"/roadmap/acid/milestones-overview":{"title":"Comms Milestones Overview","content":"\n- [Comms Roadmap](https://www.notion.so/eb0629444f0a431b85f79c569e1ca91b?v=76acbc1631d4479cbcac04eb08138c19)\n- [Comms Projects](https://www.notion.so/b9a44ea08d2a4d2aaa9e51c19b476451?v=f4f6184e49854fe98d61ade0bf02200d)\n- [Comms planner deadlines](https://www.notion.so/2585646d01b24b5fbc79150e1aa92347?v=feae1d82810849169b06a12c849d8088)\n- ","lastmodified":"2023-08-17T20:36:02.487556006Z","tags":["milestones"]},"/roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","content":"\n## Leads roundup - acid\n\n**Al / Comms**\n\n- Status app relaunch comms campaign plan in the works. Approx. 
date for launch 31.08.\n- Logos comms + growth plan post launch is next up TBD.\n- Will be waiting for specs for data room, raise etc.\n- Hires: split the role for content studio to be more realistic in getting top level talent.\n\n**Matt / Copy**\n\n- Initiative updating old documentation like CC guide to reflect broader scope of BUs\n- Brand guidelines/ modes of presentation are in process\n- Wikipedia entry on network states and virtual states is live on \n\n**Eddy / Digital Comms**\n\n- Logos Discord will be completed by EOD.\n- Codex Discord will be done tomorrow.\n - LPE rollout plan, currently working on it, will be ready EOW\n- Podcast rollout needs some\n- Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\n**Amir / Studio**\n\n- Started execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\n- Hires: still looking for 3 positions with main focus on developer side. \n\n**Jonny / Podcast**\n\n- Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\n- First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.\n\n**Louisa / Events**\n\n- Global strategy paper for wider comms plan.\n- Template for processes and executions when preparing events.\n- Decision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\n - Seoul Q4 hackathon is already in the works. Needs bounty planning.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","content":"\n## **Top level priorities:**\n\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\n\n## **Movement Building:**\n\n- Logos collective comms plan skeleton ready - will be applied for all BUs as next step\n- Goal is to have plan + overview to set realistic KPIs and expectations\n- Discord Server update on various views\n- Status relaunch comms plan is ready for input from John et al.\n- Reach out to BUs for needs and deliverables\n\n## **TD Studio**\n\nFull focus on LPE:\n- On track, target of end of august\n- review of options, more diverse landscape of content\n- Episodes page proposals\n- Players in progress\n- refactoring from prev code base\n- structure of content ready in GDrive\n\n## **Copy**\n\n- Content around LPE\n- Content for podcast launches\n- Status launch - content requirements to receive\n- Organization of doc sites review\n- TBD what type of content and how the generation workflows will look like\n\n## **Podcast**\n\n- Good state in editing and producing the shows\n- First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included. \n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n- 3 recorded for HIO, motion graphics in progress\n- First E2E podcast ready in 2 weeks for LPE\n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n\n## **DC Studio**\n\n- Brand guidelines for HiO are ready and set. 
Thanks `Shmeda`!\n- Logos State branding assets are being developed\n- Presentation templates update\n\n## **Events**\n\n- Network State event probably in Istanbul in November re: Devconnect will confirm shortly.\n- Program elements and speakers are top priority\n- Hackathon in Seoul in Q1 2024 - late Febuary probably\n- Jarrad will be speaking at HCPP and EthRome\n- Global event strategy written and in review\n- Lou presented social media and event KPIs on Paris event\n\n## **CRM \u0026 Marketing tool**\n\n- Get feedback from stakeholders and users\n- PM implementation to be planned (+- 3 month time TBD) with working group\n- LPE KPI: Collecting email addresses of relevant people\n- Careful on how we manage and use data, important for BizDev\n- Careful on which segments of the project to manage using the CRM as it can be very off brand","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","content":"\n## Milestones\n- [Zenhub Tracker](https://app.zenhub.com/workspaces/engineering-62cee4c7a335690012f826fa/roadmap)\n- [Miro Tracker](https://miro.com/app/board/uXjVOtZ40xI=/?share_link_id=33106977104)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones-overview"]},"/roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","content":"\n## Codex update 07/12/2023 to 07/21/2023\n\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc...\n\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing \u0026 infra related work going on.\n\nWe're also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\n\n### DevOps/Infrastructure:\n\n- Adopted nim-codex Docker builds for Dist Tests.\n- Ordered Dedicated node on Hetzner.\n- Configured Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Created Geth and Prometheus Docker images for Dist-Tests.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Set up Ingress Controller in Dist-Tests cluster.\n\n### Testing:\n\n- Set up deployer to gather metrics.\n- Debugging and identifying potential deadlock in the Codex client.\n- Added metrics, built image, and ran tests.\n- Updated dist-test log for Kibana compatibility.\n- Ran dist-tests on a new master image.\n- Debugging continuous tests.\n\n### Development:\n\n- Worked on codex-dht nimble updates and fixing key format issue.\n- Updated CI and split Windows CI tests to run on two CI machines.\n- Continued updating dependencies in codex-dht.\n- Fixed decoding large manifests ([PR #479](https://github.com/codex-storage/nim-codex/pull/497)).\n- Explored the existing implementation of NAT Traversal techniques in `nim-libp2p`.\n\n### Research\n\n- Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n - https://eprint.iacr.org/2021/1500.pdf\n - https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html\n - https://eprint.iacr.org/2021/1544.pdf\n- Onboarding Balázs as our ZK researcher/engineer\n- Continued 
research in DAS related topics\n - Running simulation on newly setup infrastructure\n- Devised a new direction to reduce metadata overhead and enable remote verification https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n- Looked into NAT Traversal ([issue #166](https://github.com/codex-storage/nim-codex/issues/166)).\n\n### Cross-functional (Combination of DevOps/Testing/Development):\n\n- Fixed discovery related issues.\n- Planned Codex Demo update for the Logos event and prepared environment for the demo.\n- Described requirements for Dist Tests logs format.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.\n- Hetzner Dedicated server was configured.\n- Set up Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper in Dist-Tests cluster.\n- Setup Grafana in Dist-Tests cluster.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Setup Ingress Controller in Dist-Tests cluster.\n\n---\n\n#### Conversations\n1. zk_id _—_ 07/24/2023 11:59 AM\n\u003e \n\u003e We've explored VDI for rollups ourselves in the last week, curious to know your thoughts\n2. dryajov _—_ 07/25/2023 1:28 PM\n\u003e \n\u003e It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it's definitely worth digging into. But I'm not sure what exactly you're interested in, in the context of rollups...\n1. zk_id _—_ 07/25/2023 3:28 PM\n \n The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n \n2. dryajov _—_ 07/25/2023 8:31 PM\n \n \u003e I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal.\n \n Yeah, great question. What follows is strictly IMO, as I haven't seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n \n - (A)VID - **dispersing** and storing data in a verifiable manner\n - Sampling - verifying already **dispersed** data\n \n tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. 
More details are here - [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\") ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network from following the chain?\n \n Dankrad Feist\n \n [Data availability checks](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html)\n \n Primer on data availability checks\n \n3. _[_8:31 PM_]_\n \n ## Dealing with dishonest majorities\n \n This is easy if all the data is downloaded by all nodes all the time, but we're trying to avoid just that. But lets assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can't, because proving data (un)availability isn't a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\") So, if there isn't much that can be done by detecting that a block isn't available, what good is it for? Well nodes can still avoid following the unavailable chain and thus be tricked by a dishonest majority. However, simply attesting that data has been publishing is not enough to prevent a dishonest majority from attacking the network. (edited)\n \n4. 
dryajov _—_ 07/25/2023 9:06 PM\n \n To complement, the relevant quote from [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\"), is:\n \n \u003e Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (\"fisherman\") has the ability to \"raise the alarm\" about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n \n The relevant quote from from [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\"), is:\n \n \u003e There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n \n Both articles are a bit old, but the intuitions still hold.\n \n\nJuly 26, 2023\n\n6. zk_id _—_ 07/26/2023 10:42 AM\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n7. _[_10:45 AM_]_\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n8. zk_id\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n ### dryajov _—_ 07/26/2023 4:42 PM\n \n Great! Glad to help anytime \n \n9. 
zk_id\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n dryajov _—_ 07/26/2023 4:43 PM\n \n Yes, I'd argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n \n10. _[_4:46 PM_]_\n \n Btw, there is probably more we can share/compare notes on in this problem space, we're looking at similar things, perhaps from a slightly different perspective in Codex's case, but the work done on DAS with the EF directly is probably very relevant for you as well \n \n\nJuly 27, 2023\n\n12. zk_id _—_ 07/27/2023 3:05 AM\n \n I would love to. Do you have those notes somewhere?\n \n13. zk_id _—_ 07/27/2023 4:01 AM\n \n all the links you have, anything, would be useful\n \n14. zk_id\n \n I would love to. Do you have those notes somewhere?\n \n dryajov _—_ 07/27/2023 4:50 PM\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n\nJuly 28, 2023\n\n16. zk_id _—_ 07/28/2023 5:47 AM\n \n Would love to see anything that is possible\n \n17. _[_5:47 AM_]_\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n18. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n dryajov _—_ 07/28/2023 4:07 PM\n \n Yes, we're also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n \n19. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n bkomuves _—_ 07/28/2023 4:44 PM\n \n my current view (it's changing pretty often :) is that there is tension between:\n \n - commitment cost\n - proof cost\n - and verification cost\n \n the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n\nJuly 29, 2023\n\n21. bkomuves\n \n my current view (it's changing pretty often :) is that there is tension between: \n \n - commitment cost\n - proof cost\n - and verification cost\n \n  the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n zk_id _—_ 07/29/2023 4:23 AM\n \n I agree. That's also my understanding (although surely much more superficial).\n \n22. 
_[_4:24 AM_]_\n \n There is also the dimension of computation vs size cost\n \n23. _[_4:25 AM_]_\n \n ie the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale confortably to 10k nodes.\n \n24. _[_4:29 AM_]_\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n\nAugust 1, 2023\n\n26. dryajov\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n Leobago _—_ 08/01/2023 1:13 PM\n \n Note much public write-ups yet. You can find some content here:\n \n - [https://blog.codex.storage/data-availability-sampling/](https://blog.codex.storage/data-availability-sampling/ \"https://blog.codex.storage/data-availability-sampling/\")\n \n - [https://github.com/codex-storage/das-research](https://github.com/codex-storage/das-research \"https://github.com/codex-storage/das-research\")\n \n \n We also have a few Jupiter notebooks but they are not public yet. As soon as that content is out we can let you know ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n Codex Storage Blog\n \n [Data Availability Sampling](https://blog.codex.storage/data-availability-sampling/)\n \n The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until\n \n GitHub\n \n [GitHub - codex-storage/das-research: This repository hosts all the ...](https://github.com/codex-storage/das-research)\n \n This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora...\n \n [](https://opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research)\n \n ![GitHub - codex-storage/das-research: This repository hosts all the ...](https://images-ext-2.discordapp.net/external/DxXI-YBkzTrPfx_p6_kVpJzvVe6Ix6DrNxgrCbcsjxo/https/opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research?width=400\u0026height=200)\n \n27. zk_id\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. 
The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n dryajov _—_ 08/01/2023 1:55 PM\n \n This might interest you as well - [https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a \"https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a\")\n \n Medium\n \n [Combining KZG and erasure coding](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a)\n \n The Hitchhiker’s Guide to Subspace  — Episode II\n \n [](https://miro.medium.com/v2/resize:fit:1200/0*KGb5QHFQEd0cvPeP.png)\n \n ![Combining KZG and erasure coding](https://images-ext-2.discordapp.net/external/LkoJxMEskKGMwVs8XTPVQEEu0senjEQf42taOjAYu0k/https/miro.medium.com/v2/resize%3Afit%3A1200/0%2AKGb5QHFQEd0cvPeP.png?width=400\u0026height=200)\n \n28. _[_1:56 PM_]_\n \n This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recoment reading the first article of the series which it also links to\n \n29. zk_id _—_ 08/01/2023 3:04 PM\n \n Thanks @dryajov @Leobago ! Much appreciated!\n \n30. _[_3:05 PM_]_\n \n Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I'm tackling starting today...\n \n31. zk_id _—_ 08/01/2023 6:34 PM\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n32. zk_id\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n Leobago _—_ 08/01/2023 6:36 PM\n \n Yes, checkout the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n33. _[_6:37 PM_]_\n \n You might find also some bugs here and there on that branch ![😅](https://discord.com/assets/b45af785b0e648fe2fb7e318a6b8010c.svg)\n \n34. 
zk_id _—_ 08/01/2023 7:44 PM\n \n Thanks!","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","content":"\n# Codex update Aug 1st\n\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n - Work break down and review for Ben and Tomasz (epic coming up)\n - This is required to integrate the proving system\n\n### Milestone: Block discovery and retrieval\n\n- Some initial work break down and milestones here - https://docs.google.com/document/d/1hnYWLvFDgqIYN8Vf9Nf5MZw04L2Lxc9VxaCXmp9Jb3Y/edit\n - Initial analysis of block discovery - https://rpubs.com/giuliano_mega/1067876\n - Initial block discovery simulator - https://gmega.shinyapps.io/block-discovery-sim/\n\n### Milestone: Distributed Client Testing\n\n- Lots of work around log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - This is a first try of running against an L2\n - Mostly done, waiting on related fixes to land before merge - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Reservations and slot management\n\n- Lots of work around slot reservation and queuing https://github.com/codex-storage/nim-codex/pull/455\n\n## Remote auditing\n\n### Milestone: Implement Poseidon2\n\n- First pass at an implementation by Balazs\n - private repo, but can give access if anyone is interested\n\n### Milestone: Refine proving system\n\n- Lots of thinking around storage proofs and proving systems\n - private repo, but can give access if anyone is interested\n\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator.\n- Implemented logical error-rates and delays to interactions between DHT clients.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","content":"\n\n# Codex update August 11\n\n---\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504\n- Work on persisting/serializing Merkle Tree is underway, PR upcoming\n\n### Milestone: Block discovery and retrieval\n\n- Continued analysis of block discovery and retrieval - https://hackmd.io/_KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n - Reviewing papers on peer sampling and related topics\n - [Wormhole Peer Sampling paper](http://publicatio.bibl.u-szeged.hu/3895/1/p2p13.pdf)\n - [Smoothcache](https://dl.acm.org/doi/10.1145/2713168.2713182)\n- Starting work on simulations based on the above work\n\n### Milestone: Distributed Client Testing\n\n- Continuing work on log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n - More related issues/PRs:\n - https://github.com/codex-storage/infra-codex/pull/20\n- Testing and debugging Codex in continuous testing environment\n - Debugging continuous tests [cs-codex-dist-tests/pull/44](https://github.com/codex-storage/cs-codex-dist-tests/pull/44)\n - pod labeling [cs-codex-dist-tests/issues/39](https://github.com/codex-storage/cs-codex-dist-tests/issues/39)\n\n---\n## Infra\n\n### Milestone: Kubernetes Configuration and Management\n- Move Dist-Tests cluster to OVH and 
define naming conventions\n- Configure Ingress Controller for Kibana/Grafana\n- **Create documentation for Kubernetes management**\n- **Configure Dist/Continuous-Tests Pods logs shipping**\n\n### Milestone: Continuous Testing and Labeling\n- Watch the Continuous tests demo\n- Implement and configure Dist-Tests labeling\n- Set up logs shipping based on labels\n- Improve Docker workflows and add 'latest' tag\n\n### Milestone: CI/CD and Synchronization\n- Set up synchronization by codex-storage\n- Configure Codex Storage and Demo CI/CD environments\n\n---\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - Done but merge is blocked by a few issues - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Marketplace Sales\n\n- Lots of cleanup and refactoring\n - Finished refactoring state machine PR [link](https://github.com/codex-storage/nim-codex/pull/469)\n - Added support for loading node's slots during Sale's module start [link](https://github.com/codex-storage/nim-codex/pull/510)\n\n---\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator - https://github.com/cortze/py-dht.\n\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/innovation_lab/milestones-overview":{"title":"Innovation Lab Milestones Overview","content":"\niLab Milestones can be found on the [Notion Page](https://www.notion.so/Logos-Innovation-Lab-dcff7b7a984b4f9e946f540c16434dc9?pvs=4)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","content":"\n**Logos Lab** 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\n**Milestone**: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\n\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\n\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\n\n**Next milestone**: group chat support\n\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nLink to Payggy design files:\nhttps://scene.zeplin.io/project/64ae9e965652632169060c7d\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/UtVHf2EU\n\n--- \n\n#### Conversation\n\n1. petty _—_ 07/15/2023 5:49 AM\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n2. petty\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n3. 
attila🍀 _—_ 07/15/2023 6:18 AM\n \n at the moment most of the code is in the `waku-objects-playground` repo later we may split it to several repos here is the link: [https://github.com/logos-innovation-lab/waku-objects-playground](https://github.com/logos-innovation-lab/waku-objects-playground \"https://github.com/logos-innovation-lab/waku-objects-playground\")","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","content":"\n**Logos Lab** 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. \n\nStill, a lot of progress were made and the team released the first version of a color system in the form of an npm package, which lets the users to choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\n\n**Milestone**: group chat support\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nGrayscale design:\nhttps://grayscale.design/\n\nLuminance package on npm:\nhttps://www.npmjs.com/package/@waku-objects/luminance\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/ZMU4yyWG\n\n--- \n\n### Conversation\n\n1. fryorcraken _—_ Yesterday at 10:58 PM\n \n \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n \n While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n\nAugust 3, 2023\n\n2. fryorcraken\n \n \u003e \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. 
We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n3. attila🍀 _—_ Today at 4:21 AM\n \n This is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the `status-js-api` project but that still uses whisper and the repo recommends to use `js-waku` instead ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg) [https://github.com/status-im/status-js-api](https://github.com/status-im/status-js-api \"https://github.com/status-im/status-js-api\") Also I also found the `56/STATUS-COMMUNITIES` spec: [https://rfc.vac.dev/spec/56/](https://rfc.vac.dev/spec/56/ \"https://rfc.vac.dev/spec/56/\") It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n \n4. fryorcraken _—_ Today at 5:32 AM\n \n The repo is status-im/status-web\n \n5. _[_5:33 AM_]_\n \n Spec is [https://rfc.vac.dev/spec/55/](https://rfc.vac.dev/spec/55/ \"https://rfc.vac.dev/spec/55/\")\n \n6. fryorcraken\n \n The repo is status-im/status-web\n \n7. attila🍀 _—_ Today at 6:05 AM\n \n As constructive feedback I can tell you that it is not trivial to find it and use it in other projects It is presented as a React component without documentation and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app It seems to be using this package: [https://www.npmjs.com/package/@status-im/js](https://www.npmjs.com/package/@status-im/js \"https://www.npmjs.com/package/@status-im/js\") Which also does not have documentation I assume that package is built from this: [https://github.com/status-im/status-web/tree/main/packages/status-js](https://github.com/status-im/status-web/tree/main/packages/status-js \"https://github.com/status-im/status-web/tree/main/packages/status-js\") This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what are the requirements and high level architecture (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort in the documentation and the presentation as well and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-11":{"title":"2023-08-17 \u003cTEAM\u003e weekly","content":"\n\n# **Logos Lab** 11th of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nWe merged the group chat but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from product perspective as well. Spent the bigger part of the week with fixing these. We also registered a new domain, wakuplay.im where the latest version is deployed. 
It uses the Gnosis chain for transactions and currently the xDai and Gno tokens are supported, but it is easy to add other ERC-20 tokens now.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementaton has started.\n\n**Next milestone**: Basic Waku Objects website\n\nWork started toward having a structure for a website and the content is shaping up nicely. The implementation has been started on it as well.\n\nDeployed version of the main branch:\nhttps://www.wakuplay.im/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["\u003cTEAM\u003e-updates"]},"/roadmap/nomos/milestones-overview":{"title":"Nomos Milestones Overview","content":"\n[Milestones Overview Notion Page](https://www.notion.so/ec57b205d4b443aeb43ee74ecc91c701?v=e782d519939f449c974e53fa3ab6978c)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","content":"\n**Research**\n\n- Milestone 1: Understanding Data Availability (DA) Problem\n - High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\n - Explored the necessity and key challenges associated with DA.\n - In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\n - **Blocker:** The experimental tests for our specific EC scheme are pending, which is blocking progress to make final decision on KZG + commitments for our architecture.\n- Milestone 2: Privacy for Proof of Stake (PoS)\n - Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\n - Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.\n - Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\n**Development**\n\n- Milestone 1: Mixnet and Networking\n - Initiated integration of libp2p to be used as the full node's backend, planning to complete in the next phase.\n - Begun planning for the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\n- Milestone 2: Simulation Application\n - Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links:\n - Carnot Simulator pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/carnot_simulation_psuedocode.py)\n - Test pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/test_carnot_simulation.py)\n - Description of the simulation (https://www.notion.so/Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n - Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. 
The corresponding pull requests can be accessed here:\n - Simulation network fix (https://github.com/logos-co/nomos-node/pull/262)\n - Vote tally fix (https://github.com/logos-co/nomos-node/pull/268)\n - Increased run duration of integration tests (https://github.com/logos-co/nomos-node/pull/263)\n - Warding improvements (https://github.com/logos-co/nomos-node/pull/269)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","content":"\n**Nomos 31st July**\n\n[Network implementation and Mixnet]:\n\nResearch\n- Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\n- Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\n- Fixes were made on the Overlay interface.\n- Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\n- Link to libp2p PRs: https://github.com/logos-co/nomos-node/pull/278, https://github.com/logos-co/nomos-node/pull/279, https://github.com/logos-co/nomos-node/pull/280, https://github.com/logos-co/nomos-node/pull/281\n- Started working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\n\nResearch\n- Discussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\n- Reviews on the PPoS proposal were done.\n- A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\n- Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\n\nResearch\n- Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\n\n**Development**\n\n- Improved simulation application to meet test scale requirements (https://github.com/logos-co/nomos-node/pull/274).\n- Created a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\n\nResearch\n- Conducted an analysis of stored data \"degradation\" problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\n- Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","content":"\nNomos weekly report\n================\n\n### Network implementation and Mixnet:\n#### Research\n- Researched the Nym mixnet architecture in depth in order to design our prototype architecture.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1661386628)\n- Discussions about how to manage the mixnet topology.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1665101243)\n#### Development\n- Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n (Link: https://github.com/logos-co/nomos-node/pull/288)\n- Added support for libp2p in tests.\n (Link: https://github.com/logos-co/nomos-node/pull/287)\n- Added support for libp2p in nomos node.\n (Link: https://github.com/logos-co/nomos-node/pull/285)\n\n### Private PoS:\n#### Research\n- Worked 
on PPoS design and addressed potential metadata leakage due to staking and rewarding.\n- Focus on potential bribery attacks and privacy reasoning, but not much progress yet.\n- Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\n### Carnot:\n#### Research\n- Addressed two solutions for the bribery attack. Proposals pending.\n- Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\n- Modeled data decimation using a specific set of parameters and derived equations related to it.\n- Proposed solutions to address bribery attacks without compromising the protocol's scalability.\n\n### Data Availability Sampling (VID):\n#### Research\n- Analyzed data decimation in data availability problem.\n (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)\n- DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\n- Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn't fit our architecture.\n\n#### Testing and Node development:\n- Fixes and enhancements made to nomos-node.\n (Link: https://github.com/logos-co/nomos-node/pull/282)\n (Link: https://github.com/logos-co/nomos-node/pull/289)\n (Link: https://github.com/logos-co/nomos-node/pull/293)\n (Link: https://github.com/logos-co/nomos-node/pull/295)\n- Ran simulations with 10K nodes.\n- Updated integration tests in CI to use waku or libp2p network.\n (Link: https://github.com/logos-co/nomos-node/pull/290)\n- Fix for the node throughput during simulations.\n (Link: https://github.com/logos-co/nomos-node/pull/295)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","content":"\n\n# **Nomos weekly report 14th August**\n---\n\n## **Network Privacy and Mixnet**\n\n### Research\n- Mixnet architecture discussions. 
Potential agreement on architecture not very different from PoC\n- Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n### Development\n- Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]\n- Implementation of mixnode: a core module for implementing a mixnode binary\n- Implementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\n### **Private PoS**\n- No progress this week.\n\n---\n## **Data Availability**\n### Research\n- Continued analysis of node decay in data availability problem\n- Improved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]\n\n### Development\n- Library survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\n- RS \u0026 KZG benchmarking for our use case https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\n- Study documentation on Danksharding and set of questions for Leonardo [https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n---\n## **Testing, CI and Simulation App**\n\n### Development\n- Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]\n- Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]\n- CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]\n- Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]\n- Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]\n- Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/vac/milestones-overview":{"title":"Vac Milestones Overview","content":"\n[Overview Notion Page](https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632?pvs=4) - Information copied here for now\n\n## Info\n### Structure of milestone names:\n\n`vac:\u003cunit\u003e:\u003ctag\u003e:\u003cfor_project\u003e:\u003ctitle\u003e_\u003ccounter\u003e`\n- `vac` indicates it is a vac milestone\n- `unit` indicates the vac unit `p2p`, `dst`, `tke`, `acz`, `sc`, `zkvm`, `dr`, `rfc`\n- `tag` tags a specific area / project / epic within the respective vac unit, e.g. 
`nimlibp2p`, or `zerokit`\n- `for_project` indicates which Logos project the milestone is mainly for `nomos`, `waku`, `codex`, `nimbus`, `status`; or `vac` (meaning it is internal / helping all projects as a base layer)\n- `title` the title of the milestone\n- `counter` an optional counter; `01` is implicit; marked with a `02` onward indicates extensions of previous milestones\n\n## Vac Unit Roadmaps\n- [Roadmap: P2P](https://www.notion.so/Roadmap-P2P-a409c34cb95b4b81af03f60cbf32f9c1?pvs=21)\n- [Roadmap: Token Economics](https://www.notion.so/Roadmap-Token-Economics-e91f1cb58ebc4b1eb46b074220f535d0?pvs=21)\n- [Roadmap: Distributed Systems Testing (DST))](https://www.notion.so/Roadmap-Distributed-Systems-Testing-DST-4ef0d8694d3e40d6a0cfe706855c43e6?pvs=21)\n- [Roadmap: Applied Cryptography and ZK (ACZ)](https://www.notion.so/Roadmap-Applied-Cryptography-and-ZK-ACZ-00b3ba101fae4a099a2d7af2144ca66c?pvs=21)\n- [Roadmap: Smart Contracts (SC)](https://www.notion.so/Roadmap-Smart-Contracts-SC-e60e0103cad543d5832144d5dd4611a0?pvs=21)\n- [Roadmap: zkVM](https://www.notion.so/Roadmap-zkVM-59cb588bd2404e659633e008101310b5?pvs=21)\n- [Roadmap: Deep Research (DR)](https://www.notion.so/Roadmap-Deep-Research-DR-561a864c890549c3861bf52ab979d7ab?pvs=21)\n- [Roadmap: RFC Process](https://www.notion.so/Roadmap-RFC-Process-f8516d19132b41a0beb29c24510ebc09?pvs=21)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","content":"- *vc::Deep Research*\n - refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Prepared Paris talks\n - Implemented perf protocol to compare the performances with other libp2ps https://github.com/status-im/nim-libp2p/pull/925\n- *vsu::Tokenomics*\n - Fixing bugs on the SNT staking contract;\n - Definition of the first formal verification tests for the SNT staking contract;\n - Slides for the Paris off-site\n- *vsu::Distributed Systems Testing*\n - Replicated message rate issue (still on it)\n - First mockup of offline data\n - Nomos consensus test working\n- *vip::zkVM*\n - hiring\n - onboarding new researcher\n - presentation on ECC during Logos Research Call (incl. 
preparation)\n - more research on nova, considering additional options\n - Identified 3 research questions to be taken into consideration for the ZKVM and the publication\n - Researched Poseidon implementation for Nova, Nova-Scotia, Circom\n- *vip::RLNP2P*\n - finished rln contract for waku product - https://github.com/waku-org/rln-contract\n - fixed homebrew issue that prevented zerokit from building - https://github.com/vacp2p/zerokit/commit/8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\n - rln-relay: verify proofs based upon bandwidth usage - https://github.com/waku-org/nwaku/commit/3fe4522a7e9e48a3196c10973975d924269d872a\n - RLN contract audit cont' https://hackmd.io/@blockdev/B195lgIth\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","content":"\n**Last week**\n- *vc*\n - Vac day in Paris (13th)\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Paris offsite Paris (all CCs)\n- *vsu::Tokenomics*\n - Bugs found and solved in the SNT staking contract\n - attend events in Paris\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - QoS on all four infras\n - Continue work on theoretical gossipsub analysis (varying regular graph sizes)\n - Peer extraction using WLS (almost finished)\n - Discv5 testing\n - Wakurtosis CI improvements\n - Provide offline data\n- *vip::zkVM*\n - onboarding new researcher\n - Prepared and presented ZKVM work during VAC offsite\n - Deep research on Nova vs Stark in terms of performance and related open questions\n - researching Sangria\n - Worked on NEscience document (https://www.notion.so/Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\n - zerokit:\n - worked on PR for arc-circom\n- *vip::RLNP2P*\n - offsite Paris\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - EthCC \u0026 Logos event Paris (all CCs)\n- *vsu::Tokenomics*\n - Attend EthCC and side events in Paris\n - Integrate staking contracts with radCAD model\n - Work on a new approach for Codex collateral problem\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\n - Restructure the Analysis script and start modelling Status control messages\n - Split Wakurtosis analysis module into separate repository (delayed)\n - Deliver simulation results (incl fixing discv5 error with new Kurtosis version)\n - Second iteration Nomos CI\n- *vip::zkVM*\n - Continue researching on Nova open questions and Sangria\n - Draft the benchmark document (by the end of the week)\n - research hardware for benchmarks\n - research Halo2 cont'\n - zerokit:\n - merge a PR for deployment of arc-circom\n - deal with arc-circom master fail\n- *vip::RLNP2P*\n - offsite paris\n- *blockers*\n - *vip::zkVM:zerokit*: ark-circom deployment to crates io; contact to ark-circom team","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","content":"\nNOTE: This is a first experimental version moving towards the new 
reporting structure:\n\n**Last week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - related work section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - basic torpush encode/decode ( https://github.com/vacp2p/nim-libp2p-experimental/pull/1 )\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - (focus on Tor-push PoC)\n- *vsu::P2P*\n - admin/misc\n - EthCC (all CCs)\n- *vsu::Tokenomics*\n - admin/misc\n - Attended EthCC and side events in Paris\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - Kicked off a new approach for Codex collateral problem\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - Integrated SNT staking contracts with Python\n - milestone (50%, 2023/07/14) SNT litepaper\n - (delayed)\n - milestone(30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - milestone (95%, 2023/07/31) Wakurtosis Waku Report\n - Add timout to injection async call in WLS to avoid further issues (PR #139 https://github.com/vacp2p/wakurtosis/pull/139)\n - Plotting \u0026 analyse 100 msg/s off line Prometehus data\n - milestone (90%, 2023/07/31) Nomos CI testing\n - fixed errors in Nomos consensus simulation\n - milestone (30%, ...) gossipsub model analysis\n - add config options to script, allowing to load configs that can be directly compared to Wakurtosis results\n - added support for small world networks\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - (write ups will be available here: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Solved the open questions on Nova adn completed the document (will update the page)\n - Reviewed Nescience and working on a document\n - Reviewed partly the write up on FHE\n - writeup for Nova and Sangria; research on super nova\n - reading a new paper revisiting Nova (https://eprint.iacr.org/2023/969)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - zkvm\n - Researching Nova to understand the folding technique for ZKVM adaptation\n - zerokit\n - Rostyslav became circom-compat maintainer\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n - completed\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - admin/misc\n - EthCC + offsite\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - working on contributions section, based on https://hackmd.io/X1DoBHtYTtuGqYg0qK4zJw\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - working on establishing a connection via nim-libp2p tor-transport\n - setting up goerli test node (cont')\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - continue working on paper\n- *vsu::P2P*\n - milestone (...)\n - Implement ChokeMessage for GossipSub\n - Continue \"limited flood publishing\" (https://github.com/status-im/nim-libp2p/pull/911)\n- *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - Catch up with EthCC talks that we couldn't attend (schedule conflicts)\n - milestone (50%, 2023/07/14) SNT litepaper\n - Start building the SNT agent-based simulation\n- *vsu::Distributed Systems Testing*\n - milestone (100%, 2023/07/31) Wakurtosis Waku Report\n - 
finalize simulations\n - finalize report\n - milestone (100%, 2023/07/31) Nomos CI testing\n - finalize milestone\n - milestone (30%, ...) gossipsub model analysis\n - Incorporate Status control messages\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - Refine the Nescience WIP and FHE documents\n - research HyperNova\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n - zkvm\n - zerokit\n - circom: reach an agreement with other maintainers on master branch situation\n- *vip::RLNP2P*\n - maintenance\n - investigate why docker builds of nwaku are failing [zerokit dependency related]\n - documentation on how to use rln for projects interested (https://discord.com/channels/864066763682218004/1131734908474236968/1131735766163267695)(https://ci.infra.status.im/job/nim-waku/job/manual/45/console)\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - revert rln bandwidth reduction based on offsite discussion, move to different validator\n- *blockers*","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","content":"\n- *vc::Deep Research*\n - milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n - proposed solution section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - establishing torswitch and testing code\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - addressed feedback on current version of paper\n- *vsu::P2P*\n - nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH's EIP-4844\n - Merged IDontWant (https://github.com/status-im/nim-libp2p/pull/934) \u0026 Limit flood publishing (https://github.com/status-im/nim-libp2p/pull/911) 𝕏\n - This wraps up the \"mandatory\" optimizations for 4844. 
We will continue working on stagger sending and other optimizations\n - nim-libp2p: (70%, 2023/07/31) WebRTC transport\n- *vsu::Tokenomics*\n - admin/misc\n - 2 CCs off for the week\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - milestone (50%, 2023/07/14) SNT litepaper\n - milestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - admin/misc\n - Analysis module extracted from wakurtosis repo (https://github.com/vacp2p/wakurtosis/pull/142, https://github.com/vacp2p/DST-Analysis)\n - hiring\n - milestone (99%, 2023/07/31) Wakurtosis Waku Report\n - Re-run simulations\n - merge Discv5 PR (https://github.com/vacp2p/wakurtosis/pull/129).\n - finalize Wakurtosis Tech Report v2\n - milestone (100%, 2023/07/31) Nomos CI testing\n - delivered first version of Nomos CI integration (https://github.com/vacp2p/wakurtosis/pull/141)\n - milestone (30%, 2023/08/31 gossipsub model: Status control messages\n - Waku model is updated to model topics/content-topics\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - achievment :: nova questions answered (see document in Project: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Nescience WIP done (to be delivered next week, priority)\n - FHE review (lower prio)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Working on discoveries about other benchmarks done on plonky2, starky, and halo2\n - zkvm\n - zerokit\n - fixed ark-circom master \n - achievment :: publish ark-circom https://crates.io/crates/ark-circom\n - achievment :: publish zerokit_utils https://crates.io/crates/zerokit_utils\n - achievment :: publish rln https://crates.io/crates/rln (𝕏 jointly with RLNP2P)\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n - Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - https://github.com/vacp2p/rln-contract and http://github.com/waku-org/waku-rln-contract\n - Deployed to sepolia\n - Fixed rln enabled docker image building in nwaku - https://github.com/waku-org/nwaku/pull/1853\n - zerokit:\n - achievement :: zerokit v0.3.0 release done - https://github.com/vacp2p/zerokit/releases/tag/v0.3.0 (𝕏 jointly with zkVM)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","content":"\n\nMore info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week):\nhttps://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n**Vac week 32** August 7th\n- *vsu::P2P*\n - `vac:p2p:nim-libp2p:vac:maintenance`\n - Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n - `vac:p2p:nim-chronos:vac:maintenance`\n - Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n - Investigate flaky test using REUSE_PORT\n- *vsu::Tokenomics*\n - (...)\n- *vsu::Distributed Systems Testing*\n - `vac:dst:wakurtosis:waku:techreport`\n - delivered: Wakurtosis Tech Report v2 (https://docs.google.com/document/d/1U3bzlbk_Z3ZxN9tPAnORfYdPRWyskMuShXbdxCj4xOM/edit?usp=sharing)\n - `vac:dst:wakurtosis:vac:rlog`\n - working on research log post on Waku Wakurtosis simulations\n - 
`vac:dst:gsub-model:status:control-messages`\n - delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n - `vac:dst:gsub-model:vac:refactoring`\n - Refactoring and bug fixes\n - introduced and tested 2 new analytical models\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - delivered: extracted into separate module, independent of wls message\n - `vac:dst:wakurtosis:nomos:ci-integration_02`\n - planning\n - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n - planning; check usage of new codex simulator tool (https://github.com/codex-storage/cs-codex-dist-tests)\n- *vip::zkVM*\n - `vac:zkvm::vac:research-existing-proof-systems`\n - 90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me\n - 50% FHE review - needs to be refined and summarized\n - finished SuperNova writeup ( https://www.notion.so/SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\n - researched starky\n - 80% Halo2 notes ( https://www.notion.so/halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n - `vac:zkvm::vac:proof-system-benchmarks`\n - More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\n - Viewed some circuits on Nova and Poseidon\n - Read through Halo2 code (and Poseidon code) from Axiom\n- *vip::RLNP2P*\n - `vac:acz:rlnp2p:waku:production-readiness`\n - Waku rln contract registry - https://github.com/waku-org/waku-rln-contract/pull/3\n - mark duplicated messages as spam - https://github.com/waku-org/nwaku/pull/1867\n - use waku-org/waku-rln-contract as a submodule in nwaku - https://github.com/waku-org/nwaku/pull/1884\n - `vac:acz:zerokit:vac:maintenance`\n - Fixed atomic_operation ffi edge case error - https://github.com/vacp2p/zerokit/pull/195\n - docs cleanup - https://github.com/vacp2p/zerokit/pull/196\n - fixed version tags - https://github.com/vacp2p/zerokit/pull/194\n - released zerokit v0.3.1 - https://github.com/vacp2p/zerokit/pull/198\n - marked all functions as virtual in rln-contract for inheritors - https://github.com/vacp2p/rln-contract/commit/a092b934a6293203abbd4b9e3412db23ff59877e\n - make nwaku use zerokit v0.3.1 - https://github.com/waku-org/nwaku/pull/1886\n - rlnp2p implementers draft - https://hackmd.io/@rymnc/rln-impl-w-waku\n - `vac:acz:zerokit:vac:zerokit-v0.4`\n - zerokit v0.4.0 release planning - https://github.com/vacp2p/zerokit/issues/197\n- *vc::Deep Research*\n - `vac:dr:valpriv:vac:tor-push-poc`\n - redesigned the torpush integration in nimbus https://github.com/vacp2p/nimbus-eth2-experimental/pull/2\n - `vac:dr:valpriv:vac:tor-push-relwork`\n - Addressed further comments in paper, improved intro, added source level variation approach\n - `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - cont' work on the document","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-14":{"title":"2023-08-17 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n# Vac week 33 August 14th\n\n---\n## *vsu::P2P*\n### `vac:p2p:nim-libp2p:vac:maintenance`\n- Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n- delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925\n- delivered: Test-plans for the perf protocol https://github.com/lchenut/test-plans/tree/perf-nim\n- Bandwidth estimate as a parameter (waiting for final review) 
https://github.com/status-im/nim-libp2p/pull/941\n### `vac:p2p:nim-chronos:vac:maintenance`\n- delivered: Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n- delivered: fixed flaky test using REUSE_PORT https://github.com/status-im/nim-chronos/pull/438\n\n---\n## *vsu::Tokenomics*\n - admin/misc:\n - (5 CC days off)\n### `vac:tke::codex:economic-analysis`\n- Filecoin economic structure and Codex token requirements\n### `vac:tke::status:SNT-staking`\n- tests with the contracts\n### `vac:tke::nomos:economic-analysis`\n- resume discussions with Nomos team\n\n---\n## *vsu::Distributed Systems Testing (DST)*\n### `vac:dst:wakurtosis:waku:techreport`\n- 1st Draft of Wakurtosis Research Blog (https://github.com/vacp2p/vac.dev/pull/123)\n- Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n### `vac:dst:shadow:vac:basic-shadow-simulation`\n- Basic Shadow Simulation of a gossipsub node (Setup, 5nodes)\n### `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n- Try and plan on how to refactor/generalize testing tool from Codex.\n- Learn more about Kubernetes\n### `vac:dst:wakurtosis:nomos:ci-integration_02`\n- Enable subnetworks\n- Plan how to use wakurtosis with fixed version\n### `vac:dst:eng:vac:bundle-simulation-data`\n- Run requested simulations\n\n---\n## *vsu:Smart Contracts (SC)*\n### `vac:sc::vac:secureum-upskilling`\n - Learned about \n - cold vs warm storage reads and their gas implications\n - UTXO vs account models\n - `DELEGATECALL` vs `CALLCODE` opcodes, `CREATE` vs `CREATE2` opcodes; Yul Assembly\n - Unstructured proxies https://eips.ethereum.org/EIPS/eip-1967\n - C3 Linearization https://forum.openzeppelin.com/t/solidity-diamond-inheritance/2694) (Diamond inheritance and resolution)\n - Uniswap deep dive\n - Finished Secureum slot 2 and 3\n### `vac:sc::vac:maintainance/misc`\n - Introduced Vac's own `foundry-template` for smart contract projects\n - Goal is to have the same project structure across projects\n - Github repository: https://github.com/vacp2p/foundry-template\n\n---\n## *vsu:Applied Cryptography \u0026 ZK (ACZ)*\n - `vac:acz:zerokit:vac:maintenance`\n - PR reviews https://github.com/vacp2p/zerokit/pull/200, https://github.com/vacp2p/zerokit/pull/201\n\n---\n## *vip::zkVM*\n### `vac:zkvm::vac:research-existing-proof-systems`\n- delivered Nescience WIP doc\n- delivered FHE review\n- delivered Nova vs Sangria done - Some discussions during the meeting\n- started HyperNova writeup\n- started writing a trimmed version of FHE writeup\n- researched CCS (for HyperNova)\n- Research Protogalaxy https://eprint.iacr.org/2023/1106 and Protostar https://eprint.iacr.org/2023/620.\n### `vac:zkvm::vac:proof-system-benchmarks`\n- More work on benchmarks is ongoing\n- Putting down a document that explains the differences\n\n---\n## *vc::Deep Research*\n### `vac:dr:valpriv:vac:tor-push-poc`\n- revised the code for PR\n### `vac:dr:valpriv:vac:tor-push-relwork`\n- added section for mixnet, non-Tor/non-onion routing-based anonymity network\n### `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n- Used shadow simulator to run first GossipSub simulation\n### `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n- Finalized 1st draft of the GossipSub scaling article","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","content":"\n```mermaid\n%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': 
'#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n```\n\n## Completion Deliverable\nTBD\n\n## Epics\n- [Github Issue Tracker](https://github.com/waku-org/pm/issues/12)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","content":"\n- 90% - [Waku Network support for 10k users](roadmap/waku/milestone-waku-10-users.md)\n- 80% - Waku Network support for 1MM users\n- 65% - Restricted-run (light node) protocols are production ready\n- 60% - Peer management strategy for relay and light nodes are defined and implemented\n- 10% - Quality processes are implemented for `nwaku` and `go-waku`\n- 80% - Define and track network and community metrics for continuous monitoring improvement\n- 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n- 15% - Dogfooding of RLN by platforms has started\n- 06% - First protocol to incentivize operators has been defined","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","content":"\nDisclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.\n\n---\n\n## Docs\n\n### **Milestone**: Foundation for Waku docs (done)\n\n#### _achieved_:\n- overall layout\n- concept docs\n- community/showcase pages\n\n### **Milestone**: Foundation for node operator docs (done)\n#### _achieved_:\n- nodes overview page\n- guide for running nwaku (binaries, source, docker)\n- peer discovery config guide\n- reference docs for config methods and options\n\n### **Milestone**: Foundation for js-waku docs\n#### _achieved_:\n- js-waku overview + installation guide\n- lightpush + filter guide\n- store guide\n- @waku/create-app guide\n\n#### _next:_\n- improve @waku/react guide\n\n#### _blocker:_\n- polyfills issue with [js-waku](https://github.com/waku-org/js-waku/issues/1415)\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n### **Milestone**: Running nwaku in the cloud\n### **Milestone**: Add Waku guide to learnweb3.io\n### **Milestone**: Encryption docs for js-waku\n### **Milestone**: Advanced node operator doc (postgres, WSS, monitoring, common config)\n### **Milestone**: Foundation for go-waku docs\n### **Milestone**: Foundation for rust-waku-bindings docs\n### **Milestone**: Waku architecture docs\n### **Milestone**: Waku detailed roadmap and milestones\n### **Milestone**: Explain RLN\n\n---\n\n## Eco Dev (WIP)\n\n### **Milestone**: EthCC Logos side event organisation (done)\n### **Milestone**: Community Growth\n#### _achieved_: \n- Wrote several bounties, improved template; setup onboarding flow in Discord.\n\n#### _next_: \n- Review template, publish on GitHub\n\n### **Milestone**: Business Development (continuous)\n#### _achieved_: \n- Discussions with various leads in EthCC\n#### _next_: \n- Booking calls with said leads\n\n### **Milestone**: Setting Up Content Strategy for Waku\n\n#### _achieved_: \n- Discussions with Comms Hubs re Waku Blog \n- expressed needs and intent around future blog post and needed amplification\n- discuss strategies to onboard/involve non-dev and potential CTAs.\n\n### **Milestone**: Web3Conf (dates)\n### **Milestone**: DeCompute 
conf\n\n---\n\n## Research (WIP)\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- rendezvous hashing \n- weighting function \n- updated LIGHTPUSH to handle autosharding\n\n#### _next:_\n- update FILTER \u0026 STORE for autosharding\n\n---\n\n## nwaku (WIP)\n\n### **Milestone**: Postgres integration.\n#### _achieved:_\n- nwaku can store messages in a Postgres database\n- we started to perform stress tests\n\n#### _next:_\n- Analyse why some messages are not stored during stress tests; this happened with both SQLite and Postgres, so the issue may not be directly related to _store_.\n\n### **Milestone**: nwaku as a library (C-bindings)\n#### _achieved:_\n- The integration is in progress through N-API framework\n\n#### _next:_\n- Make the Node.js integration work properly by running the _nwaku_ node in a separate thread.\n\n---\n\n## go-waku (WIP)\n\n\n---\n\n## js-waku (WIP)\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- spec test for connection manager\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n### **Milestone**: Static Sharding\n#### _next_: \n- start implementation of static sharding in js-waku\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- js-libp2p upgrade to remove usage of polyfills (draft PR)\n\n#### _next_: \n- merge and release js-libp2p upgrade\n\n### **Milestone**: Waku Relay in the Browser\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","content":"\n## Docs\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n#### _next:_ \n- rewrite docs in British English\n### **Milestone**: Running nwaku in the cloud\n#### _next:_ \n- publish guides for Digital Ocean, Oracle, Fly.io\n\n---\n## Eco Dev (WIP)\n\n---\n## Research\n\n### **Milestone**: Detailed network requirements and task breakdown\n#### _achieved:_ \n- gathering rough network requirements\n#### _next:_ \n- detailed task breakdown per milestone and effort allocation\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- update FILTER \u0026 STORE for autosharding\n#### _next:_ \n- RFC review \u0026 updates \n- code review \u0026 updates\n\n---\n## nwaku\n\n### **Milestone**: nwaku release process automation\n#### _next_:\n- setup automation to test/simulate current `master` to prevent/limit regressions\n- expand target architectures and platforms for release artifacts (e.g. arm64, Win...)\n### **Milestone**: HTTP Rest API for protocols\n#### _next:_ \n- Filter API added \n- tests to complete.\n\n---\n## go-waku\n\n### **Milestone**: Increase Maintainability Score. Refer to [CodeClimate report](https://codeclimate.com/github/waku-org/go-waku)\n#### _next:_ \n- define scope on which issues reported by CodeClimate should be fixed. 
Initially it should be limited to reduce code complexity and duplication.\n\n### **Milestone**: RLN updates, refer to [issue](https://github.com/waku-org/go-waku/issues/608).\n_achieved_:\n- expose `set_tree`, `key_gen`, `seeded_key_gen`, `extended_seeded_keygen`, `recover_id_secret`, `set_leaf`, `init_tree_with_leaves`, `set_metadata`, `get_metadata` and `get_leaf` \n- created an example on how to use RLN with go-waku\n- service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n#### _next_: \n- merkle tree batch operations (in progress) \n- usage of persisted merkle tree db\n\n### **Milestone**: Improve test coverage for functional tests of all protocols. Refer to [CodeClimate report]\n#### _next_: \n- define scope on which code sections should be covered by tests\n\n### **Milestone**: C-Bindings\n#### _next_: \n- update API to match nwaku's (by using callbacks instead of strings that require freeing)\n\n---\n## js-waku\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n#### _next_: \n- fallback improvement for peer connect rejection\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n#### _next_: \n- more robust support around peer-exchange for examples\n### **Milestone**: Static Sharding\n#### _achieved_: \n- WIP implementation of static sharding in js-waku\n#### _next_: \n- investigation around gauging connection loss\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- improve \u0026 update @waku/react \n- merge and release js-libp2p upgrade\n\n#### _next:_\n- update examples to latest release + make sure no old/unused packages there\n\n### **Milestone**: Maintenance\n#### _achieved_: \n- update to libp2p@0.46.0\n#### _next_:\n- suite of optional tests in pipeline\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","content":"\nMilestones for current work are created and used. 
Next steps are:\n1) Refine scope of [research work](https://github.com/waku-org/research/issues/3) for the rest of the year and create matching milestones for research and waku clients\n2) Review work not coming from research and set dates\nNote that the format matches the Notion page but can be changed easily as it's scripted\n\n\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n- _blocker_: \n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Docker compose with `nwaku` + `postgres` + `prometheus` + `grafana` + `postgres_exporter` https://github.com/alrevuelta/nwaku-compose/pull/3\n- _next_: Carry on with stress testing\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: feedback/update cycles for FILTER \u0026 LIGHTPUSH\n- _next_: New fleet, updating ENR from live subscriptions and merging\n- _blocker_: Architecturally it seems difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\n**[Move Waku v1 and Waku-Bridge to new repos](https://github.com/waku-org/nwaku/issues/1767)** {E:2023-qa}\n\n- _achieved_: Removed v1 and wakubridge code from nwaku repo\n- _next_: Remove references to `v2` from nwaku directory structure and documents\n\n**[nwaku c-bindings](https://github.com/waku-org/nwaku/issues/1332)** {E:2023-many-platforms}\n\n- _achieved_:\n - Moved the Waku execution into a secondary working thread. Essential for NodeJs.\n - Adapted the NodeJs example to use the `libwaku` with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing. \n- _next_: start applying the thread-safety recommendations https://github.com/waku-org/nwaku/issues/1878\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. 
Get v2 filter service supported from node.\n\n---\n## js-waku\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\n- _next_: saving successfully connected PX peers to local storage for easier connections on reload\n\n**[Waku Relay scalability in the Browser](https://github.com/waku-org/js-waku/issues/905)** {NO EPIC}\n\n- _achieved_: draft of direct browser-browser RTC example https://github.com/waku-org/js-waku-examples/pull/260 \n- _next_: improve the example (connection re-usage), work on contentTopic based RTC example\n\n---\n## go-waku\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: updated c-bindings to use callbacks\n- _next_: refactor v1 encoding functions and update RFC\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Enabled -race flag and ran all unit tests to identify data races.\n- _next_: Fix issues reported by the data race detector tool\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\n- _next_: resume onchain sync from persisted tree db\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: Basic peer management to ensure standard in/out ratio for relay peers.\n- _next_: add service slots to peer manager\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: production of swags and marketing collaterals for web3conf completed\n- _next_: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)** {E:2023-eco-growth}\n\n- _next_: create guide on `@waku/react` and debugging js-waku web apps\n\n**[Docs general improvement/incorporating feedback (2023)](https://github.com/waku-org/docs.waku.org/issues/102)** {E:2023-eco-growth}\n\n- _achieved_: rewrote the docs in UK English\n- _next_: update docs terms, announce js-waku docs\n\n**[Foundation of js-waku docs](https://github.com/waku-org/docs.waku.org/issues/101)** {E:2023-eco-growth}\n\n_achieved_: added guide on js-waku bootstrapping\n\n---\n## Research\n\n**[1.1 Network requirements and task breakdown](https://github.com/waku-org/research/issues/6)** {E:2023-1mil-users}\n\n- _achieved_: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\n- _next_: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","content":"\n\n# 2023-08-14 Waku weekly\n---\n## Epics\n\n**[Waku Network Can Support 10K Users](https://github.com/waku-org/pm/issues/12)** {E:2023-10k-users}\n\nAll software has been delivered. 
Pending items are:\n- Running stress testing on PostgreSQL to confirm performance gain https://github.com/waku-org/nwaku/issues/1894\n- Setting up a staging fleet for Status to try static sharding\n- Running simulations for Store protocol: [Will confirm with Vac/DST on dates/commitment](https://github.com/vacp2p/research/issues/191#issuecomment-1672542165) and probably move this to [1mil epic](https://github.com/waku-org/pm/issues/31)\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\n- _next_: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\n- _blocker_: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)**\n\n- _next_: document notes/recommendations for NodeJS, begin docs on `js-waku` encryption\n\n---\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: minor CI fixes and improvements\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Learned that the insertion rate is constrained by the `relay` protocol. i.e. the maximum insert rate is limited by `relay` so I couldn't push the \"insert\" operation to a limit from a _Postgres_ point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the _relay_ protocol doesn't process all of them.\n- _next_: Carry on with stress testing. Analyze the performance differences between _Postgres_ and _SQLite_ regarding the _read_ operations.\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: many feedback/update cycles for FILTER, LIGHTPUSH, STORE \u0026 RFC\n- _next_: updating ENR for live subscriptions\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. 
Add more tests.\n\n---\n## js-waku\n\n**[Maintenance](https://github.com/waku-org/js-waku/issues/1455)** {E:2023-qa}\n\n- achieved: upgrade libp2p \u0026 chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict \n\n**[Developer Experience (2023)](https://github.com/waku-org/js-waku/issues/1453)** {E:2023-eco-growth}\n\n- _achieved_: non blocking pipeline step (https://github.com/waku-org/js-waku/issues/1411)\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: close the \"fallback mechanism for peer rejections\", refactor peer-exchange compliance test\n- _next_: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n---\n## go-waku\n\n**[Maintenance](https://github.com/waku-org/go-waku/issues/634)** {E:2023-qa}\n\n- _achieved_: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Fixed issues reported by the data race detector tool.\n- _next_: identify areas where test coverage needs improvement.\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\n- _next_: interop with nwaku\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: add service slots to peer manager.\n- _next_: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]}} \ No newline at end of file diff --git a/indices/contentIndex.d52fe1dd91efb5701e534220b9af8ad3.min.json b/indices/contentIndex.d52fe1dd91efb5701e534220b9af8ad3.min.json deleted file mode 100644 index e5409040d..000000000 --- a/indices/contentIndex.d52fe1dd91efb5701e534220b9af8ad3.min.json +++ /dev/null @@ -1 +0,0 @@ -{"/":{"title":"Logos Technical Roadmap and Activity","content":"This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. 
To learn more about the motivation, please visit the [Logos Collective Site](https://logos.co).\n\n## Navigation\n\n### Waku\n- [Milestones](roadmap/waku/milestones-overview.md)\n- [weekly updates](tags/waku-updates)\n\n### Codex\n- [Milestones](roadmap/codex/milestones-overview.md)\n- [weekly updates](tags/codex-updates)\n\n### Nomos\n- [Milestones](roadmap/nomos/milestones-overview.md)\n- [weekly updates](tags/nomos-updates)\n\n### Vac\n- [Milestones](roadmap/vac/milestones-overview.md)\n- [weekly updates](tags/vac-updates)\n\n### Innovation Lab\n- [Milestones](roadmap/innovation_lab/milestones-overview.md)\n- [weekly updates](tags/ilab-updates)\n\n### Comms (Acid Info)\n- [Milestones](roadmap/acid/milestones-overview.md)\n- [weekly updates](tags/acid-updates)\n","lastmodified":"2023-08-21T15:49:54.901241828Z","tags":[]},"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":{"title":"CJK + Latex Support (测试)","content":"\n## Chinese, Japanese, Korean Support\n几乎在我们意识到之前,我们已经离开了地面。\n\n우리가 그것을 알기도 전에 우리는 땅을 떠났습니다.\n\n私たちがそれを知るほぼ前に、私たちは地面を離れていました。\n\n## Latex\n\nBlock math works with two dollar signs `$$...$$`\n\n$$f(x) = \\int_{-\\infty}^\\infty\n f\\hat(\\xi),e^{2 \\pi i \\xi x}\n \\,d\\xi$$\n\t\nInline math also works with single dollar signs `$...$`. For example, Euler's identity but inline: $e^{i\\pi} = 0$\n\nAligned equations work quite well:\n\n$$\n\\begin{aligned}\na \u0026= b + c \\\\ \u0026= e + f \\\\\n\\end{aligned}\n$$\n\nAnd matrices\n\n$$\n\\begin{bmatrix}\n1 \u0026 2 \u0026 3 \\\\\na \u0026 b \u0026 c\n\\end{bmatrix}\n$$\n\n## RTL\nMore information on configuring RTL languages like Arabic in the [config](config.md) page.\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/callouts":{"title":"Callouts","content":"\n## Callout support\n\nQuartz supports the same Admonition-callout syntax as Obsidian.\n\nThis includes\n- 12 Distinct callout types (each with several aliases)\n- Collapsable callouts\n\nSee [documentation on supported types and syntax here](https://help.obsidian.md/How+to/Use+callouts#Types).\n\n## Showcase\n\n\u003e [!EXAMPLE] Examples\n\u003e\n\u003e Aliases: example\n\n\u003e [!note] Notes\n\u003e\n\u003e Aliases: note\n\n\u003e [!abstract] Summaries \n\u003e\n\u003e Aliases: abstract, summary, tldr\n\n\u003e [!info] Info \n\u003e\n\u003e Aliases: info, todo\n\n\u003e [!tip] Hint \n\u003e\n\u003e Aliases: tip, hint, important\n\n\u003e [!success] Success \n\u003e\n\u003e Aliases: success, check, done\n\n\u003e [!question] Question \n\u003e\n\u003e Aliases: question, help, faq\n\n\u003e [!warning] Warning \n\u003e\n\u003e Aliases: warning, caution, attention\n\n\u003e [!failure] Failure \n\u003e\n\u003e Aliases: failure, fail, missing\n\n\u003e [!danger] Error\n\u003e\n\u003e Aliases: danger, error\n\n\u003e [!bug] Bug\n\u003e\n\u003e Aliases: bug\n\n\u003e [!quote] Quote\n\u003e\n\u003e Aliases: quote, cite\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/config":{"title":"Configuration","content":"\n## Configuration\nQuartz is designed to be extremely configurable. You can find the bulk of the configuration scattered throughout the repository depending on how in-depth you'd like to get.\n\nThe majority of configuration can be found under `data/config.yaml`. 
An annotated example configuration is shown below.\n\n```yaml {title=\"data/config.yaml\"}\n# The name to display in the footer\nname: Jacky Zhao\n\n# whether to globally show the table of contents on each page\n# this can be turned off on a per-page basis by adding this to the\n# front-matter of that note\nenableToc: true\n\n# whether to by-default open or close the table of contents on each page\nopenToc: false\n\n# whether to display on-hover link preview cards\nenableLinkPreview: true\n\n# whether to render titles for code blocks\nenableCodeBlockTitle: true \n\n# whether to render copy buttons for code blocks\nenableCodeBlockCopy: true \n\n# whether to render callouts\nenableCallouts: true\n\n# whether to try to process Latex\nenableLatex: true\n\n# whether to enable single-page-app style rendering\n# this prevents flashes of unstyled content and improves\n# smoothness of Quartz. More info in issue #109 on GitHub\nenableSPA: true\n\n# whether to render a footer\nenableFooter: true\n\n# whether backlinks of pages should show the context in which\n# they were mentioned\nenableContextualBacklinks: true\n\n# whether to show a section of recent notes on the home page\nenableRecentNotes: false\n\n# whether to display an 'edit' button next to the last edited field\n# that links to github\nenableGitHubEdit: true\nGitHubLink: https://github.com/jackyzha0/quartz/tree/hugo/content\n\n# whether to use Operand to power semantic search\n# IMPORTANT: replace this API key with your own if you plan on using\n# Operand search!\nenableSemanticSearch: false\noperandApiKey: \"REPLACE-WITH-YOUR-OPERAND-API-KEY\"\n\n# page description used for SEO\ndescription:\n Host your second brain and digital garden for free. Quartz features extremely fast full-text search,\n Wikilink support, backlinks, local graph, tags, and link previews.\n\n# title of the home page (also for SEO)\npage_title:\n \"🪴 Quartz 3.2\"\n\n# links to show in the footer\nlinks:\n - link_name: Twitter\n link: https://twitter.com/_jzhao\n - link_name: Github\n link: https://github.com/jackyzha0\n```\n\n### Code Block Titles\nTo add code block titles with Quartz:\n\n1. Ensure that code block titles are enabled in Quartz's configuration:\n\n ```yaml {title=\"data/config.yaml\", linenos=false}\n enableCodeBlockTitle: true\n ```\n\n2. Add the `title` attribute to the desired [code block\n fence](https://gohugo.io/content-management/syntax-highlighting/#highlighting-in-code-fences):\n\n ```markdown {linenos=false}\n ```yaml {title=\"data/config.yaml\"}\n enableCodeBlockTitle: true # example from step 1\n ```\n ```\n\n**Note** that if `{title=\u003cmy-title\u003e}` is included, and code block titles are not\nenabled, no errors will occur, and the title attribute will be ignored.\n\n### HTML Favicons\nIf you would like to customize the favicons of your Quartz-based website, you \ncan add them to the `data/config.yaml` file. The **default** without any set \n`favicon` key is:\n\n```html {title=\"layouts/partials/head.html\", linenostart=15}\n\u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n```\n\nThe default can be overridden by defining a value to the `favicon` key in your \n`data/config.yaml` file. For example, here is a `List[Dictionary]` example format, which is\nequivalent to the default:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon:\n - { rel: \"shortcut icon\", href: \"icon.png\", type: \"image/png\" }\n# - { ... 
} # Repeat for each additional favicon you want to add\n```\n\nIn this format, the keys are identical to their HTML representations.\n\nIf you plan to add multiple favicons generated by a website (see list below), it\nmay be easier to define it as HTML. Here is an example which appends the \n**Apple touch icon** to Quartz's default favicon:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon: |\n \u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n \u003clink rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\"\u003e\n```\n\nThis second favicon will now be used as a web page icon when someone adds your \nwebpage to the home screen of their Apple device. If you are interested in more \ninformation about the current and past standards of favicons, you can read \n[this article](https://www.emergeinteractive.com/insights/detail/the-essentials-of-favicons/).\n\n**Note** that all generated favicon paths, defined by the `href` \nattribute, are relative to the `static/` directory.\n\n### Graph View\nTo customize the Interactive Graph view, you can poke around `data/graphConfig.yaml`.\n\n```yaml {title=\"data/graphConfig.yaml\"}\n# if true, a Global Graph will be shown on home page with full width, no backlink.\n# A different set of Local Graphs will be shown on sub pages.\n# if false, Local Graph will be default on every page as usual\nenableGlobalGraph: false\n\n### Local Graph ###\nlocalGraph:\n # whether automatically generate a legend\n enableLegend: false\n \n # whether to allow dragging nodes in the graph\n enableDrag: true\n \n # whether to allow zooming and panning the graph\n enableZoom: true\n \n # how many neighbours of the current node to show (-1 is all nodes)\n depth: 1\n \n # initial zoom factor of the graph\n scale: 1.2\n \n # how strongly nodes should repel each other\n repelForce: 2\n\n # how strongly should nodes be attracted to the center of gravity\n centerForce: 1\n\n # what the default link length should be\n linkDistance: 1\n \n # how big the node labels should be\n fontSize: 0.6\n \n # scale at which to start fading the labes on nodes\n opacityScale: 3\n\n### Global Graph ###\nglobalGraph:\n\t# same settings as above\n\n### For all graphs ###\n# colour specific nodes path off of their path\npaths:\n - /moc: \"#4388cc\"\n```\n\n\n## Styling\nWant to go even more in-depth? You can add custom CSS styling and change existing colours through editing `assets/styles/custom.scss`. If you'd like to target specific parts of the site, you can add ids and classes to the HTML partials in `/layouts/partials`. \n\n### Partials\nPartials are what dictate what gets rendered to the page. Want to change how pages are styled and structured? You can edit the appropriate layout in `/layouts`.\n\nFor example, the structure of the home page can be edited through `/layouts/index.html`. To customize the footer, you can edit `/layouts/partials/footer.html`\n\nMore info about partials on [Hugo's website.](https://gohugo.io/templates/partials/)\n\nStill having problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n\n## Language Support\n[CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) comes out of the box with Quartz.\n\nWant to support languages that read from right-to-left (like Arabic)? 
Hugo (and by proxy, Quartz) supports this natively.\n\nFollow the steps [Hugo provides here](https://gohugo.io/content-management/multilingual/#configure-languages) and modify your `config.toml`\n\nFor example:\n\n```toml\ndefaultContentLanguage = 'ar'\n[languages]\n [languages.ar]\n languagedirection = 'rtl'\n title = 'مدونتي'\n weight = 1\n```\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/custom-Domain":{"title":"Custom Domain","content":"\n### Registrar\nThis step is only applicable if you are using a **custom domain**! If you are using a `\u003cYOUR-USERNAME\u003e.github.io` domain, you can skip this step.\n\nFor this last bit to take effect, you also need to create a CNAME record with the DNS provider you register your domain with (i.e. NameCheap, Google Domains).\n\nGitHub has some [documentation on this](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site), but the tldr; is to\n\n1. Go to your forked repository (`github.com/\u003cYOUR-GITHUB-USERNAME\u003e/quartz`) settings page and go to the Pages tab. Under \"Custom domain\", type your custom domain, then click **Save**.\n2. Go to your DNS Provider and create a CNAME record that points from your domain to `\u003cYOUR-GITHUB-USERNAME.github.io.` (yes, with the trailing period).\n\n\t![Example Configuration for Quartz](google-domains.png)*Example Configuration for Quartz*\n3. Wait 30 minutes to an hour for the network changes to kick in.\n4. Done!","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/editing":{"title":"Editing Content in Quartz","content":"\n## Editing \nQuartz runs on top of [Hugo](https://gohugo.io/) so all notes are written in [Markdown](https://www.markdownguide.org/getting-started/).\n\n### Folder Structure\nHere's a rough overview of what's what.\n\n**All content in your garden can found in the `/content` folder.** To make edits, you can open any of the files and make changes directly and save it. You can organize content into any folder you'd like.\n\n**To edit the main home page, open `/content/_index.md`.**\n\nTo create a link between notes in your garden, just create a normal link using Markdown pointing to the document in question. Please note that **all links should be relative to the root `/content` path**. \n\n```markdown\nFor example, I want to link this current document to `notes/config.md`.\n[A link to the config page](notes/config.md)\n```\n\nSimilarly, you can put local images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\nYou can also use wikilinks if that is what you are more comfortable with!\n\n### Front Matter\nHugo is picky when it comes to metadata for files. Make sure that your title is double-quoted and that you have a title defined at the top of your file like so. You can also add tags here as well.\n\n```yaml\n---\ntitle: \"Example Title\"\ntags:\n- example-tag\n---\n\nRest of your content here...\n```\n\n### Obsidian\nI recommend using [Obsidian](http://obsidian.md/) as a way to edit and grow your digital garden. 
It comes with a really nice editor and graphical interface to preview all of your local files.\n\nThis step is **highly recommended**.\n\n\u003e 🔗 Step 3: [How to setup your Obsidian Vault to work with Quartz](obsidian.md)\n\n## Previewing Changes\nThis step is purely optional and mostly for those who want to see the published version of their digital garden locally before opening it up to the internet. This is *highly recommended* but not required.\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)\n\nFor those who like to live life more on the edge, viewing the garden through Obsidian gets you pretty close to the real thing.\n\n## Publishing Changes\nNow that you know the basics of managing your digital garden using Quartz, you can publish it to the internet!\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/hosting":{"title":"Deploying Quartz to the Web","content":"\n## Hosting on GitHub Pages\nQuartz is designed to be effortless to deploy. If you forked and cloned Quartz directly from the repository, everything should already be good to go! Follow the steps below.\n\n### Enable GitHub Actions\nBy default, GitHub disables workflows from running automatically on Forked Repostories. Head to the 'Actions' tab of your forked repository and Enable Workflows to setup deploying your Quartz site!\n\n![Enable GitHub Actions](github-actions.png)*Enable GitHub Actions*\n\n### Enable GitHub Pages\n\nHead to the 'Settings' tab of your forked repository and go to the 'Pages' tab.\n\n1. (IMPORTANT) Set the source to deploy from `master` (and not `hugo`) using `/ (root)`\n2. Set a custom domain here if you have one!\n\n![Enable GitHub Pages](github-pages.png)*Enable GitHub Pages*\n\n### Pushing Changes\nTo see your changes on the internet, we need to push it them to GitHub. Quartz is a `git` repository so updating it is the same workflow as you would follow as if it were just a regular software project.\n\n```shell\n# Navigate to Quartz folder\ncd \u003cpath-to-quartz\u003e\n\n# Commit all changes\ngit add .\ngit commit -m \"message describing changes\"\n\n# Push to GitHub to update site\ngit push origin hugo\n```\n\nNote: we specifically push to the `hugo` branch here. Our GitHub action automatically runs everytime a push to is detected to that branch and then updates the `master` branch for redeployment.\n\n### Setting up the Site\nNow let's get this site up and running. Never hosted a site before? No problem. Have a fancy custom domain you already own or want to subdomain your Quartz? That's easy too.\n\nHere, we take advantage of GitHub's free page hosting to deploy our site. Change `baseURL` in `/config.toml`. \n\nMake sure that your `baseURL` has a trailing `/`!\n\n[Reference `config.toml` here](https://github.com/jackyzha0/quartz/blob/hugo/config.toml)\n\n```toml\nbaseURL = \"https://\u003cYOUR-DOMAIN\u003e/\"\n```\n\nIf you are using this under a subdomain (e.g. `\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz`), include the trailing `/`. **You need to do this especially if you are using GitHub!**\n\n```toml\nbaseURL = \"https://\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz/\"\n```\n\nChange `cname` in `/.github/workflows/deploy.yaml`. 
Again, if you don't have a custom domain to use, you can use `\u003cYOUR-USERNAME\u003e.github.io`.\n\nPlease note that the `cname` field should *not* have any path `e.g. end with /quartz` or have a trailing `/`.\n\n[Reference `deploy.yaml` here](https://github.com/jackyzha0/quartz/blob/hugo/.github/workflows/deploy.yaml)\n\n```yaml {title=\".github/workflows/deploy.yaml\"}\n- name: Deploy \n uses: peaceiris/actions-gh-pages@v3 \n with: \n\tgithub_token: ${{ secrets.GITHUB_TOKEN }} # this can stay as is, GitHub fills this in for us!\n\tpublish_dir: ./public \n\tpublish_branch: master\n\tcname: \u003cYOUR-DOMAIN\u003e\n```\n\nHave a custom domain? [Learn how to set it up with Quartz ](custom%20Domain.md).\n\n### Ignoring Files\nOnly want to publish a subset of all of your notes? Don't worry, Quartz makes this a simple two-step process.\n\n❌ [Excluding pages from being published](ignore%20notes.md)\n\n---\n\nNow that your Quartz is live, let's figure out how to make Quartz really *yours*!\n\n\u003e Step 6: 🎨 [Customizing Quartz](config.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/ignore-notes":{"title":"Ignoring Notes","content":"\n### Quartz Ignore\nEdit `ignoreFiles` in `config.toml` to include paths you'd like to exclude from being rendered.\n\n```toml\n...\nignoreFiles = [ \n \"/content/templates/*\", \n \"/content/private/*\", \n \"\u003cyour path here\u003e\"\n]\n```\n\n`ignoreFiles` supports the use of Regular Expressions (RegEx) so you can ignore patterns as well (e.g. ignoring all `.png`s by doing `\\\\.png$`).\nTo ignore a specific file, you can also add the tag `draft: true` to the frontmatter of a note.\n\n```markdown\n---\ntitle: Some Private Note\ndraft: true\n---\n...\n```\n\nMore details in [Hugo's documentation](https://gohugo.io/getting-started/configuration/#ignore-content-and-data-files-when-rendering).\n\n### Global Ignore\nHowever, just adding to the `ignoreFiles` will only prevent the page from being access through Quartz. If you want to prevent the file from being pushed to GitHub (for example if you have a public repository), you need to also add the path to the `.gitignore` file at the root of the repository.","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/obsidian":{"title":"Obsidian Vault Integration","content":"\n## Setup\nObsidian is the preferred way to use Quartz. You can either create a new Obsidian Vault or link one that your already have.\n\n### New Vault\nIf you don't have an existing Vault, [download Obsidian](https://obsidian.md/) and create a new Vault in the `/content` folder that you created and cloned during the [setup](setup.md) step.\n\n### Linking an existing Vault\nThe easiest way to use an existing Vault is to copy all of your files (directory and hierarchies intact) into the `/content` folder.\n\n## Settings\nGreat, now that you have your Obsidian linked to your Quartz, let's fix some settings so that they play well.\n\n1. Under Options \u003e Files and Links, set the New link format to always use Absolute Path in Vault.\n2. Go to Settings \u003e Files \u0026 Links \u003e Turn \"on\" automatically update internal links.\n\n![Obsidian Settings](obsidian-settings.png)*Obsidian Settings*\n\n## Templates\nInserting front matter everytime you want to create a new Note gets annoying really quickly. 
Luckily, Obsidian supports templates which makes inserting new content really easily.\n\n**If you decide to overwrite the `/content` folder completely, don't remove the `/content/templates` folder!**\n\nHead over to Options \u003e Core Plugins and enable the Templates plugin. Then go to Options \u003e Hotkeys and set a hotkey for 'Insert Template' (I recommend `[cmd]+T`). That way, when you create a new note, you can just press the hotkey for a new template and be ready to go!\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/philosophy":{"title":"Quartz Philosophy","content":"\n\u003e “[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” — Richard Hamming\n\n## Why Quartz?\nHosting a public digital garden isn't easy. There are an overwhelming number of tutorials, resources, and guides for tools like [Notion](https://www.notion.so/), [Roam](https://roamresearch.com/), and [Obsidian](https://obsidian.md/), yet none of them have super easy to use *free* tools to publish that garden to the world.\n\nI've personally found that\n1. It's nice to access notes from anywhere\n2. Having a public digital garden invites open conversations\n3. It makes keeping personal notes and knowledge *playful and fun*\n\nI was really inspired by [Bianca](https://garden.bianca.digital/) and [Joel](https://joelhooks.com/digital-garden)'s digital gardens and wanted to try making my own.\n\n**The goal of Quartz is to make hosting your own public digital garden free and simple.** You don't even need your own website. Quartz does all of that for you and gives your own little corner of the internet.\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/preview-changes":{"title":"Preview Changes","content":"\nIf you'd like to preview what your Quartz site looks like before deploying it to the internet, here's exactly how to do that!\n\nNote that both of these steps need to be completed.\n\n## Install `hugo-obsidian`\nThis step will generate the list of backlinks for Hugo to parse. Ensure you have [Go](https://golang.org/doc/install) (\u003e= 1.16) installed.\n\n```bash\n# Install and link `hugo-obsidian` locally\ngo install github.com/jackyzha0/hugo-obsidian@latest\n```\n\nIf you are running into an error saying that `command not found: hugo-obsidian`, make sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize hugo-obsidian as an executable.\n\nAfterwards, start the Hugo server as shown above and your local backlinks and interactive graph should be populated!\n\n## Installing Hugo\nHugo is the static site generator that powers Quartz. [Install Hugo with \"extended\" Sass/SCSS version](https://gohugo.io/getting-started/installing/) first. Then,\n\n```bash\n# Navigate to your local Quartz folder\ncd \u003clocation-of-your-local-quartz\u003e\n\n# Start local server\nmake serve\n\n# View your site in a browser at http://localhost:1313/\n```\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/search":{"title":"Search","content":"\nQuartz supports two modes of searching through content.\n\n## Full-text\nFull-text search is the default in Quartz. It produces results that *exactly* match the search query. 
This is easier to setup but usually produces lower quality matches.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: false\n```\n\n## Natural Language\nNatural language search is powered by [Operand](https://operand.ai/). It understands language like a person does and finds results that best match user intent. In this sense, it is closer to how Google Search works.\n\nNatural language search tends to produce higher quality results than full-text search.\n\nHere's how to set it up.\n\n1. Create an Operand Account on [their website](https://operand.ai/).\n2. Go to Dashboard \u003e Settings \u003e Integrations.\n3. Follow the steps to setup the GitHub integration. Operand needs access to GitHub in order to index your digital garden properly!\n4. Head over to Dashboard \u003e Objects and press `(Cmd + K)` to open the omnibar and select 'Create Collection'.\n\t1. Set the 'Collection Label' to something that will help you remember it.\n\t2. You can leave the 'Parent Collection' field empty.\n5. Click into your newly made Collection.\n\t1. Press the 'share' button that looks like three dots connected by lines.\n\t2. Set the 'Interface Type' to `object-search` and click 'Create'.\n\t3. This will bring you to a new page with a search bar. Ignore this for now.\n6. Go back to Dashboard \u003e Settings \u003e API Keys and find your Quartz-specific Operand API key under 'Other keys'.\n\t1. Copy the key (which looks something like `0e733a7f-9b9c-48c6-9691-b54fa1c8b910`).\n\t2. Open `data/config.yaml`. Set `enableSemanticSearch` to `true` and `operandApiKey` to your copied key.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: true\noperandApiKey: \"0e733a7f-9b9c-48c6-9691-b54fa1c8b910\"\n```\n7. Make a commit and push your changes to GitHub. See the [[hosting|hosting]] page if you haven't done this already.\n\t1. This step is *required* for Operand to be able to properly index your content. \n\t2. Head over to Dashboard \u003e Objects and select the collection that you made earlier\n8. Press `(Cmd + K)` to open the omnibar again and select 'Create GitHub Repo'\n\t1. Set the 'Repository Label' to `Quartz`\n\t2. Set the 'Repository Owner' to your GitHub username\n\t3. Set the 'Repository Ref' to `master`\n\t4. Set the 'Repository Name' to the name of your repository (usually just `quartz` if you forked the repository without changing the name)\n\t5. Leave 'Root Path' and 'Root URL' empty\n9. Wait for your repository to index and enjoy natural language search in Quartz! Operand refreshes the index every 2h so all you need to do is just push to GitHub to update the contents in the search.","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/setup":{"title":"Setup","content":"\n## Making your own Quartz\nSetting up Quartz requires a basic understanding of `git`. If you are unfamiliar, [this resource](https://resources.nwplus.io/2-beginner/how-to-git-github.html) is a great place to start!\n\n### Forking\n\u003e A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.\n\nNavigate to the GitHub repository for the Quartz project:\n\n📁 [Quartz Repository](https://github.com/jackyzha0/quartz)\n\nThen, Fork the repository into your own GitHub account. If you don't have an account, you can make on for free [here](https://github.com/join). 
More details about forking a repo can be found on [GitHub's documentation](https://docs.github.com/en/get-started/quickstart/fork-a-repo).\n\n### Cloning\nAfter you've made a fork of the repository, you need to download the files locally onto your machine. Ensure you have `git`, then type the following command replacing `YOUR-USERNAME` with your GitHub username.\n\n```shell\ngit clone https://github.com/YOUR-USERNAME/quartz\n```\n\n## Editing\nGreat! Now you have everything you need to start editing and growing your digital garden. If you're ready to start writing content already, check out the recommended flow for editing notes in Quartz.\n\n\u003e ✏️ Step 2: [Editing Notes in Quartz](editing.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/showcase":{"title":"Showcase","content":"\nWant to see what Quartz can do? Here are some cool community gardens :)\n\n- [Quartz Documentation (this site!)](https://quartz.jzhao.xyz/)\n- [Jacky Zhao's Garden](https://jzhao.xyz/)\n- [Scaling Synthesis - A hypertext research notebook](https://scalingsynthesis.com/)\n- [AWAGMI Intern Notes](https://notes.awagmi.xyz/)\n- [Shihyu's PKM](https://shihyuho.github.io/pkm/)\n- [Chloe's Garden](https://garden.chloeabrasada.online/)\n- [SlRvb's Site](https://slrvb.github.io/Site/)\n- [Course notes for Information Technology Advanced Theory](https://a2itnotes.github.io/quartz/)\n- [Brandon Boswell's Garden](https://brandonkboswell.com)\n- [Siyang's Courtyard](https://siyangsun.github.io/courtyard/)\n\nIf you want to see your own on here, submit a [Pull Request adding yourself to this file](https://github.com/jackyzha0/quartz/blob/hugo/content/notes/showcase.md)!\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/troubleshooting":{"title":"Troubleshooting and FAQ","content":"\nStill having trouble? Here are a list of common questions and problems people encounter when installing Quartz.\n\nWhile you're here, join our [Discord](https://discord.gg/cRFFHYye7t) :)\n\n### Does Quartz have Latex support?\nYes! See [CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) for a brief demo.\n\n### Can I use \\\u003cObsidian Plugin\\\u003e in Quartz?\nUnless it produces direct Markdown output in the file, no. There currently is no way to bundle plugin code with Quartz.\n\nThe easiest way would be to add your own HTML partial that supports the functionality you are looking for.\n\n### My GitHub pages is just showing the README and not Quartz\nMake sure you set the source to deploy from `master` (and not `hugo`) using `/ (root)`! See more in the [hosting](hosting.md) guide\n\n### Some of my pages have 'January 1, 0001' as the last modified date\nThis is a problem caused by `git` treating files as case-insensitive by default and some of your posts probably have capitalized file names. You can turn this off in your Quartz by running this command.\n\n```shell\n# in the root of your Quartz (same folder as config.toml)\ngit config core.ignorecase true\n\n# or globally (not recommended)\ngit config --global core.ignorecase true\n```\n\n### Can I publish only a subset of my pages?\nYes! Quartz makes selective publishing really easy. Heres a guide on [excluding pages from being published](ignore%20notes.md).\n\n### Can I host this myself and not on GitHub Pages?\nYes! All built files can be found under `/public` in the `master` branch. 
More details under [hosting](hosting.md).\n\n### `command not found: hugo-obsidian`\nMake sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize `hugo-obsidian` as an executable.\n\n```shell\n# Add the following 2 lines to your ~/.bash_profile\nexport GOPATH=/Users/$USER/go\nexport PATH=$GOPATH/bin:$PATH\n\n# In your current terminal, to reload the session\nsource ~/.bash_profile\n```\n\n### How come my notes aren't being rendered?\nYou probably forgot to include front matter in your Markdown files. You can either setup [Obsidian](obsidian.md) to do this for you or you need to manually define it. More details in [the 'how to edit' guide](editing.md).\n\n### My custom domain isn't working!\nWalk through the steps in [the hosting guide](hosting.md) again. Make sure you wait 30 min to 1 hour for changes to take effect.\n\n### How do I setup Google Analytics?\nYou can edit it in `config.toml` and either use a V3 (UA-) or V4 (G-) tag.\n\n### How do I change the content on the home page?\nTo edit the main home page, open `/content/_index.md`.\n\n### How do I change the colours?\nYou can change the theme by editing `assets/custom.scss`. More details on customization and themeing can be found in the [customization guide](config.md).\n\n### How do I add images?\nYou can put images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\n### My Interactive Graph and Backlinks aren't up to date\nBy default, the `linkIndex.json` (which Quartz needs to generate the Interactive Graph and Backlinks) are not regenerated locally. To set that up, see the guide on [local editing](editing.md)\n\n### Can I use React/Vue/some other framework?\nNot out of the box. You could probably make it work by editing `/layouts/_default/single.html` but that's not what Quartz is designed to work with. 99% of things you are trying to do with those frameworks you can accomplish perfectly fine using just vanilla HTML/CSS/JS.\n\n## Still Stuck?\nQuartz isn't perfect! If you're still having troubles, file an issue in the GitHub repo with as much information as you can reasonably provide. Alternatively, you can message me on [Twitter](https://twitter.com/_jzhao) and I'll try to get back to you as soon as I can.\n\n🐛 [Submit an Issue](https://github.com/jackyzha0/quartz/issues)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/updating":{"title":"Updating","content":"\nHaven't updated Quartz in a while and want all the cool new optimizations? On Unix/Mac systems you can run the following command for a one-line update! This command will show you a log summary of all commits since you last updated, press `q` to acknowledge this. Then, it will show you each change in turn and press `y` to accept the patch or `n` to reject it. Usually you should press `y` for most of these unless it conflicts with existing changes you've made! 
\n\n```shell\nmake update\n```\n\nOr, if you don't want the interactive parts and just want to force update your local garden (this assumes that you are okay with some of your personalizations being overridden!)\n\n```shell\nmake update-force\n```\n\nOr, manually check out the changes yourself.\n\n\u003e [!warning] Warning!\n\u003e\n\u003e If you customized the files in `data/`, or anything inside `layouts/`, your customization may be overwritten!\n\u003e Make sure you have a copy of these changes if you don't want to lose them.\n\n\n```shell\n# add Quartz as a remote host\ngit remote add upstream git@github.com:jackyzha0/quartz.git\n\n# index and fetch changes\ngit fetch upstream\ngit checkout -p upstream/hugo -- layouts .github Makefile assets/js assets/styles/base.scss assets/styles/darkmode.scss config.toml data \n```\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/requirements/overview":{"title":"Logos Network Requirements Overview","content":"\nThis document describes the requirements of the Logos Network.\n\n\u003e Network sovereignty is an extension of the collective sovereignty of the individuals within. \n\n\u003e Meaningful participation in the network should be achievable with affordable and accessible consumer-grade hardware.\n\n\u003e Privacy by default. \n\n\u003e A given CiC should have the option to gracefully exit the network and operate on its own.\n\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["requirements"]},"/private/roadmap/consensus/candidates/carnot/FAQ":{"title":"Frequently Asked Questions","content":"\n## Network Requirements and Assumptions\n\n### What assumptions do we need Waku to fulfill? - Corey\n\u003e `Moh:` Waku needs to fulfill the following requirements, taken from the Carnot paper:\n\n\u003e **Definition 3** (Probabilistic Reliable Dissemination). _After the GST, and when the leader is correct, all the correct nodes deliver the proposal sent by the leader (w.h.p)._\n\n\u003e **Definition 4** (Probabilistic Fulfillment). _After the GST, and when the current and previous leaders are correct, the number of votes collected by the current leader is $2c+1$ (w.h.p)._\n\n## Tradeoffs\n\n### I think the main clear disadvantage of such a scheme is the added latency of the multiple layers. - Alvaro\n\n\u003e `Moh:` The added latency will be O(log(n/C)), where C is the committee size. But I guess it will be hard to avoid it. Though it also depends on how fast the network layer (potentially Waku) propagates messages, and on the execution time of the transactions as well.\n\n\u003e `Alvaro:` Well IIUC the only latency we are introducing is directly proportional to the levels of subcommittee nesting (i.e. the log(n/C)), which is understandably the price to pay. We have to make sure though that what we gain by introducing this is really worth the extra cost vs the typical committee formation via RANDAO or perhaps VDFs.\n\n\u003e `Moh:` Again, typical committee formation with RANDAO can reduce the wait-time value to match our latency, but then it becomes vulnerable and fails if the network latency becomes greater than the slot interval. If they keep it too large it may not fail, but it becomes slow. We won't have that problem. If an adversary has the power to slow down the network then their liveness will fail, whereas we won't have that issue.
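\n\nTo make the latency figure in this thread concrete, here is a minimal, hypothetical sketch (not taken from the Carnot paper or codebase; the function name, per-level delay, and example numbers are ours): it counts the committee-tree levels, roughly $\\log_2(n/C)$ for a binary overlay, and multiplies by an assumed per-level propagation delay.\n\n```python\n# Hypothetical back-of-the-envelope sketch of the added latency discussed above.\n# Assumptions (not protocol constants): a binary committee tree with n nodes,\n# committee size C, and a uniform per-level propagation delay.\nfrom math import ceil, log2\n\ndef added_latency_ms(n: int, committee_size: int, per_level_delay_ms: float) -\u003e float:\n    # Levels of the overlay tree (~log2(n/C)) times an assumed per-level delay.\n    levels = max(1, ceil(log2(n / committee_size)))\n    return levels * per_level_delay_ms\n\n# Example: 10,000 nodes, committees of 500, and 200 ms per level give ~1 s of added latency.\nprint(added_latency_ms(10_000, 500, 200.0))\n```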
\n\n## How would you compare Aptos and Carnot? - Alvaro\n\n\u003e `Moh:` It is a variant of DiemBFT, and Sui is based on Narwhal; both cannot scale to more than a few hundred nodes. That is why they achieve that low latency.\n\n\u003e `Alvaro:` Yes, so they need to select a committee of that size in order to operate at that latency. What's wrong with selecting a committee vs Carnot's solution? This I'm asking genuinely to understand and because everyone will ask this question when we release.\n\n\u003e `Moh:` When you select a committee you have to wait for a time slot to make sure the result of consensus has propagated. Again, a strong synchrony assumption (slot time), formation of forks, and an increased PoS attack vector come into play.\nWithin a committee the protocol does not need a wait time, but for its results to be propagated (if scalability is to be achieved) either a wait time has to be added or signatures have to be collected from thousands of nodes.\n\n\u003e `Alvaro:` Can you elaborate?\n\n\u003e `Moh:` Ethereum (and any other protocol that runs the consensus in a single committee selected from a large group of nodes) has a wait time so that the output of the consensus propagates to all honest nodes before the next committee is selected. Else the next committee will fail, or only forks will be formed and the chain length won't increase. But since this wait time, as stated, increases latency and makes the protocol vulnerable, Ethereum wants to avoid it to achieve responsiveness. To avoid the wait time (add responsiveness) a protocol has to collect attestation signatures from 2/3rd of all nodes (not a single committee) to move to the second round (Carnot is already responsive). But aggregating and verifying thousands of signatures is expensive and time consuming. This is why they are working to improve BLS signatures. Instead we have changed the consensus protocol in such a way that a small number of signatures need to be aggregated and verified to achieve responsiveness and fast finality. We can further improve performance by using the improved BLS signatures.\n\n\u003e One cannot achieve fast finality while running the consensus in a small committee, because attestation of a block within a single committee is not enough. This block can be averted if the leader of the next committee has not seen it. Therefore, there should be enough delay so that all honest nodes can see it. This is why we have this wait/slot time. Another issue is that a malicious leader from the next chosen committee can also avert a block of an honest leader, hence preventing honest leaders from getting rewards. If blocks of honest leaders are averted for a long time, the stake of malicious leaders will increase. Moreover, malicious leaders can delay blocks of honest nodes by making forks and averting them. Addressing these issues will further complicate the protocol, while still lacking fast finality.\n\n## Data Distribution\n\n### What failure rate of erasure-code transmission are we expecting? Basically, what are the EC coding parameters that we expect to be sending such that we have some failure rate of transmission? Has that been looked into? - Dmitriy\n\u003e `Moh:` This is a great question and it points to the tension between failure rate and overhead. We have briefly looked into this (today Marcin @madxor and I discussed such cases), but we haven’t thoroughly analyzed it. In our case, the rate of failure also depends on committee size. We look into $10^{-3}$ to $10^{-6}$ probability of failure. And in this case, the coding overhead can be somewhere between 200%-500% approximately. This means for a committee size of 500 (while expecting receipt of messages from 251 correct nodes), for a failure rate of $10^{-6}$ a single node has to send \u003e 6Mb of data for 1Mb of actual data. Though 5x overhead is large, it still prevents us from sending/receiving 500 Mb of data in return for a failure probability of 1 proposal out of 1 million. From the protocol perspective, we can address EC failures in multiple ways: a: The root committee forwards the coded chunks only when it has successfully rebuilt the block, which means the root committee can be contacted to download additional coded chunks to decode the block. b: We allow this failure and let the leader be replaced, but since there is proof that the failure occurred because a decoder failed to reconstruct the block, the leader cannot be punished (if we choose to employ punishment in PoS).
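\n\nAs a rough, hypothetical illustration of the failure-rate vs. overhead tension described above (the independence assumption, the loss probability, and the helper name are ours, not from the Carnot analysis), the sketch below treats a block as `k` needed chunks out of `n` coded chunks, each lost independently, and compares the per-node bandwidth with and without erasure coding.\n\n```python\n# Hypothetical sketch of the failure-vs-overhead tension discussed above; the\n# example numbers and the independence assumption are illustrative only.\nfrom math import comb\n\ndef decode_failure_prob(n_chunks: int, k_needed: int, p_loss: float) -\u003e float:\n    # P[fewer than k_needed of n_chunks independently delivered chunks arrive]\n    return sum(\n        comb(n_chunks, i) * (1 - p_loss) ** i * p_loss ** (n_chunks - i)\n        for i in range(k_needed)\n    )\n\nblock_mb = 1.0      # actual proposal size used in the example above\ncommittee = 500     # committee size used in the example above\noverhead = 5.0      # ~500% coding overhead, i.e. ~6 MB sent per node\n\ncoded_mb = block_mb * (1 + overhead)   # ~6 MB with erasure coding\nnaive_mb = block_mb * committee        # ~500 MB if the full block went to every member\nprint(coded_mb, naive_mb)\nprint(decode_failure_prob(n_chunks=500, k_needed=251, p_loss=0.4))  # on the order of 1e-6\n```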
\n\n### How large should a given block be? Are there limits on this, and if so, what are they and what do they depend on? - Dmitriy\n\u003e `Moh:` This question can be answered during simulations and experiments over links of different bandwidths and latencies. We will test the protocol performance with different block sizes. As we know, increasing the block size results in increased throughput as well as latency. The most appropriate block size can be determined once we observe the tradeoff between throughput and latency.\n\n## Signature Propagation\n\n### Who sends the signatures up from a given committee? Do they have any leader-like power within the committee? - Tanguy\n\u003e `Moh:` Each node in a committee multicasts its vote to all members of the parent committee. Since the size of the vote is small, the bit complexity will be low. Introducing a leader within each committee would create a single point of failure within each committee. This is why we avoid maintaining a leader within each committee.\n\n## Network Scale\n\n### What is our expected minimum number of nodes within the network? - Dmitriy\n\u003e `Moh:` For a small number of nodes we can have just a single committee. But I am not sure how many nodes will join our network. \n\n## Byzantine Behavior\n\n### Can we also consider a flavor that adds attestation/attribution to misbehaving nodes? That will come at a price, but there might be a set of use cases which would like to have lower performance with strong attribution. Not saying that it must be part of the initial design, but it can be thought through/added later. - Marcin\n\u003e `Moh:` Attestation to misbehaving nodes is part of this protocol. For example, if a node sends an incorrect vote or if a leader proposes an invalid transaction, then this proof will be shared with the network to punish the misbehaving nodes (though currently this is not part of the pseudocode). But it is not possible to reliably prove non-participation.\n\n\u003e `Marcin:` Great, and definitely, we cannot attest that a node was not participating - I was not suggesting that ;). But we can also think about extending the attestation to the lazy-participant case (if it’s not already part of the protocol).\n\n\u003e `Moh:` OK, thanks for the clarification 😁 . Of course we can have this feature to forward the proof of participation of successor committees. In the first version of Carnot we had this feature as a sliding window. One could choose the size of the window (in terms of tree levels) for which a node should forward the proof of participation. In the most recent version the size of the sliding window is 0. 
And it is 1 for the root committee. It means root committee members have to forward the proof of participation of their child committee members. Since I was able to prove protocol correctness without forwarding the proofs, we avoid it. But it can be part of the protocol without any significant changes to the protocol.\n\n\u003e If the proof scheme is efficient in practice (as the results you presented) and the cost of creating and verifying proofs is not significant, then actually adding proofs can be good. But it is not required.\n\n### Also, how do you reward online validators / punish offline ones if you can't prove at the block level that someone attested or not? - Tanguy\n\u003e `Moh:` This is very tricky and so far no one has done it right (to my knowledge). The current reward mechanism for attestation favours fast nodes. This means that if malicious nodes in the network are fast, they can increase their stake in the network faster than the honest nodes and eventually take control of the network. Or, in the case of Ethereum, a Byzantine leader can include signatures of malicious nodes more frequently in the proof of attestation, hence malicious nodes will be rewarded more frequently. Also, let me add that I don't have a definite answer to your question currently, but I think that by revising the protocol assumptions and incentive mechanism and using a game-theoretical approach this problem can be resolved.\n\n\u003e An honest node should wait for a specific number of child votes (to make sure everyone is voting on the same proposal) before voting, but does not need to provide any cryptographic proof. Though we build a threshold signature from root committee members and its children, but not from the whole tree. As long as a sufficient number of nodes follow the protocol we should be fine. I am working on protocol proofs. Also, I think bugs should be discovered during the development and testing phase. Changing the protocol to detect potential bugs might not be a good practice.\n\n### Doesn't having randomly distributed malicious nodes (say there is a 20%) increase the odds that over a third of a committee end up being from those malicious ones? It seems intuitive: since a 20% at the global scale is always \u003c1/3, but when randomly distributed there is always a non-zero chance they end up in a single group, thus affecting liveness more and more the closer we get to that global 1/3. Consequently, if I'm understanding the algorithm correctly, it would have worse liveness guarantees than classical pBFT, say with a randomly-selected committee from the total set. - Alvaro\n\n\u003e `Alexander:` We assume that the fraction of malicious nodes is $1/4$, and given that we choose committee sizes (which will depend on the total number of nodes) appropriately, this guarantees that with high probability we are below $1/3$ in each committee.\n\n\u003e `Alvaro:` OK, but then the global guarantee is below the current "standard" of 1/3 of malicious nodes, and even then we are talking about non-zero probabilities that a committee has the power to slow down consensus via requiring reformation of committees (is this right?)\n\n\u003e `Alexander:` This is the price we pay to improve scalability. Also, these probabilities of failure can be very low.\n\n### What happens in Carnot when one committee is taken over by \u003e1/3 intra-committee Byzantine nodes? - Alvaro\n\n\u003e `Moh:` When there is a failure the overlay is recalculated. 
By gradually increasing the fault tolerance by a small value, the probability of failure of a committee slightly increases, but upon recalculating the correct overlay, inactive nodes that caused the failure of the previous overlay (when no committee has more than 1/3 Byzantine nodes) will be slashed.\n\n\n\n## Synchronicity\n\n### How do we guarantee synchronicity? In particular, how do we avoid that in a big network different nodes see a proposal with $2c+1$ votes but different votes, and thus a different random seed? - Giacomo\n\n\u003e `Moh:` The assumption is that there exists some known finite time bound Δ and a special event called GST (Global Stabilization Time) such that:\n\n\u003e The adversary must cause the GST event to eventually happen after some unknown finite time. Any message sent at time x must be delivered by time $\\Delta + \\text{max}(x,GST)$. In the partial synchrony model, the system behaves asynchronously till GST and synchronously after GST.\n\n\u003e Moreover, votes travel one level at a time from the tree leaves to the tree root. We only need the proof of votes of the root+child committees to conclude with a high probability that the majority of nodes have voted.\n\n### That's a timeout? How does this work exactly without timing assumptions? Trying to find this in the document - Alvaro\n\n\u003e `Moh:` Each committee only verifies the votes of its child committees. Once it has verified 2/3rd of the votes of its child members, it then sends its vote to its parent. In this way each layer of the tree verifies (attests to) the votes of the layer below. Thus, a node does not have to collect and verify 2/3rd of all thousands of votes (as done in other responsive BFTs) but only those from its child nodes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["Carnot","consensus"]},"/private/roadmap/consensus/candidates/carnot/overview":{"title":"Carnot Overview","content":"\nCarnot (formerly LogosBFT) is a Byzantine Fault Tolerant (BFT) [consensus](roadmap/consensus/index.md) candidate for the Nomos Network that utilizes Fountain Codes and a committee tree structure to optimize message propagation in the presence of a large number of nodes, while maintaining high throughput and fast finality. More specifically, these are the research contributions in Carnot. To our knowledge, Carnot is the first consensus protocol that achieves all of these properties together:\n\n1. Scalability: Carnot is highly scalable, scaling to thousands of nodes.\n2. Responsiveness: The ability of a protocol to operate at the speed of the wire rather than at a maximum delay (block delay, slot time, etc.) is called responsiveness. Responsiveness reduces latency and helps Carnot achieve fast finality. Moreover, it improves Carnot's resilience against adversaries that can slow down network traffic. \n3. Fork avoidance: Carnot avoids the formation of forks in the happy path. Fork formation has the following adverse consequences, which Carnot avoids:\n 1. Wastage of resources on orphan blocks and reduced throughput with increased latency for transactions in orphan blocks\n 2. 
Increased attack vector on PoS as attackers can employ a strategy to force the network to accept their fork resulting in increased stake for adversaries.\n\n- [FAQ](FAQ.md): Here is a page that tracks various questions people have around Carnot.\n\n## Work Streams\n\n### Current State of the Art\nAn ongoing survey of the current state of the art around Consensus Mechanisms and their peripheral dependencies is being conducted by Tuanir, and can be found in the following WIP Overleaf document: \n- [WIP Consensus SoK](https://www.overleaf.com/project/633acc1acaa6ffe456d1ab1f)\n\n### Committee Tree Overlay\nThe basis of Carnot is dependent upon establishing an committee overlay tree structure for message distribution. \n\nAn overview video can be found in the following link: \n- [Carnot Overview by Moh during Offsite](https://drive.google.com/file/d/17L0JPgC0L1ejbjga7_6ZitBfHUe3VO11/view?usp=sharing)\n\nThe details of this are being worked on by Moh and Alexander and can be found in the following overleaf documents: \n- [Moh's draft](https://www.overleaf.com/project/6341fb4a3cf4f20f158afad3)\n- [Alexander's notes on the statistical properties of committees](https://www.overleaf.com/project/630c7e20e56998385e7d8416)\n- [Alexander's python code for computing committee sizes](https://github.com/AMozeika/committees)\n\nA simulation notebook is being worked on by Corey to investigate the properties of various tree overlay structures and estimate their practical performance:\n- [Corey's Overlay Jupyter Notebook](https://github.com/logos-co/scratch/tree/main/corpetty/committee_sim)\n\n#### Failure Recovery\nThere exists a timeout that triggers an overlay reconfiguration. Currently work is being done to calculate the probabilities of another failure based on a given percentage of byzantine nodes within the network. \n- [Recovery Failure Probabilities]() - LINK TO WORK HERE\n\n### Random Beacon\nA random beacon is required to choose a leader and establish a seed for defining the overlay tree. Marcin is working on the various avenues. His previous presentations can be found in the following presentation slides (in chronological order):\n- [Intro to Multiparty Random Beacons](https://cloud.logos.co/index.php/s/b39EmQrZRt5rrfL)\n- [Circles of Trust](https://cloud.logos.co/index.php/s/NXJZX8X8pHg6akw)\n- [Compact Certificates of Knowledge](https://cloud.logos.co/index.php/s/oSJ4ykR4A55QHkG)\n\n### Erasure Coding (LT Codes / Fountain Codes / Raptor Codes)\nIn order to reduce message complexity during propagation, we are investigating the use of Luby Transform (LT) codes, more specifically [Fountain Codes](https://en.wikipedia.org/wiki/Fountain_code), to break up the block to be propagated to validators and recombined by local peers within a committee. \n- [LT Code implementation in Rust](https://github.com/chrido/fountain) - unclear about legal status of LT or Raptor Codes, it is currently under investigation.\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","Carnot"]},"/private/roadmap/consensus/candidates/claro":{"title":"Claro: Consensus Candidate","content":"\n\n\n**Claro** (formerly Glacier) is a consensus candidate for the Logos network that aims to be an improvement to the Avalanche family of consensus protocols. \n\n\n### Implementations\nThe protocol has been implemented in multiple languages to facilitate learning and testing. 
The individual code repositories can be found in the following links:\n- Rust (reference)\n- Python\n- Common Lisp\n\n### Simulations/Experiments/Analysis\nIn order to test the performance of the protocol, and how it stacks up against the Avalanche family of protocols, we have performed a multitude of simulations and experiments under various assumptions. \n- [Alvaro's initial Python implementations and simulation code](https://github.com/status-im/consensus-models)\n\n### Specification\nCurrently the Claro consensus protocol is being drafted into a specification so that other implementations can be created. Its draft resides under [Vac](https://vac.dev) and can be tracked [here](https://github.com/vacp2p/rfc/pull/512/).\n\n### Additional Information\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","claro"]},"/private/roadmap/consensus/development/overview":{"title":"Development Work","content":"","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/development/prototypes":{"title":"Consensus Prototypes","content":"\nConsensus Prototypes is a collection of Rust implementations of the [Consensus Candidates](tags/candidates)\n\n## Tiny Node\n\n\n## Required Roles\n- Lead Developer (filled)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/overview":{"title":"Consensus Work","content":"\nConsensus is the foundation of the network. It is how a group of peer-to-peer nodes understands how to agree on information in a distributed way, particularly in the presence of Byzantine actors. \n\n## Consensus Roadmap\n### Consensus Candidates\n- [Carnot](private/roadmap/consensus/candidates/carnot/overview.md) - Carnot is the current leading consensus candidate for the Nomos network. It is designed to maximize the efficiency of message dissemination while supporting hundreds of thousands of full validators. It gets its name from the thermodynamic concept of the [Carnot Cycle](https://en.wikipedia.org/wiki/Carnot_cycle), which defines maximal efficiency of work from heat through iterative gas expansions and contractions. \n- [Claro](claro.md) - Claro is a variant of the Avalanche Snow family of protocols, designed to be more efficient in the decision-making process by leveraging the concept of \"confidence\" across peer responses. \n\n\n### Theoretical Analysis\n- [snow-family](snow-family.md)\n\n### Development\n- [prototypes](prototypes.md)\n\n## Open Roles\n- [distributed-systems-researcher](distributed-systems-researcher.md)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus"]},"/private/roadmap/consensus/theory/overview":{"title":"Consensus Theory Work","content":"\nThis track of work is dedicated to creating theoretical models of distributed consensus in order to evaluate them from a mathematical standpoint. 
\n\n## Navigation\n- [Snow Family Analysis](snow-family.md)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory"]},"/private/roadmap/consensus/theory/snow-family":{"title":"Theoretical Analysis of the Snow Family of Consensus Protocols","content":"\nIn order to evaluate the properties of the Avalanche family of consensus protocols more rigorously than the original [whitepapers](), we work to create an analytical framework to explore and better understand the theoretical boundaries of the underlying protocols, and under what parameterization they will break against a set of adversarial strategies.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory","snow"]},"/private/roadmap/networking/carnot-waku-specification":{"title":"A Specification proposal for using Waku for Carnot Consensus","content":"\n##### Definition Reference \n- $k$ - size of a given committee\n- $n_C$ - number of committees in the overlay, or nodes in the tree\n- $d$ - depth of the overlay tree\n- $n_d$ - number of committees at a given depth of the tree\n\n## Motivation\nIn #Carnot, an overlay is created to facilitate message distribution and voting aggregation. This document will focus on the differentiated channels of communication for message distribution. Whether or not voting aggregation and the subsequent traversal back up the tree can utilize the same channels will be investigated later. \n\nThe overlay is described as a binary tree of committees, where an individual in each committee propagates messages to an assigned node in their two child committees of the tree, until the leaf nodes have received enough information to reconstitute the proposal block. \n\nThis communication protocol will naturally form \"pools of information streams\" that people will need to listen to in order to do their assigned work:\n- inner committee communication\n- parent-child chain communication\n- initial leader distribution\n\n### **inner committee communication** \nAll members of a given committee will need to gossip with each other in order to reform the initial proposal block.\n- This results in $n_C$ communication pools of size $k$.\n\n### **parent-child chain communication** \nThe formation of the committee and the lifecycle of a chunk of erasure coded data forms a number of \"parent-child\" chains. \n- If we completely minimize the communication between committees, then this results in $k$ communication pools of size $n_C$.\n- It is not clear if individual levels of the tree need to \"execute\" the message to their children, or if the root committee can broadcast to everyone within its assigned parent-chain communication pool at the same time.\n- It is also unclear if individual levels of the tree need to send independent messages to each of their children, or if a unified communication pool can be leveraged at the tree level. This results in $d$ communication pools of size $n_d$. \n\n### **initial leader distribution**\nFor each proposal, a leader needs to distribute the erasure coded proposal block to the root committee.\n- This results in a single communication pool of size $k(+1)$.\n- The $(+1)$ above is the leader, who could also be a part of the root committee. The leader changes with each block proposal, and we seek to minimize the time between leader selection and a round start. Thus, this results in a requirement that each node in the network must maintain a connection to every node in the root committee. 
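\n\nA small, hypothetical sketch of the bookkeeping above (the helper name, the perfect-binary-tree assumption, and the example numbers are ours, not part of the specification): it derives $n_C$ from the depth $d$ and tallies the three kinds of communication pools for a committee size $k$.\n\n```python\n# Hypothetical tally of the communication pools described above, assuming a\n# perfect binary committee tree of depth d (root at depth 0) and committee size k.\n\ndef overlay_pools(k: int, d: int) -\u003e dict:\n    n_c = 2 ** (d + 1) - 1  # committees in a perfect binary tree of depth d\n    return {\n        \"inner_committee\": {\"count\": n_c, \"size\": k},        # n_C pools of k members\n        \"parent_child_chains\": {\"count\": k, \"size\": n_c},    # k chains spanning n_C committees\n        \"leader_distribution\": {\"count\": 1, \"size\": k + 1},  # leader plus the root committee\n    }\n\n# Example: committees of 256 nodes and depth 3 give 15 committees.\nprint(overlay_pools(k=256, d=3))\n```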
\n\n## Proposal\nThis part of the document will attempt to propose using various aspects of Waku, to facilitate both the setup of the above-mentioned communication pools as well as encryption schemes to add a layer of privacy (and hopefully efficiency) to message distribution. \n\nWe seek to minimize the availability of data such that an individual has only the information to do his job and nothing more.\n\nWe also seek to minimize the amount of messages being passed such that eventually everyone can reconstruct the initial proposal block\n\n`???` for Waku-Relay, 6 connections is optimal, resulting in latency ???\n\n`???` Is it better to have multiple pubsub topics with a simple encryption scheme or a single one with a complex encryption scheme\n\nAs there seems to be a lot of dynamic change from one proposal to the next, I would expect [`noise`](https://vac.dev/wakuv2-noise) to be a quality candidate to facilitate the creation of secure ephemeral keys in the to-be proposed encryption scheme. \n\nIt is also of interest how [`contentTopics`](https://rfc.vac.dev/spec/23/) can be leveraged to optimize the communication pools. \n\n## Whiteboard diagram and notes\n![Whiteboard Diagram](images/Overlay-Communications-Brainstorm.png)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku","carnot","networking","consensus"]},"/private/roadmap/networking/overview":{"title":"P2P Networking Overview","content":"\nThis page summarizes the work around the P2P networking layer of the Nomos project.\n\n## Waku\n[Waku](https://waku.org) is an privacy-preserving, ephemeral, peer-to-peer (P2P) messaging suite of protocols which is developed under [Vac](https://vac.dev) and maintained/productionized by the [Logos Collective](https://logos.co). \n\nIt is hopeful that Nomos can leverage the work of the Waku project to provide the P2P networking layer and peripheral services associated with passing messages around the network. Below is a list of the associated work to investigate the use of Waku within the Nomos Project. \n\n### Scalability and Fault-Tolerance Studies\nCurrently, the amount of research and analysis of the scalability of Waku is not sufficient to give enough confidence that Waku can serve as the networking layer for the Nomos project. Thusly, it is our effort to push this analysis forward by investigating the various boundaries of scale for Waku. Below is a list of endeavors in this direction which we hope serves the broader community: \n- [Status' use of Waku study w/ Kurtosis](status-waku-kurtosis.md)\n- [Using Waku for Carnot Overlay](carnot-waku-specification.md)\n\n### Rust implementations\nWe have created and maintain a stop-gap solution to using Waku with the Rust programming language, which is wrapping the [go-waku](https://github.com/status-im/go-waku) library in Rust and publishing it as a crate. This library allows us to do tests with our [Tiny Node](roadmap/development/prototypes.md#Tiny-Node) implementation more quickly while also providing other projects in the ecosystem to leverage Waku within their Rust codebases more quickly. \n\nIt is desired that we implement a more robust and efficient Rust library for Waku, but this is a significant amount of work. 
\n\nLinks:\n- [Rust bindings to go-waku repo](https://github.com/waku-org/waku-rust-bindings)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","overview"]},"/private/roadmap/networking/status-network-agents":{"title":"Status Network Agents Breakdown","content":"\nThis page creates a model to describe the impact of the various clients within the Status ecosystem by describing their individual contribution to the messages within the Waku network they leverage. \n\nThis model will serve to create a realistic network topology while also informing the appropriate _dimensions of scale_ that are relevant to explore in the [Status Waku scalability study](status-waku-kurtosis.md).\n\nStatus has three main clients that users interface with (in increasing \"network weight\" order):\n- Status Web\n- Status Mobile\n- Status Desktop\n\nEach of these clients has differing (on average) resources available to it, and thus provides and consumes different Waku protocols and services within the Status network. Here we will detail their associated messaging impact on the network using the following model (see the sketch after the list below):\n\n```\nAgent\n - feature\n - protocol\n - contentTopic, messageType, payloadSize, frequency\n```\n\nBy describing all `Agents` and their associated feature list, we should be able to do the following:\n\n- Estimate how much impact an individual `Agent` has on the Status network per unit time\n- Create a realistic network topology and usage within a simulation framework (_e.g._ Kurtosis)\n- Facilitate a Status Specification of `Agents`\n- Set an example for future agent-based modeling and simulation work for the Waku protocol suite 
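\n\nA hypothetical sketch of that model in code (the class names, content topics, and numbers below are illustrative placeholders, not a Status specification):\n\n```python\n# Hypothetical encoding of the Agent model above; the fields mirror the\n# feature -\u003e protocol -\u003e contentTopic/messageType/payloadSize/frequency breakdown.\nfrom dataclasses import dataclass\n\n@dataclass\nclass Feature:\n    protocol: str        # e.g. \"waku-relay\"\n    content_topic: str   # placeholder topic, not a real Status topic\n    message_type: str\n    payload_bytes: int\n    msgs_per_hour: float\n\n@dataclass\nclass Agent:\n    name: str\n    features: list[Feature]\n\n    def bytes_per_hour(self) -\u003e float:\n        # Rough per-agent network impact: payload size times message frequency, summed.\n        return sum(f.payload_bytes * f.msgs_per_hour for f in self.features)\n\ndesktop = Agent(\"Status Desktop\", [\n    Feature(\"waku-relay\", \"/example/1/chat/proto\", \"chat-message\", 512, 60.0),\n    Feature(\"waku-store\", \"/example/1/history/proto\", \"history-response\", 4096, 5.0),\n])\nprint(desktop.bytes_per_hour())  # 51200.0 bytes/hour with these made-up numbers\n```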
\n\n## Status Web\n\n## Status Mobile\n\n## Status Desktop\nStatus Desktop serves as the backbone for the Status Network, as the software runs on hardware that has more available resources, typically has more stable network and robust connections, and generally has a drastically lower churn (or none at all). This results in it running the most Waku protocols for longer periods of time, resulting in the heaviest usage of the Waku network w.r.t. messaging. \n\nHere is the model breakdown of its usage:\n```\nStatus Desktop\n - Prekey bundle broadcast\n - Account sync\n - Historical message delivery\n - Waku-Relay (answering message queries)\n - Message propagation\n - Waku-Relay\n - Waku-Lightpush (receiving)\n```","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["status","waku","scalability"]},"/private/roadmap/networking/status-waku-kurtosis":{"title":"Status' use of Waku - A Scalability Study","content":"\n[Status](https://status.im) is the largest consumer of the Waku protocol, leveraging it for their entire networking stack. Their upcoming release of Status Desktop and the associated Communities product will heavily push the limits of what Waku can do. As mentioned in the [Networking Overview](private/roadmap/networking/overview.md) page, rigorous scalability studies have yet to be conducted on Waku (v2). \n\nWhile these studies most immediately benefit the Status product suite, it behooves the Nomos Project to assist, as the lessons learned immediately inform us of the limits of what the Waku protocol suite can handle, and how that fits within our [Technical Requirements](private/requirements/overview.md).\n\nThis work has been kicked off as a partnership with the [Kurtosis](https://kurtosis.com) distributed systems development platform. It is our hope that the experience and acumen gained during this partnership and study will serve us in the future with respect to Nomos development and, more broadly, all projects under the Logos Collective. \n\nAs such, here is an overview of the various resources towards this endeavor:\n- [Status Network Agent Breakdown](status-network-agents.md) - A document that describes the archetypal agents that participate in the Status Network and their associated Waku consumption.\n- [Wakurtosis repo](https://github.com/logos-co/wakurtosis) - A Kurtosis module to run scalability studies\n- [Waku Topology Test repo](https://github.com/logos-co/Waku-topology-test) - a Python script that facilitates setting up a reasonable network topology for the purpose of injecting the network configuration into the above Kurtosis repo\n- [Initial Vac forum post introducing this work](https://forum.vac.dev/t/waku-v2-scalability-studies/142)\n- [Waku Github Issue detailing work progression](https://github.com/waku-org/pm/issues/2)\n - this is also a place to maintain communications of progress\n- [Initial Waku V2 theoretical scalability study](https://vac.dev/waku-v1-v2-bandwidth-comparison)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","scalability","waku"]},"/private/roadmap/virtual-machines/overview":{"title":"overview","content":"\n## Motivation\nLogos seeks to use a privacy-first virtual machine for transaction execution. We believe this can only be achieved through zero-knowledge. The majority of current work in the field focuses on the aggregation and subsequent verification of transactions. This leads us to explore the research and development of a privacy-first virtual machine. \n\nLINK TO APPROPRIATE NETWORK REQUIREMENTS HERE\n\n#### Educational Resources\n- primer on Zero Knowledge Virtual Machines - [link](https://youtu.be/GRFPGJW0hic)\n\n### Implementations:\n- TinyRAM - link\n- CairoVM\n- zkSync\n- Hermez\n- [MIDEN](https://polygon.technology/solutions/polygon-miden/) (Polygon)\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t- negatives:\n\t\t- not optimized for the EVM, where most tooling currently exists\n\n## General Building Blocks of a ZK-VM\n- CPU\n\t- modeled with \"execution trays\"\n- RAM\n\t- overhead to look out for\n\t\t- range checks\n\t\t- bitwise operations\n\t\t- hashing\n- Specialized circuits\n- Recursion\n\n## Approaches\n- zk-WASM\n- zk-EVM\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t\t- https://youtu.be/2MXHgUGEsHs - Why use the RISC Zero zkVM?\n\t- negatives:\n\t\t- not optimized for the EVM, where most tooling currently exists\n\n## General workstreams\n- bytecode compiler\n- zero-knowledge circuit design\n- opcode architecture (???)\n- engineering\n- required proof system\n- control flow\n\t- MAST (as used in MIDEN)\n\n## Roles\n- [ZK Research Engineer](zero-knowledge-research-engineer.md)\n- Senior Rust Developer\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["virtual machines","zero knowledge"]},"/private/roles/distributed-systems-researcher":{"title":"Open Role: Distributed Systems Researcher","content":"\n\n## About Status\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. 
\n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. \n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception. Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n \n\n## Who are we?\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the Status Network. We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality\n\n## The job\n\n**Responsibilities:**\n- This role is dedicated to pure research\n- Primarily, ensuring that solutions are sound and diving deeper into their formal definition.\n- Additionally, he/she would be regularly going through papers, bringing new ideas and staying up-to-date.\n- Designing, specifying and verifying distributed systems by leveraging formal and experimental techniques.\n- Conducting theoretical and practical analysis of the performance of distributed systems.\n- Designing and analysing incentive systems.\n- Collaborating with both internal and external customers and the teams responsible for the actual implementation.\n- Researching new techniques for designing, analysing and implementing dependable distributed systems.\n- Publishing and presenting research results both internally and externally.\n\n \n**Ideally you will have:**\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]\n- Strong background in Computer Science and Math, or a related area.\n- Academic background (The ability to analyze, digest and improve the State of the Art in our fields of interest. 
Specifically, familiarity with formal proofs and/or the scientific method.)\n- Distributed Systems with a focus on Blockchain\n- Analysis of algorithms\n- Familiarity with Python and/or complex systems modeling software\n- Deep knowledge of algorithms (much more academic, such as have dealt with papers, moving from research to pragmatic implementation)\n- Experience in analysing the correctness and security of distributed systems.\n- Familiarity with the application of formal method techniques. \n- Comfortable with “reverse engineering” code in a number of languages including Java, Go, Rust, etc. Even if no experience in these languages, the ability to read and \"reverse engineer\" code of other projects is important.\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Capable of deep and creative thinking.\n- Passionate about blockchain technology in general.\n- Able to manage the uncertainties and ambiguities associated with working in a remote-first, distributed, decentralised environment.\n- A strong alignment to our principles: https://status.im/about/#our-principles\n\n\n**Bonus points:**\n- Experience working remotely. \n- Experience working for an open source organization. \n- TLA+/PRISM would be desirable.\n- PhD in Computer Science, Mathematics, or a related area. \n- Experience Multi-Party Computation and Zero-Knowledge Proofs\n- Track record of scientific publications.\n- Previous experience in remote or globally distributed teams.\n\n## Hiring process\n\nThe hiring process for this role will be:\n- Interview with our People Ops team\n- Interview with Alvaro (Team Lead)\n- Interview with Corey (Chief Security Officer)\n- Interview with Jarrad (Cofounder) or Daniel \n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n \n\n## Compensation\n\nWe are happy to pay salaries in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: https://people-ops.status.im/tag/perks/\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role"]},"/private/roles/rust-developer":{"title":"Rust Developer","content":"\n# Role: Rust Developer\nat Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is an organization building the tools and infrastructure for the advancement of a secure, private, and open web3. We have been completely distributed since inception. Our team is currently 100+ core contributors strong and welcomes a growing number of community members from all walks of life, scattered all around the globe. We care deeply about open source, and our organizational structure has a minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**About Logos**\n\nA group of Status Contributors is also involved in a new community lead project, called Logos, and this particular role will enable you to also focus on this project. Logos is a grassroots movement to provide trust-minimized, corruption-resistant governing services and social institutions to underserved citizens. 
\n\nLogos’ infrastructure will provide a base for the provisioning of the next-generation of governing services and social institutions - paving the way to economic opportunities for those who need them most, whilst respecting basic human rights through the network’s design.You can read more about Logos here: [in this small handbook](https://github.com/acid-info/public-assets/blob/master/logos-manual.pdf) for mindful readers like yourself.\n\n**Who are we?**\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the [Status Network](https://statusnetwork.com/). We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality.\n\n**Responsibilities:**\n\n- Develop and maintenance of internal rust libraries\n- 1st month: comfortable with dev framework, simulation app. Improve python lib?\n- 2th-3th month: Start dev of prototype node services\n\n**Ideally you will have:**\n\n- “Extensive” Rust experience (Async programming is a must) \n Ideally they have some GitHub projects to show\n- Experience with Python\n- Strong competency in developing and maintaining complex libraries or applications\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles) \n \n\n**Bonus points if**\n\n-  E.g. Comfortable working remotely and asynchronously\n-  Experience working for an open source organization.  \n-  Peer-to-peer or networking experience\n\n_[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]_\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)\n\n**Hiring Process** \n\nThe hiring process for this role will be:\n\n1. Interview with Maya (People Ops team)\n2. Interview with Corey (Logos Program Owner)\n3. Interview with Daniel (Engineering Lead)\n4. Interview with Jarrad (Cofounder)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role","engineering","rust"]},"/private/roles/zero-knowledge-research-engineer":{"title":"Zero Knowledge Research Engineer","content":"at Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. \n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 
\n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception.  Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**Who are we**\n\n[Vac](http://vac.dev/) **builds** [public good](https://en.wikipedia.org/wiki/Public_good) protocols for the decentralized web.\n\nWe do applied research based on which we build protocols, libraries and publications. Custodians of protocols that reflect [a set of principles](http://vac.dev/principles) - liberty, privacy, etc.\n\nYou can see a sample of some of our work here: [Vac, Waku v2 and Ethereum Messaging](https://vac.dev/waku-v2-ethereum-messaging), [Privacy-preserving p2p economic spam protection in Waku v2](https://vac.dev/rln-relay), [Waku v2 RFC](https://rfc.vac.dev/spec/10/). Our attitude towards ZK: [Vac \u003c3 ZK](https://forum.vac.dev/t/vac-3-zk/97).\n\n**The role**\n\nThis role will be part of a new team that will make a provable and private WASM engine that runs everywhere. As a research engineer, you will be responsible for researching, designing, analyzing and implementing circuits that allow for proving private computation of execution in WASM. This includes having a deep understanding of relevant ZK proof systems and tooling (zk-SNARK, Circom, Plonk/Halo 2, zk-STARK, etc), as well as different architectures (zk-EVM Community Effort, Polygon Hermez and similar) and their trade-offs. You will collaborate with the Vac Research team, and work with requirements from our new Logos program. As one of the first hires of a greenfield project, you are expected to take on significant responsibility,  while collaborating with other research engineers, including compiler engineers and senior Rust engineers. 
\n \n\n**Key responsibilities** \n\n- Research, analyze and design proof systems and architectures for private computation\n- Be familiar and adapt to research needs zero-knowledge circuits written in Rust Design and implement zero-knowledge circuits in Rust\n- Write specifications and communicate research findings through write-ups\n- Break down complex problems, and know what can and what can’t be dealt with later\n- Perform security analysis, measure performance of and debug circuits\n\n**You ideally will have**\n\n- Very strong academic or engineering background (PhD-level or equivalent in industry); relevant research experience\n- Experience with low level/strongly typed languages (C/C++/Go/Rust or Java/C#)\n- Experience with Open Source software\n- Deep understanding of Zero-Knowledge proof systems (zk-SNARK, circom, Plonk/Halo2, zk-STARK), elliptic curve cryptography, and circuit design\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles)\n\n**Bonus points if** \n\n- Experience in provable and/or private computation (zkEVM, other ZK VM)\n- Rust Zero Knowledge tooling\n- Experience with WebAssemblyWASM\n\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter].\n\n**Hiring process** \n\nThe hiring process for this role will be:\n\n1. Interview with Angel/Maya from our Talent team\n2. Interview with team member from the Vac team\n3. Pair programming task with the Vac team\n4. Interview with Oskar, the Vac team lead\n5. Interview with Jacek, Program lead\n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["engineering","role","zero knowledge"]},"/roadmap/acid/milestones-overview":{"title":"Comms Milestones Overview","content":"\n- [Comms Roadmap](https://www.notion.so/eb0629444f0a431b85f79c569e1ca91b?v=76acbc1631d4479cbcac04eb08138c19)\n- [Comms Projects](https://www.notion.so/b9a44ea08d2a4d2aaa9e51c19b476451?v=f4f6184e49854fe98d61ade0bf02200d)\n- [Comms planner deadlines](https://www.notion.so/2585646d01b24b5fbc79150e1aa92347?v=feae1d82810849169b06a12c849d8088)","lastmodified":"2023-08-21T15:49:54.901241828Z","tags":["milestones"]},"/roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","content":"\n## Leads roundup - acid\n\n**Al / Comms**\n\n- Status app relaunch comms campaign plan in the works. Approx. 
date for launch 31.08.\n- Logos comms + growth plan post launch is next up TBD.\n- Will be waiting for specs for data room, raise etc.\n- Hires: split the role for content studio to be more realistic in getting top level talent.\n\n**Matt / Copy**\n\n- Initiative updating old documentation like CC guide to reflect broader scope of BUs\n- Brand guidelines/ modes of presentation are in process\n- Wikipedia entry on network states and virtual states is live on \n\n**Eddy / Digital Comms**\n\n- Logos Discord will be completed by EOD.\n- Codex Discord will be done tomorrow.\n - LPE rollout plan, currently working on it, will be ready EOW\n- Podcast rollout needs some\n- Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\n**Amir / Studio**\n\n- Started execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\n- Hires: still looking for 3 positions with main focus on developer side. \n\n**Jonny / Podcast**\n\n- Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\n- First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.\n\n**Louisa / Events**\n\n- Global strategy paper for wider comms plan.\n- Template for processes and executions when preparing events.\n- Decision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\n - Seoul Q4 hackathon is already in the works. Needs bounty planning.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","content":"\n## **Top level priorities:**\n\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\n\n## **Movement Building:**\n\n- Logos collective comms plan skeleton ready - will be applied for all BUs as next step\n- Goal is to have plan + overview to set realistic KPIs and expectations\n- Discord Server update on various views\n- Status relaunch comms plan is ready for input from John et al.\n- Reach out to BUs for needs and deliverables\n\n## **TD Studio**\n\nFull focus on LPE:\n- On track, target of end of august\n- review of options, more diverse landscape of content\n- Episodes page proposals\n- Players in progress\n- refactoring from prev code base\n- structure of content ready in GDrive\n\n## **Copy**\n\n- Content around LPE\n- Content for podcast launches\n- Status launch - content requirements to receive\n- Organization of doc sites review\n- TBD what type of content and how the generation workflows will look like\n\n## **Podcast**\n\n- Good state in editing and producing the shows\n- First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included. \n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n- 3 recorded for HIO, motion graphics in progress\n- First E2E podcast ready in 2 weeks for LPE\n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n\n## **DC Studio**\n\n- Brand guidelines for HiO are ready and set. 
Thanks `Shmeda`!\n- Logos State branding assets are being developed\n- Presentation templates update\n\n## **Events**\n\n- Network State event probably in Istanbul in November re: Devconnect will confirm shortly.\n- Program elements and speakers are top priority\n- Hackathon in Seoul in Q1 2024 - late Febuary probably\n- Jarrad will be speaking at HCPP and EthRome\n- Global event strategy written and in review\n- Lou presented social media and event KPIs on Paris event\n\n## **CRM \u0026 Marketing tool**\n\n- Get feedback from stakeholders and users\n- PM implementation to be planned (+- 3 month time TBD) with working group\n- LPE KPI: Collecting email addresses of relevant people\n- Careful on how we manage and use data, important for BizDev\n- Careful on which segments of the project to manage using the CRM as it can be very off brand","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","content":"\n## Milestones\n- [Zenhub Tracker](https://app.zenhub.com/workspaces/engineering-62cee4c7a335690012f826fa/roadmap)\n- [Miro Tracker](https://miro.com/app/board/uXjVOtZ40xI=/?share_link_id=33106977104)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones-overview"]},"/roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","content":"\n## Codex update 07/12/2023 to 07/21/2023\n\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc...\n\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing \u0026 infra related work going on.\n\nWe're also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\n\n### DevOps/Infrastructure:\n\n- Adopted nim-codex Docker builds for Dist Tests.\n- Ordered Dedicated node on Hetzner.\n- Configured Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Created Geth and Prometheus Docker images for Dist-Tests.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Set up Ingress Controller in Dist-Tests cluster.\n\n### Testing:\n\n- Set up deployer to gather metrics.\n- Debugging and identifying potential deadlock in the Codex client.\n- Added metrics, built image, and ran tests.\n- Updated dist-test log for Kibana compatibility.\n- Ran dist-tests on a new master image.\n- Debugging continuous tests.\n\n### Development:\n\n- Worked on codex-dht nimble updates and fixing key format issue.\n- Updated CI and split Windows CI tests to run on two CI machines.\n- Continued updating dependencies in codex-dht.\n- Fixed decoding large manifests ([PR #479](https://github.com/codex-storage/nim-codex/pull/497)).\n- Explored the existing implementation of NAT Traversal techniques in `nim-libp2p`.\n\n### Research\n\n- Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n - https://eprint.iacr.org/2021/1500.pdf\n - https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html\n - https://eprint.iacr.org/2021/1544.pdf\n- Onboarding Balázs as our ZK researcher/engineer\n- Continued 
research in DAS related topics\n - Running simulation on newly setup infrastructure\n- Devised a new direction to reduce metadata overhead and enable remote verification https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n- Looked into NAT Traversal ([issue #166](https://github.com/codex-storage/nim-codex/issues/166)).\n\n### Cross-functional (Combination of DevOps/Testing/Development):\n\n- Fixed discovery related issues.\n- Planned Codex Demo update for the Logos event and prepared environment for the demo.\n- Described requirements for Dist Tests logs format.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.\n- Hetzner Dedicated server was configured.\n- Set up Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper in Dist-Tests cluster.\n- Setup Grafana in Dist-Tests cluster.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Setup Ingress Controller in Dist-Tests cluster.\n\n---\n\n#### Conversations\n1. zk_id _—_ 07/24/2023 11:59 AM\n\u003e \n\u003e We've explored VDI for rollups ourselves in the last week, curious to know your thoughts\n2. dryajov _—_ 07/25/2023 1:28 PM\n\u003e \n\u003e It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it's definitely worth digging into. But I'm not sure what exactly you're interested in, in the context of rollups...\n1. zk_id _—_ 07/25/2023 3:28 PM\n \n The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n \n2. dryajov _—_ 07/25/2023 8:31 PM\n \n \u003e I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal.\n \n Yeah, great question. What follows is strictly IMO, as I haven't seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n \n - (A)VID - **dispersing** and storing data in a verifiable manner\n - Sampling - verifying already **dispersed** data\n \n tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. 
More details are here - [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\") ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network into following the chain?\n \n Dankrad Feist\n \n [Data availability checks](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html)\n \n Primer on data availability checks\n \n3. _[_8:31 PM_]_\n \n ## Dealing with dishonest majorities\n \n This is easy if all the data is downloaded by all nodes all the time, but we're trying to avoid just that. But let's assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data - can this mitigate the problem? It turns out that it can't, because proving data (un)availability isn't a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\") So, if there isn't much that can be done by detecting that a block isn't available, what good is it for? Well, nodes can still avoid following the unavailable chain and thus avoid being tricked by a dishonest majority. However, simply attesting that data has been published is not enough to prevent a dishonest majority from attacking the network. (edited)\n \n4. 
dryajov _—_ 07/25/2023 9:06 PM\n \n To complement, the relevant quote from [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\"), is:\n \n \u003e Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (\"fisherman\") has the ability to \"raise the alarm\" about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n \n The relevant quote from from [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\"), is:\n \n \u003e There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n \n Both articles are a bit old, but the intuitions still hold.\n \n\nJuly 26, 2023\n\n6. zk_id _—_ 07/26/2023 10:42 AM\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n7. _[_10:45 AM_]_\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n8. zk_id\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n ### dryajov _—_ 07/26/2023 4:42 PM\n \n Great! Glad to help anytime \n \n9. 
zk_id\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n dryajov _—_ 07/26/2023 4:43 PM\n \n Yes, I'd argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n \n10. _[_4:46 PM_]_\n \n Btw, there is probably more we can share/compare notes on in this problem space, we're looking at similar things, perhaps from a slightly different perspective in Codex's case, but the work done on DAS with the EF directly is probably very relevant for you as well \n \n\nJuly 27, 2023\n\n12. zk_id _—_ 07/27/2023 3:05 AM\n \n I would love to. Do you have those notes somewhere?\n \n13. zk_id _—_ 07/27/2023 4:01 AM\n \n all the links you have, anything, would be useful\n \n14. zk_id\n \n I would love to. Do you have those notes somewhere?\n \n dryajov _—_ 07/27/2023 4:50 PM\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n\nJuly 28, 2023\n\n16. zk_id _—_ 07/28/2023 5:47 AM\n \n Would love to see anything that is possible\n \n17. _[_5:47 AM_]_\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n18. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n dryajov _—_ 07/28/2023 4:07 PM\n \n Yes, we're also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n \n19. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n bkomuves _—_ 07/28/2023 4:44 PM\n \n my current view (it's changing pretty often :) is that there is tension between:\n \n - commitment cost\n - proof cost\n - and verification cost\n \n the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n\nJuly 29, 2023\n\n21. bkomuves\n \n my current view (it's changing pretty often :) is that there is tension between: \n \n - commitment cost\n - proof cost\n - and verification cost\n \n  the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n zk_id _—_ 07/29/2023 4:23 AM\n \n I agree. That's also my understanding (although surely much more superficial).\n \n22. 
_[_4:24 AM_]_\n \n There is also the dimension of computation vs size cost\n \n23. _[_4:25 AM_]_\n \n ie the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale confortably to 10k nodes.\n \n24. _[_4:29 AM_]_\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n\nAugust 1, 2023\n\n26. dryajov\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n Leobago _—_ 08/01/2023 1:13 PM\n \n Note much public write-ups yet. You can find some content here:\n \n - [https://blog.codex.storage/data-availability-sampling/](https://blog.codex.storage/data-availability-sampling/ \"https://blog.codex.storage/data-availability-sampling/\")\n \n - [https://github.com/codex-storage/das-research](https://github.com/codex-storage/das-research \"https://github.com/codex-storage/das-research\")\n \n \n We also have a few Jupiter notebooks but they are not public yet. As soon as that content is out we can let you know ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n Codex Storage Blog\n \n [Data Availability Sampling](https://blog.codex.storage/data-availability-sampling/)\n \n The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until\n \n GitHub\n \n [GitHub - codex-storage/das-research: This repository hosts all the ...](https://github.com/codex-storage/das-research)\n \n This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora...\n \n [](https://opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research)\n \n ![GitHub - codex-storage/das-research: This repository hosts all the ...](https://images-ext-2.discordapp.net/external/DxXI-YBkzTrPfx_p6_kVpJzvVe6Ix6DrNxgrCbcsjxo/https/opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research?width=400\u0026height=200)\n \n27. zk_id\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. 
The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n dryajov _—_ 08/01/2023 1:55 PM\n \n This might interest you as well - [https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a \"https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a\")\n \n Medium\n \n [Combining KZG and erasure coding](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a)\n \n The Hitchhiker’s Guide to Subspace  — Episode II\n \n [](https://miro.medium.com/v2/resize:fit:1200/0*KGb5QHFQEd0cvPeP.png)\n \n ![Combining KZG and erasure coding](https://images-ext-2.discordapp.net/external/LkoJxMEskKGMwVs8XTPVQEEu0senjEQf42taOjAYu0k/https/miro.medium.com/v2/resize%3Afit%3A1200/0%2AKGb5QHFQEd0cvPeP.png?width=400\u0026height=200)\n \n28. _[_1:56 PM_]_\n \n This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recoment reading the first article of the series which it also links to\n \n29. zk_id _—_ 08/01/2023 3:04 PM\n \n Thanks @dryajov @Leobago ! Much appreciated!\n \n30. _[_3:05 PM_]_\n \n Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I'm tackling starting today...\n \n31. zk_id _—_ 08/01/2023 6:34 PM\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n32. zk_id\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n Leobago _—_ 08/01/2023 6:36 PM\n \n Yes, checkout the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n33. _[_6:37 PM_]_\n \n You might find also some bugs here and there on that branch ![😅](https://discord.com/assets/b45af785b0e648fe2fb7e318a6b8010c.svg)\n \n34. 
zk_id _—_ 08/01/2023 7:44 PM\n \n Thanks!","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","content":"\n# Codex update Aug 1st\n\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n - Work breakdown and review for Ben and Tomasz (epic coming up)\n - This is required to integrate the proving system\n\n### Milestone: Block discovery and retrieval\n\n- Some initial work breakdown and milestones here - https://docs.google.com/document/d/1hnYWLvFDgqIYN8Vf9Nf5MZw04L2Lxc9VxaCXmp9Jb3Y/edit\n - Initial analysis of block discovery - https://rpubs.com/giuliano_mega/1067876\n - Initial block discovery simulator - https://gmega.shinyapps.io/block-discovery-sim/\n\n### Milestone: Distributed Client Testing\n\n- Lots of work around log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - This is a first try at running against an L2\n - Mostly done, waiting on related fixes to land before merge - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Reservations and slot management\n\n- Lots of work around slot reservation and queuing https://github.com/codex-storage/nim-codex/pull/455\n\n## Remote auditing\n\n### Milestone: Implement Poseidon2\n\n- First pass at an implementation by Balazs\n - private repo, but can give access if anyone is interested\n\n### Milestone: Refine proving system\n\n- Lots of thinking around storage proofs and proving systems\n - private repo, but can give access if anyone is interested\n\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator.\n- Implemented logical error-rates and delays to interactions between DHT clients.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","content":"\n\n# Codex update August 11\n\n---\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504\n- Work on persisting/serializing Merkle Tree is underway, PR upcoming\n\n### Milestone: Block discovery and retrieval\n\n- Continued analysis of block discovery and retrieval - https://hackmd.io/_KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n - Reviewing papers on peer sampling and related topics\n - [Wormhole Peer Sampling paper](http://publicatio.bibl.u-szeged.hu/3895/1/p2p13.pdf)\n - [Smoothcache](https://dl.acm.org/doi/10.1145/2713168.2713182)\n- Starting work on simulations based on the above work\n\n### Milestone: Distributed Client Testing\n\n- Continuing work on log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n - More related issues/PRs:\n - https://github.com/codex-storage/infra-codex/pull/20\n - https://github.com/codex-storage/infra-codex/pull/20\n- Testing and debugging Codex in the continuous testing environment\n - Debugging continuous tests [cs-codex-dist-tests/pull/44](https://github.com/codex-storage/cs-codex-dist-tests/pull/44)\n - pod labeling [cs-codex-dist-tests/issues/39](https://github.com/codex-storage/cs-codex-dist-tests/issues/39)\n\n---\n## Infra\n\n### Milestone: Kubernetes Configuration and Management\n- Move Dist-Tests cluster to OVH and 
define naming conventions\n- Configure Ingress Controller for Kibana/Grafana\n- **Create documentation for Kubernetes management**\n- **Configure Dist/Continuous-Tests Pods logs shipping**\n\n### Milestone: Continuous Testing and Labeling\n- Watch the Continuous tests demo\n- Implement and configure Dist-Tests labeling\n- Set up logs shipping based on labels\n- Improve Docker workflows and add 'latest' tag\n\n### Milestone: CI/CD and Synchronization\n- Set up synchronization by codex-storage\n- Configure Codex Storage and Demo CI/CD environments\n\n---\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - Done but merge is blocked by a few issues - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Marketplace Sales\n\n- Lots of cleanup and refactoring\n - Finished refactoring state machine PR [link](https://github.com/codex-storage/nim-codex/pull/469)\n - Added support for loading node's slots during Sale's module start [link](https://github.com/codex-storage/nim-codex/pull/510)\n\n---\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator - https://github.com/cortze/py-dht.\n\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/innovation_lab/milestones-overview":{"title":"Innovation Lab Milestones Overview","content":"\niLab Milestones can be found on the [Notion Page](https://www.notion.so/Logos-Innovation-Lab-dcff7b7a984b4f9e946f540c16434dc9?pvs=4)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","content":"\n**Logos Lab** 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\n**Milestone**: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\n\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\n\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\n\n**Next milestone**: group chat support\n\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nLink to Payggy design files:\nhttps://scene.zeplin.io/project/64ae9e965652632169060c7d\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/UtVHf2EU\n\n--- \n\n#### Conversation\n\n1. petty _—_ 07/15/2023 5:49 AM\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n2. petty\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n3. 
attila🍀 _—_ 07/15/2023 6:18 AM\n \n at the moment most of the code is in the `waku-objects-playground` repo; later we may split it into several repos. Here is the link: [https://github.com/logos-innovation-lab/waku-objects-playground](https://github.com/logos-innovation-lab/waku-objects-playground \"https://github.com/logos-innovation-lab/waku-objects-playground\")","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","content":"\n**Logos Lab** 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. \n\nStill, a lot of progress was made and the team released the first version of a color system in the form of an npm package, which lets users choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\n\n**Milestone**: group chat support\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. In the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. It will also demonstrate how to use smart contracts and multiparty transactions.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nGrayscale design:\nhttps://grayscale.design/\n\nLuminance package on npm:\nhttps://www.npmjs.com/package/@waku-objects/luminance\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/ZMU4yyWG\n\n--- \n\n### Conversation\n\n1. fryorcraken _—_ Yesterday at 10:58 PM\n \n \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n \n While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n\nAugust 3, 2023\n\n2. fryorcraken\n \n \u003e \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. 
We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n3. attila🍀 _—_ Today at 4:21 AM\n \n This is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the `status-js-api` project but that still uses whisper and the repo recommends to use `js-waku` instead ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg) [https://github.com/status-im/status-js-api](https://github.com/status-im/status-js-api \"https://github.com/status-im/status-js-api\") I also found the `56/STATUS-COMMUNITIES` spec: [https://rfc.vac.dev/spec/56/](https://rfc.vac.dev/spec/56/ \"https://rfc.vac.dev/spec/56/\") It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n \n4. fryorcraken _—_ Today at 5:32 AM\n \n The repo is status-im/status-web\n \n5. _[_5:33 AM_]_\n \n Spec is [https://rfc.vac.dev/spec/55/](https://rfc.vac.dev/spec/55/ \"https://rfc.vac.dev/spec/55/\")\n \n6. fryorcraken\n \n The repo is status-im/status-web\n \n7. attila🍀 _—_ Today at 6:05 AM\n \n As constructive feedback I can tell you that it is not trivial to find it and use it in other projects. It is presented as a React component without documentation, and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app. It seems to be using this package: [https://www.npmjs.com/package/@status-im/js](https://www.npmjs.com/package/@status-im/js \"https://www.npmjs.com/package/@status-im/js\") Which also does not have documentation. I assume that package is built from this: [https://github.com/status-im/status-web/tree/main/packages/status-js](https://github.com/status-im/status-web/tree/main/packages/status-js \"https://github.com/status-im/status-web/tree/main/packages/status-js\") This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what the requirements and high-level architecture are (does it require an Ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort into the documentation and the presentation as well, and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-11":{"title":"2023-08-11 Innovation Lab weekly","content":"\n\n# **Logos Lab** 11th of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nWe merged the group chat, but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from a product perspective. We spent the bigger part of the week fixing these. We also registered a new domain, wakuplay.im, where the latest version is deployed. 
It uses the Gnosis chain for transactions and currently the xDai and GNO tokens are supported, but it is easy to add other ERC-20 tokens now.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. It will also demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementation has started.\n\n**Next milestone**: Basic Waku Objects website\n\nWork has started on the structure of the website and the content is shaping up nicely. Implementation has started as well.\n\nDeployed version of the main branch:\nhttps://www.wakuplay.im/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["ilab-updates"]},"/roadmap/nomos/milestones-overview":{"title":"Nomos Milestones Overview","content":"\n[Milestones Overview Notion Page](https://www.notion.so/ec57b205d4b443aeb43ee74ecc91c701?v=e782d519939f449c974e53fa3ab6978c)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","content":"\n**Research**\n\n- Milestone 1: Understanding Data Availability (DA) Problem\n - High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\n - Explored the necessity and key challenges associated with DA.\n - In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\n - **Blocker:** The experimental tests for our specific EC scheme are pending, which is blocking progress on the final decision on KZG + commitments for our architecture.\n- Milestone 2: Privacy for Proof of Stake (PoS)\n - Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\n - Invested time in understanding timing attacks and how the Nym mixnet caters to these challenges.\n - Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\n**Development**\n\n- Milestone 1: Mixnet and Networking\n - Initiated integration of libp2p to be used as the full node's backend, planning to complete in the next phase.\n - Began planning the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\n- Milestone 2: Simulation Application\n - Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links:\n - Carnot Simulator pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/carnot_simulation_psuedocode.py)\n - Test pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/test_carnot_simulation.py)\n - Description of the simulation (https://www.notion.so/Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n - Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. 
The corresponding pull requests can be accessed here:\n - Simulation network fix (https://github.com/logos-co/nomos-node/pull/262)\n - Vote tally fix (https://github.com/logos-co/nomos-node/pull/268)\n - Increased run duration of integration tests (https://github.com/logos-co/nomos-node/pull/263)\n - Warding improvements (https://github.com/logos-co/nomos-node/pull/269)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","content":"\n**Nomos 31st July**\n\n[Network implementation and Mixnet]:\n\nResearch\n- Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\n- Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\n- Fixes were made on the Overlay interface.\n- Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\n- Link to libp2p PRs: https://github.com/logos-co/nomos-node/pull/278, https://github.com/logos-co/nomos-node/pull/279, https://github.com/logos-co/nomos-node/pull/280, https://github.com/logos-co/nomos-node/pull/281\n- Started working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\n\nResearch\n- Discussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\n- Reviews on the PPoS proposal were done.\n- A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\n- Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\n\nResearch\n- Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\n\n**Development**\n\n- Improved simulation application to meet test scale requirements (https://github.com/logos-co/nomos-node/pull/274).\n- Created a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\n\nResearch\n- Conducted an analysis of stored data \"degradation\" problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\n- Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","content":"\nNomos weekly report\n================\n\n### Network implementation and Mixnet:\n#### Research\n- Researched the Nym mixnet architecture in depth in order to design our prototype architecture.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1661386628)\n- Discussions about how to manage the mixnet topology.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1665101243)\n#### Development\n- Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n (Link: https://github.com/logos-co/nomos-node/pull/288)\n- Added support for libp2p in tests.\n (Link: https://github.com/logos-co/nomos-node/pull/287)\n- Added support for libp2p in nomos node.\n (Link: https://github.com/logos-co/nomos-node/pull/285)\n\n### Private PoS:\n#### Research\n- Worked 
on PPoS design and addressed potential metadata leakage due to staking and rewarding.\n- Focus on potential bribery attacks and privacy reasoning, but not much progress yet.\n- Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\n### Carnot:\n#### Research\n- Addressed two solutions for the bribery attack. Proposals pending.\n- Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\n- Modeled data decimation using a specific set of parameters and derived equations related to it.\n- Proposed solutions to address bribery attacks without compromising the protocol's scalability.\n\n### Data Availability Sampling (VID):\n#### Research\n- Analyzed data decimation in data availability problem.\n (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)\n- DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\n- Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn't fit our architecture.\n\n#### Testing and Node development:\n- Fixes and enhancements made to nomos-node.\n (Link: https://github.com/logos-co/nomos-node/pull/282)\n (Link: https://github.com/logos-co/nomos-node/pull/289)\n (Link: https://github.com/logos-co/nomos-node/pull/293)\n (Link: https://github.com/logos-co/nomos-node/pull/295)\n- Ran simulations with 10K nodes.\n- Updated integration tests in CI to use waku or libp2p network.\n (Link: https://github.com/logos-co/nomos-node/pull/290)\n- Fix for the node throughput during simulations.\n (Link: https://github.com/logos-co/nomos-node/pull/295)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","content":"\n\n# **Nomos weekly report 14th August**\n---\n\n## **Network Privacy and Mixnet**\n\n### Research\n- Mixnet architecture discussions. 
Potential agreement on architecture not very different from PoC\n- Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n### Development\n- Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]\n- Implementation of mixnode: a core module for implementing a mixnode binary\n- Implementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\n### **Private PoS**\n- No progress this week.\n\n---\n## **Data Availability**\n### Research\n- Continued analysis of node decay in data availability problem\n- Improved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]\n\n### Development\n- Library survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\n- RS \u0026 KZG benchmarking for our use case https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\n- Study documentation on Danksharding and set of questions for Leonardo [https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n---\n## **Testing, CI and Simulation App**\n\n### Development\n- Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]\n- Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]\n- CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]\n- Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]\n- Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]\n- Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/vac/milestones-overview":{"title":"Vac Milestones Overview","content":"\n[Overview Notion Page](https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632?pvs=4) - Information copied here for now\n\n## Info\n### Structure of milestone names:\n\n`vac:\u003cunit\u003e:\u003ctag\u003e:\u003cfor_project\u003e:\u003ctitle\u003e_\u003ccounter\u003e`\n- `vac` indicates it is a vac milestone\n- `unit` indicates the vac unit `p2p`, `dst`, `tke`, `acz`, `sc`, `zkvm`, `dr`, `rfc`\n- `tag` tags a specific area / project / epic within the respective vac unit, e.g. 
`nimlibp2p`, or `zerokit`\n- `for_project` indicates which Logos project the milestone is mainly for `nomos`, `waku`, `codex`, `nimbus`, `status`; or `vac` (meaning it is internal / helping all projects as a base layer)\n- `title` the title of the milestone\n- `counter` an optional counter; `01` is implicit; marked with a `02` onward indicates extensions of previous milestones\n\n## Vac Unit Roadmaps\n- [Roadmap: P2P](https://www.notion.so/Roadmap-P2P-a409c34cb95b4b81af03f60cbf32f9c1?pvs=21)\n- [Roadmap: Token Economics](https://www.notion.so/Roadmap-Token-Economics-e91f1cb58ebc4b1eb46b074220f535d0?pvs=21)\n- [Roadmap: Distributed Systems Testing (DST))](https://www.notion.so/Roadmap-Distributed-Systems-Testing-DST-4ef0d8694d3e40d6a0cfe706855c43e6?pvs=21)\n- [Roadmap: Applied Cryptography and ZK (ACZ)](https://www.notion.so/Roadmap-Applied-Cryptography-and-ZK-ACZ-00b3ba101fae4a099a2d7af2144ca66c?pvs=21)\n- [Roadmap: Smart Contracts (SC)](https://www.notion.so/Roadmap-Smart-Contracts-SC-e60e0103cad543d5832144d5dd4611a0?pvs=21)\n- [Roadmap: zkVM](https://www.notion.so/Roadmap-zkVM-59cb588bd2404e659633e008101310b5?pvs=21)\n- [Roadmap: Deep Research (DR)](https://www.notion.so/Roadmap-Deep-Research-DR-561a864c890549c3861bf52ab979d7ab?pvs=21)\n- [Roadmap: RFC Process](https://www.notion.so/Roadmap-RFC-Process-f8516d19132b41a0beb29c24510ebc09?pvs=21)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","content":"- *vc::Deep Research*\n - refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Prepared Paris talks\n - Implemented perf protocol to compare the performances with other libp2ps https://github.com/status-im/nim-libp2p/pull/925\n- *vsu::Tokenomics*\n - Fixing bugs on the SNT staking contract;\n - Definition of the first formal verification tests for the SNT staking contract;\n - Slides for the Paris off-site\n- *vsu::Distributed Systems Testing*\n - Replicated message rate issue (still on it)\n - First mockup of offline data\n - Nomos consensus test working\n- *vip::zkVM*\n - hiring\n - onboarding new researcher\n - presentation on ECC during Logos Research Call (incl. 
preparation)\n - more research on nova, considering additional options\n - Identified 3 research questions to be taken into consideration for the ZKVM and the publication\n - Researched Poseidon implementation for Nova, Nova-Scotia, Circom\n- *vip::RLNP2P*\n - finished rln contract for waku product - https://github.com/waku-org/rln-contract\n - fixed homebrew issue that prevented zerokit from building - https://github.com/vacp2p/zerokit/commit/8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\n - rln-relay: verify proofs based upon bandwidth usage - https://github.com/waku-org/nwaku/commit/3fe4522a7e9e48a3196c10973975d924269d872a\n - RLN contract audit cont' https://hackmd.io/@blockdev/B195lgIth\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","content":"\n**Last week**\n- *vc*\n - Vac day in Paris (13th)\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Paris offsite Paris (all CCs)\n- *vsu::Tokenomics*\n - Bugs found and solved in the SNT staking contract\n - attend events in Paris\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - QoS on all four infras\n - Continue work on theoretical gossipsub analysis (varying regular graph sizes)\n - Peer extraction using WLS (almost finished)\n - Discv5 testing\n - Wakurtosis CI improvements\n - Provide offline data\n- *vip::zkVM*\n - onboarding new researcher\n - Prepared and presented ZKVM work during VAC offsite\n - Deep research on Nova vs Stark in terms of performance and related open questions\n - researching Sangria\n - Worked on NEscience document (https://www.notion.so/Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\n - zerokit:\n - worked on PR for arc-circom\n- *vip::RLNP2P*\n - offsite Paris\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - EthCC \u0026 Logos event Paris (all CCs)\n- *vsu::Tokenomics*\n - Attend EthCC and side events in Paris\n - Integrate staking contracts with radCAD model\n - Work on a new approach for Codex collateral problem\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\n - Restructure the Analysis script and start modelling Status control messages\n - Split Wakurtosis analysis module into separate repository (delayed)\n - Deliver simulation results (incl fixing discv5 error with new Kurtosis version)\n - Second iteration Nomos CI\n- *vip::zkVM*\n - Continue researching on Nova open questions and Sangria\n - Draft the benchmark document (by the end of the week)\n - research hardware for benchmarks\n - research Halo2 cont'\n - zerokit:\n - merge a PR for deployment of arc-circom\n - deal with arc-circom master fail\n- *vip::RLNP2P*\n - offsite paris\n- *blockers*\n - *vip::zkVM:zerokit*: ark-circom deployment to crates io; contact to ark-circom team","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","content":"\nNOTE: This is a first experimental version moving towards the new 
reporting structure:\n\n**Last week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - related work section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - basic torpush encode/decode ( https://github.com/vacp2p/nim-libp2p-experimental/pull/1 )\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - (focus on Tor-push PoC)\n- *vsu::P2P*\n - admin/misc\n - EthCC (all CCs)\n- *vsu::Tokenomics*\n - admin/misc\n - Attended EthCC and side events in Paris\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - Kicked off a new approach for Codex collateral problem\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - Integrated SNT staking contracts with Python\n - milestone (50%, 2023/07/14) SNT litepaper\n - (delayed)\n - milestone(30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - milestone (95%, 2023/07/31) Wakurtosis Waku Report\n - Add timout to injection async call in WLS to avoid further issues (PR #139 https://github.com/vacp2p/wakurtosis/pull/139)\n - Plotting \u0026 analyse 100 msg/s off line Prometehus data\n - milestone (90%, 2023/07/31) Nomos CI testing\n - fixed errors in Nomos consensus simulation\n - milestone (30%, ...) gossipsub model analysis\n - add config options to script, allowing to load configs that can be directly compared to Wakurtosis results\n - added support for small world networks\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - (write ups will be available here: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Solved the open questions on Nova adn completed the document (will update the page)\n - Reviewed Nescience and working on a document\n - Reviewed partly the write up on FHE\n - writeup for Nova and Sangria; research on super nova\n - reading a new paper revisiting Nova (https://eprint.iacr.org/2023/969)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - zkvm\n - Researching Nova to understand the folding technique for ZKVM adaptation\n - zerokit\n - Rostyslav became circom-compat maintainer\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n - completed\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - admin/misc\n - EthCC + offsite\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - working on contributions section, based on https://hackmd.io/X1DoBHtYTtuGqYg0qK4zJw\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - working on establishing a connection via nim-libp2p tor-transport\n - setting up goerli test node (cont')\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - continue working on paper\n- *vsu::P2P*\n - milestone (...)\n - Implement ChokeMessage for GossipSub\n - Continue \"limited flood publishing\" (https://github.com/status-im/nim-libp2p/pull/911)\n- *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - Catch up with EthCC talks that we couldn't attend (schedule conflicts)\n - milestone (50%, 2023/07/14) SNT litepaper\n - Start building the SNT agent-based simulation\n- *vsu::Distributed Systems Testing*\n - milestone (100%, 2023/07/31) Wakurtosis Waku Report\n - 
finalize simulations\n - finalize report\n - milestone (100%, 2023/07/31) Nomos CI testing\n - finalize milestone\n - milestone (30%, ...) gossipsub model analysis\n - Incorporate Status control messages\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - Refine the Nescience WIP and FHE documents\n - research HyperNova\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n - zkvm\n - zerokit\n - circom: reach an agreement with other maintainers on master branch situation\n- *vip::RLNP2P*\n - maintenance\n - investigate why docker builds of nwaku are failing [zerokit dependency related]\n - documentation on how to use rln for projects interested (https://discord.com/channels/864066763682218004/1131734908474236968/1131735766163267695)(https://ci.infra.status.im/job/nim-waku/job/manual/45/console)\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - revert rln bandwidth reduction based on offsite discussion, move to different validator\n- *blockers*","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","content":"\n- *vc::Deep Research*\n - milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n - proposed solution section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - establishing torswitch and testing code\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - addressed feedback on current version of paper\n- *vsu::P2P*\n - nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH's EIP-4844\n - Merged IDontWant (https://github.com/status-im/nim-libp2p/pull/934) \u0026 Limit flood publishing (https://github.com/status-im/nim-libp2p/pull/911) 𝕏\n - This wraps up the \"mandatory\" optimizations for 4844. 
We will continue working on stagger sending and other optimizations\n  - nim-libp2p: (70%, 2023/07/31) WebRTC transport\n- *vsu::Tokenomics*\n  - admin/misc\n    - 2 CCs off for the week\n  - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n  - milestone (50%, 2023/08/30) SNT staking smart contract\n  - milestone (50%, 2023/07/14) SNT litepaper\n  - milestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n  - admin/misc\n    - Analysis module extracted from wakurtosis repo (https://github.com/vacp2p/wakurtosis/pull/142, https://github.com/vacp2p/DST-Analysis)\n    - hiring\n  - milestone (99%, 2023/07/31) Wakurtosis Waku Report\n    - Re-run simulations\n    - merge Discv5 PR (https://github.com/vacp2p/wakurtosis/pull/129).\n    - finalize Wakurtosis Tech Report v2\n  - milestone (100%, 2023/07/31) Nomos CI testing\n    - delivered first version of Nomos CI integration (https://github.com/vacp2p/wakurtosis/pull/141)\n  - milestone (30%, 2023/08/31) gossipsub model: Status control messages\n    - Waku model is updated to model topics/content-topics\n- *vip::zkVM*\n  - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n    - achievement :: nova questions answered (see document in Project: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n    - Nescience WIP done (to be delivered next week, priority)\n    - FHE review (lower prio)\n  - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n    - Working on discoveries about other benchmarks done on plonky2, starky, and halo2\n  - zkvm\n  - zerokit\n    - fixed ark-circom master \n    - achievement :: publish ark-circom https://crates.io/crates/ark-circom\n    - achievement :: publish zerokit_utils https://crates.io/crates/zerokit_utils\n    - achievement :: publish rln https://crates.io/crates/rln (𝕏 jointly with RLNP2P)\n- *vip::RLNP2P*\n  - milestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n    - Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - https://github.com/vacp2p/rln-contract and http://github.com/waku-org/waku-rln-contract\n    - Deployed to sepolia\n    - Fixed rln enabled docker image building in nwaku - https://github.com/waku-org/nwaku/pull/1853\n    - zerokit:\n      - achievement :: zerokit v0.3.0 release done - https://github.com/vacp2p/zerokit/releases/tag/v0.3.0 (𝕏 jointly with zkVM)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","content":"\n\nMore info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week):\nhttps://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n**Vac week 32** August 7th\n- *vsu::P2P*\n  - `vac:p2p:nim-libp2p:vac:maintenance`\n    - Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n  - `vac:p2p:nim-chronos:vac:maintenance`\n    - Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n    - Investigate flaky test using REUSE_PORT\n- *vsu::Tokenomics*\n  - (...)\n- *vsu::Distributed Systems Testing*\n  - `vac:dst:wakurtosis:waku:techreport`\n    - delivered: Wakurtosis Tech Report v2 (https://docs.google.com/document/d/1U3bzlbk_Z3ZxN9tPAnORfYdPRWyskMuShXbdxCj4xOM/edit?usp=sharing)\n  - `vac:dst:wakurtosis:vac:rlog`\n    - working on research log post on Waku Wakurtosis simulations\n  - 
`vac:dst:gsub-model:status:control-messages`\n - delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n - `vac:dst:gsub-model:vac:refactoring`\n - Refactoring and bug fixes\n - introduced and tested 2 new analytical models\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - delivered: extracted into separate module, independent of wls message\n - `vac:dst:wakurtosis:nomos:ci-integration_02`\n - planning\n - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n - planning; check usage of new codex simulator tool (https://github.com/codex-storage/cs-codex-dist-tests)\n- *vip::zkVM*\n - `vac:zkvm::vac:research-existing-proof-systems`\n - 90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me\n - 50% FHE review - needs to be refined and summarized\n - finished SuperNova writeup ( https://www.notion.so/SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\n - researched starky\n - 80% Halo2 notes ( https://www.notion.so/halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n - `vac:zkvm::vac:proof-system-benchmarks`\n - More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\n - Viewed some circuits on Nova and Poseidon\n - Read through Halo2 code (and Poseidon code) from Axiom\n- *vip::RLNP2P*\n - `vac:acz:rlnp2p:waku:production-readiness`\n - Waku rln contract registry - https://github.com/waku-org/waku-rln-contract/pull/3\n - mark duplicated messages as spam - https://github.com/waku-org/nwaku/pull/1867\n - use waku-org/waku-rln-contract as a submodule in nwaku - https://github.com/waku-org/nwaku/pull/1884\n - `vac:acz:zerokit:vac:maintenance`\n - Fixed atomic_operation ffi edge case error - https://github.com/vacp2p/zerokit/pull/195\n - docs cleanup - https://github.com/vacp2p/zerokit/pull/196\n - fixed version tags - https://github.com/vacp2p/zerokit/pull/194\n - released zerokit v0.3.1 - https://github.com/vacp2p/zerokit/pull/198\n - marked all functions as virtual in rln-contract for inheritors - https://github.com/vacp2p/rln-contract/commit/a092b934a6293203abbd4b9e3412db23ff59877e\n - make nwaku use zerokit v0.3.1 - https://github.com/waku-org/nwaku/pull/1886\n - rlnp2p implementers draft - https://hackmd.io/@rymnc/rln-impl-w-waku\n - `vac:acz:zerokit:vac:zerokit-v0.4`\n - zerokit v0.4.0 release planning - https://github.com/vacp2p/zerokit/issues/197\n- *vc::Deep Research*\n - `vac:dr:valpriv:vac:tor-push-poc`\n - redesigned the torpush integration in nimbus https://github.com/vacp2p/nimbus-eth2-experimental/pull/2\n - `vac:dr:valpriv:vac:tor-push-relwork`\n - Addressed further comments in paper, improved intro, added source level variation approach\n - `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - cont' work on the document","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-14":{"title":"2023-08-17 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n# Vac week 33 August 14th\n\n---\n## *vsu::P2P*\n### `vac:p2p:nim-libp2p:vac:maintenance`\n- Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n- delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925\n- delivered: Test-plans for the perf protocol https://github.com/lchenut/test-plans/tree/perf-nim\n- Bandwidth estimate as a parameter (waiting for final review) 
https://github.com/status-im/nim-libp2p/pull/941\n### `vac:p2p:nim-chronos:vac:maintenance`\n- delivered: Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n- delivered: fixed flaky test using REUSE_PORT https://github.com/status-im/nim-chronos/pull/438\n\n---\n## *vsu::Tokenomics*\n - admin/misc:\n - (5 CC days off)\n### `vac:tke::codex:economic-analysis`\n- Filecoin economic structure and Codex token requirements\n### `vac:tke::status:SNT-staking`\n- tests with the contracts\n### `vac:tke::nomos:economic-analysis`\n- resume discussions with Nomos team\n\n---\n## *vsu::Distributed Systems Testing (DST)*\n### `vac:dst:wakurtosis:waku:techreport`\n- 1st Draft of Wakurtosis Research Blog (https://github.com/vacp2p/vac.dev/pull/123)\n- Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n### `vac:dst:shadow:vac:basic-shadow-simulation`\n- Basic Shadow Simulation of a gossipsub node (Setup, 5nodes)\n### `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n- Try and plan on how to refactor/generalize testing tool from Codex.\n- Learn more about Kubernetes\n### `vac:dst:wakurtosis:nomos:ci-integration_02`\n- Enable subnetworks\n- Plan how to use wakurtosis with fixed version\n### `vac:dst:eng:vac:bundle-simulation-data`\n- Run requested simulations\n\n---\n## *vsu:Smart Contracts (SC)*\n### `vac:sc::vac:secureum-upskilling`\n - Learned about \n - cold vs warm storage reads and their gas implications\n - UTXO vs account models\n - `DELEGATECALL` vs `CALLCODE` opcodes, `CREATE` vs `CREATE2` opcodes; Yul Assembly\n - Unstructured proxies https://eips.ethereum.org/EIPS/eip-1967\n - C3 Linearization https://forum.openzeppelin.com/t/solidity-diamond-inheritance/2694) (Diamond inheritance and resolution)\n - Uniswap deep dive\n - Finished Secureum slot 2 and 3\n### `vac:sc::vac:maintainance/misc`\n - Introduced Vac's own `foundry-template` for smart contract projects\n - Goal is to have the same project structure across projects\n - Github repository: https://github.com/vacp2p/foundry-template\n\n---\n## *vsu:Applied Cryptogarphy \u0026 ZK (ACZ)*\n - `vac:acz:zerokit:vac:maintenance`\n - PR reviews https://github.com/vacp2p/zerokit/pull/200, https://github.com/vacp2p/zerokit/pull/201\n\n---\n## *vip::zkVM*\n### `vac:zkvm::vac:research-existing-proof-systems`\n- delivered Nescience WIP doc\n- delivered FHE review\n- delivered Nova vs Sangria done - Some discussions during the meeting\n- started HyperNova writeup\n- started writing a trimmed version of FHE writeup\n- researched CCS (for HyperNova)\n- Research Protogalaxy https://eprint.iacr.org/2023/1106 and Protostar https://eprint.iacr.org/2023/620.\n### `vac:zkvm::vac:proof-system-benchmarks`\n- More work on benchmarks is ongoing\n- Putting down a document that explains the differences\n\n---\n## *vc::Deep Research*\n### `vac:dr:valpriv:vac:tor-push-poc`\n- revised the code for PR\n### `vac:dr:valpriv:vac:tor-push-relwork`\n- added section for mixnet, non-Tor/non-onion routing-based anonymity network\n### `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n- Used shadow simulator to run first GossibSub simulation\n### `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n- Finalized 1st draft of the GossipSub scaling article","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-21":{"title":"2023-08-21 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\nVac Github Repos: 
https://www.notion.so/Vac-Repositories-75f7feb3861048f897f0fe95ead08b06\n\n# **Vac week 34** August 21th\n## *vsu::P2P*\n - `vac:p2p:nim-libp2p:vac:maintenance`\n - Test-plans for the perf protocol (99%: need to find why the executable doesn't work) https://github.com/libp2p/test-plans/pull/262\n - WebRTC: Merge all protocols (60%: slowed down by some complications and bad planning with Mbed-TLS) https://github.com/status-im/nim-webrtc/pull/3\n - WebRTC: DataChannel (25%)\n## *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - `vac:tke::codex:economic-analysis`\n - Call w/ Codex on token incentives, business analysis of Filecoin\n - `vac:tke::status:SNT-staking`\n - Bug fixes for tests for the contracts\n - `vac:tke::nomos:economic-analysis`\n - Narrowed focus to: 1) quantifying bribery attacks, 2) assessing how to min risks and max privacy of delegated staking\n - `vac:tke::waku:economic-analysis`\n - Caught up w/ Waku team on RLN, adopting a proactive effort to pitch them solutions\n## *vsu::Distributed Systems Testing (DST)*\n - `vac:dst:wakurtosis:vac:rlog`\n - Pushed second draft and figures (https://github.com/vacp2p/vac.dev/tree/DST-Wakurtosis)\n - `vac:dst:shadow:vac:basic-shadow-simulation`\n - Run 10K simulation of basic gossipsub node\n - `vac:dst:gsub-model:status:control-messages`\n - Got access to status superset\n - `vac:dst:analysis:nomos:nomos-simulation-analysis`\n - Basic CLI done, json to csv, can handle 10k nodes\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - Collection + analysis: now supports all waku protocols, along with relay\n - Cannot get gossip-sub peerage from waku or prometheus (working on getting info from gossipsub layer)\n - `vac:dst:wakurtosis:waku:techreport_02`\n - Merged 4 pending PRs; master now supports regular graphs\n - `vac:dst:eng:vac:bundle-simulation-data`\n - Run 1 and 10 rate simulations. 
100 still being run\n  - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n    - Working on splitting the structure of the codex tool; Working on diagrams also\n## *vsu:Smart Contracts (SC)*\n  - `vac:sc::status:community-contracts-ERC721`\n    - delivered (will need maintenance and adding features as requested in the future)\n  - `vac:sc::status:community-contracts-ERC20`\n    - started working on ERC20 contracts\n  - `vac:sc::vac:secureum-upskilling`\n    - Secureum: Finished Epoch 0, Slot 4 and 5\n    - Deep dive on First Depositor/Inflation attacks\n    - Learned about Minimal Proxy Contract pattern\n    - More Uniswap V2 protocol reading \n  - `vac:sc::vac:maintainance/misc`\n    - Worked on moving community dapp contracts to new foundry-template\n## *vsu:Applied Cryptography \u0026 ZK (ACZ)*\n  - `vac:acz:rlnp2p:waku:rln-relay-enhancments`\n    - rpc handler for waku rln relay - https://github.com/waku-org/nwaku/pull/1852\n    - fixed ganache's change in method to manage subprocesses, fixed timeouts related to it - https://github.com/waku-org/nwaku/pull/1913\n    - should error out on rln-relay mount failure - https://github.com/waku-org/nwaku/pull/1904\n    - fixed invalid start index being used in rln-relay - https://github.com/waku-org/nwaku/pull/1915\n    - constrain the values that can be used as idCommitments in the rln-contract - https://github.com/vacp2p/rln-contract/pull/26\n    - assist with waku-simulator testing\n    - remove registration capabilities from nwaku, it should be done out of band - https://github.com/waku-org/nwaku/pull/1916\n    - add `deployedBlockNumber` to the rln-contract for ease of fetching events from the client - https://github.com/vacp2p/rln-contract/pull/27\n  - `vac:acz:zerokit:vac:maintenance`\n    - exposed `seq_atomic_operation` ffi api to allow users to make use of the current index without making multiple ffi calls - https://github.com/vacp2p/zerokit/pull/206 \n    - use pmtree instead of vacp2p_pmtree now that changes have been upstreamed - https://github.com/vacp2p/zerokit/pull/203\n    - Prepared a PR to fix a stopgap introduced by PR 201 https://github.com/vacp2p/zerokit/pull/207 \n    - PR review https://github.com/vacp2p/zerokit/pull/202, https://github.com/vacp2p/zerokit/pull/206\n  - `vac:acz:zerokit:vac:zerokit-v0.4`\n    - substitute id_commitments for rate_commitments and update tests in rln-v2 - https://github.com/vacp2p/zerokit/pull/205\n    - rln-v2 working branch - https://github.com/vacp2p/zerokit/pull/204\n  - misc research while ooo:\n    - stealth commitment scheme inspired by erc-5564 - https://github.com/rymnc/erc-5564-bn254, associated circuit - https://github.com/rymnc/circom-rln-erc5564 (very heavy on the constraints)\n## *vip::zkVM*\n- `vac:zkvm::vac:research-existing-proof-systems`\n  - Updated the Nova questions document (https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451 -\u003e Projects -\u003e Nova_Research_Answers.pdf)\n  - Researched ProtoStar and Nova alternatives\n- `vac:zkvm::vac:proof-system-benchmarks`\n  - Drafted the Nova Benchmarks document (https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451 -\u003e Projects -\u003e Benchmarks.pdf)\n  - Researched hash functions \n  - Researched benchmarks\n## *vc::Deep Research*\n  - `vac:dr:valpriv:vac:tor-push-poc`\n    - Reimplemented torpush without any gossip sharing\n    - Added discovering peers for torpush in every epoch/10 minutes\n    - torswitch directly pushes messages to separately identified peers\n  - `vac:dr:valpriv:vac:tor-push-relwork`\n    - added quantified measures related to privacy in the paper section\n  - 
`vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - Explored different unstructured p2p application architectuture\n - Studied literature on better bandwidth utilization in unstructured p2p networks.\n - `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n - Worked on GossibSup simulation in shadow simulator. Tried understanding different libp2p functions\n - Created short awk scripts for analyzing results.\n - `vac:dr:consensus:nomos:carnot-bribery-article`\n - Continue work on the article on bribery attacks, PoS and Carnot\n - Completed presentation about the bribery attacks and Carnot\n - `vac:dr:consensus:nomos:carnot-paper`\n - Discussed Carnot tests and results with Nomos team. Some adjustment to the parameters needed to be made to accurate results.","lastmodified":"2023-08-21T19:29:39.682290617Z","tags":["vac-updates"]},"/roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","content":"\n```mermaid\n%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': '#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n```\n\n## Completion Deliverable\nTBD\n\n## Epics\n- [Github Issue Tracker](https://github.com/waku-org/pm/issues/12)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","content":"\n- 90% - [Waku Network support for 10k users](roadmap/waku/milestone-waku-10-users.md)\n- 80% - Waku Network support for 1MM users\n- 65% - Restricted-run (light node) protocols are production ready\n- 60% - Peer management strategy for relay and light nodes are defined and implemented\n- 10% - Quality processes are implemented for `nwaku` and `go-waku`\n- 80% - Define and track network and community metrics for continuous monitoring improvement\n- 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n- 15% - Dogfooding of RLN by platforms has started\n- 06% - First protocol to incentivize operators has been defined","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","content":"\nDisclaimer: First attempt playing with the format. 
Incomplete as not everyone is back and we are still adjusting the milestones.\n\n---\n\n## Docs\n\n### **Milestone**: Foundation for Waku docs (done)\n\n#### _achieved_:\n- overall layout\n- concept docs\n- community/showcase pages\n\n### **Milestone**: Foundation for node operator docs (done)\n#### _achieved_:\n- nodes overview page\n- guide for running nwaku (binaries, source, docker)\n- peer discovery config guide\n- reference docs for config methods and options\n\n### **Milestone**: Foundation for js-waku docs\n#### _achieved_:\n- js-waku overview + installation guide\n- lightpush + filter guide\n- store guide\n- @waku/create-app guide\n\n#### _next:_\n- improve @waku/react guide\n\n#### _blocker:_\n- polyfills issue with [js-waku](https://github.com/waku-org/js-waku/issues/1415)\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n### **Milestone**: Running nwaku in the cloud\n### **Milestone**: Add Waku guide to learnweb3.io\n### **Milestone**: Encryption docs for js-waku\n### **Milestone**: Advanced node operator doc (postgres, WSS, monitoring, common config)\n### **Milestone**: Foundation for go-waku docs\n### **Milestone**: Foundation for rust-waku-bindings docs\n### **Milestone**: Waku architecture docs\n### **Milestone**: Waku detailed roadmap and milestones\n### **Milestone**: Explain RLN\n\n---\n\n## Eco Dev (WIP)\n\n### **Milestone**: EthCC Logos side event organisation (done)\n### **Milestone**: Community Growth\n#### _achieved_: \n- Wrote several bounties, improved template; setup onboarding flow in Discord.\n\n#### _next_: \n- Review template, publish on GitHub\n\n### **Milestone**: Business Development (continuous)\n#### _achieved_: \n- Discussions with various leads in EthCC\n#### _next_: \n- Booking calls with said leads\n\n### **Milestone**: Setting Up Content Strategy for Waku\n\n#### _achieved_: \n- Discussions with Comms Hubs re Waku Blog \n- expressed needs and intent around future blog post and needed amplification\n- discuss strategies to onboard/involve non-dev and potential CTAs.\n\n### **Milestone**: Web3Conf (dates)\n### **Milestone**: DeCompute conf\n\n---\n\n## Research (WIP)\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- rendezvous hashing \n- weighting function \n- updated LIGHTPUSH to handle autosharding\n\n#### _next:_\n- update FILTER \u0026 STORE for autosharding\n\n---\n\n## nwaku (WIP)\n\n### **Milestone**: Postgres integration.\n#### _achieved:_\n- nwaku can store messages in a Postgres database\n- we started to perform stress tests\n\n#### _next:_\n- Analyse why some messages are not stored during stress tests happened in both sqlite and Postgres, so maybe the issue isn't directly related to _store_.\n\n### **Milestone**: nwaku as a library (C-bindings)\n#### _achieved:_\n- The integration is in progress through N-API framework\n\n#### _next:_\n- Make the nodejs to properly work by running the _nwaku_ node in a separate thread.\n\n---\n\n## go-waku (WIP)\n\n\n---\n\n## js-waku (WIP)\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved: \n- spec test for connection manager\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n### **Milestone**: Static Sharding\n#### _next_: \n- start implementation of static sharding in js-waku\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- js-lip2p upgrade to remove usage of polyfills (draft PR)\n\n#### _next_: \n- merge 
and release js-libp2p upgrade\n\n### **Milestone**: Waku Relay in the Browser\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","content":"\n## Docs\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n#### _next:_ \n- rewrite docs in British English\n### **Milestone**: Running nwaku in the cloud\n#### _next:_ \n- publish guides for Digital Ocean, Oracle, Fly.io\n\n---\n## Eco Dev (WIP)\n\n---\n## Research\n\n### **Milestone**: Detailed network requirements and task breakdown\n#### _achieved:_ \n- gathering rough network requirements\n#### _next:_ \n- detailed task breakdown per milestone and effort allocation\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- update FILTER \u0026 STORE for autosharding\n#### _next:_ \n- RFC review \u0026 updates \n- code review \u0026 updates\n\n---\n## nwaku\n\n### **Milestone**: nwaku release process automation\n#### _next_:\n- setup automation to test/simulate current `master` to prevent/limit regressions\n- expand target architectures and platforms for release artifacts (e.g. arm64, Win...)\n### **Milestone**: HTTP Rest API for protocols\n#### _next:_ \n- Filter API added \n- tests to complete.\n\n---\n## go-waku\n\n### **Milestone**: Increase Maintability Score. Refer to [CodeClimate report](https://codeclimate.com/github/waku-org/go-waku)\n#### _next:_ \n- define scope on which issues reported by CodeClimate should be fixed. Initially it should be limited to reduce code complexity and duplication.\n\n### **Milestone**: RLN updates, refer [issue](https://github.com/waku-org/go-waku/issues/608).\n_achieved_:\n- expose `set_tree`, `key_gen`, `seeded_key_gen`, `extended_seeded_keygen`, `recover_id_secret`, `set_leaf`, `init_tree_with_leaves`, `set_metadata`, `get_metadata` and `get_leaf` \n- created an example on how to use RLN with go-waku\n- service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n#### _next_: \n- merkle tree batch operations (in progress) \n- usage of persisted merkle tree db\n\n### **Milestone**: Improve test coverage for functional tests of all protocols. 
Refer to [CodeClimate report]\n#### _next_: \n- define scope on which code sections should be covered by tests\n\n### **Milestone**: C-Bindings\n#### _next_: \n- update API to match nwaku's (by using callbacks instead of strings that require freeing)\n\n---\n## js-waku\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n#### _next_: \n- fallback improvement for peer connect rejection\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n#### _next_: \n- robusting support around peer-exchange for examples\n### **Milestone**: Static Sharding\n#### _achieved_: \n- WIP implementation of static sharding in js-waku\n#### _next_: \n- investigation around gauging connection loss;\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- improve \u0026 update @waku/react \n- merge and release js-libp2p upgrade\n\n#### _next:_\n- update examples to latest release + make sure no old/unused packages there\n\n### **Milestone**: Maintenance\n#### _achieved_: \n- update to libp2p@0.46.0\n#### _next_:\n- suit of optional tests in pipeline\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","content":"\nMilestones for current works are created and used. Next steps are:\n1) Refine scope of [research work](https://github.com/waku-org/research/issues/3) for rest of the year and create matching milestones for research and waku clients\n2) Review work not coming from research and setting dates\nNote that format matches the Notion page but can be changed easily as it's scripted\n\n\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n- _blocker_: \n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Docker compose with `nwaku` + `postgres` + `prometheus` + `grafana` + `postgres_exporter` https://github.com/alrevuelta/nwaku-compose/pull/3\n- _next_: Carry on with stress testing\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: feedback/update cycles for FILTER \u0026 LIGHTPUSH\n- _next_: New fleet, updating ENR from live subscriptions and merging\n- _blocker_: Architecturally it seams difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\n**[Move Waku v1 and Waku-Bridge to new repos](https://github.com/waku-org/nwaku/issues/1767)** {E:2023-qa}\n\n- _achieved_: Removed v1 and wakubridge code from nwaku repo\n- _next_: Remove references to `v2` from nwaku directory structure and documents\n\n**[nwaku c-bindings](https://github.com/waku-org/nwaku/issues/1332)** {E:2023-many-platforms}\n\n- _achieved_:\n - Moved the Waku execution into a secondary working thread. Essential for NodeJs.\n - Adapted the NodeJs example to use the `libwaku` with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing. 
\n- _next_: start applying the thread-safety recommendations https://github.com/waku-org/nwaku/issues/1878\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node.\n\n---\n## js-waku\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\n- _next_: saving successfully connected PX peers to local storage for easier connections on reload\n\n**[Waku Relay scalability in the Browser](https://github.com/waku-org/js-waku/issues/905)** {NO EPIC}\n\n- _achieved_: draft of direct browser-browser RTC example https://github.com/waku-org/js-waku-examples/pull/260 \n- _next_: improve the example (connection re-usage), work on contentTopic based RTC example\n\n---\n## go-waku\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: updated c-bindings to use callbacks\n- _next_: refactor v1 encoding functions and update RFC\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Enabled -race flag and ran all unit tests to identify data races.\n- _next_: Fix issues reported by the data race detector tool\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\n- _next_: resume onchain sync from persisted tree db\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: Basic peer management to ensure standard in/out ratio for relay peers.\n- _next_: add service slots to peer manager\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: production of swags and marketing collaterals for web3conf completed\n- _next_: web3conf talk and side event production. 
various calls with commshub for preparing marketing collaterals.\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)** {E:2023-eco-growth}\n\n- _next_: create guide on `@waku/react` and debugging js-waku web apps\n\n**[Docs general improvement/incorporating feedback (2023)](https://github.com/waku-org/docs.waku.org/issues/102)** {E:2023-eco-growth}\n\n- _achieved_: rewrote the docs in UK English\n- _next_: update docs terms, announce js-waku docs\n\n**[Foundation of js-waku docs](https://github.com/waku-org/docs.waku.org/issues/101)** {E:2023-eco-growth}\n\n_achieved_: added guide on js-waku bootstrapping\n\n---\n## Research\n\n**[1.1 Network requirements and task breakdown](https://github.com/waku-org/research/issues/6)** {E:2023-1mil-users}\n\n- _achieved_: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\n- _next_: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","content":"\n\n# 2023-08-14 Waku weekly\n---\n## Epics\n\n**[Waku Network Can Support 10K Users](https://github.com/waku-org/pm/issues/12)** {E:2023-10k-users}\n\nAll software has been delivered. Pending items are:\n- Running stress testing on PostgreSQL to confirm performance gain https://github.com/waku-org/nwaku/issues/1894\n- Setting up a staging fleet for Status to try static sharding\n- Running simulations for Store protocol: [Will confirm with Vac/DST on dates/commitment](https://github.com/vacp2p/research/issues/191#issuecomment-1672542165) and probably move this to [1mil epic](https://github.com/waku-org/pm/issues/31)\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\n- _next_: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\n- _blocker_: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)**\n\n- _next_: document notes/recommendations for NodeJS, begin docs on `js-waku` encryption\n\n---\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: minor CI fixes and improvements\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Learned that the insertion rate is constrained by the `relay` protocol. i.e. the maximum insert rate is limited by `relay` so I couldn't push the \"insert\" operation to a limit from a _Postgres_ point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the _relay_ protocol doesn't process all of them.\n- _next_: Carry on with stress testing. 
Analyze the performance differences between _Postgres_ and _SQLite_ regarding the _read_ operations.\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: many feedback/update cycles for FILTER, LIGHTPUSH, STORE \u0026 RFC\n- _next_: updating ENR for live subscriptions\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. Add more tests.\n\n---\n## js-waku\n\n**[Maintenance](https://github.com/waku-org/js-waku/issues/1455)** {E:2023-qa}\n\n- achieved: upgrade libp2p \u0026 chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict \n\n**[Developer Experience (2023)](https://github.com/waku-org/js-waku/issues/1453)** {E:2023-eco-growth}\n\n- _achieved_: non blocking pipeline step (https://github.com/waku-org/js-waku/issues/1411)\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: close the \"fallback mechanism for peer rejections\", refactor peer-exchange compliance test\n- _next_: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n---\n## go-waku\n\n**[Maintenance](https://github.com/waku-org/go-waku/issues/634)** {E:2023-qa}\n\n- _achieved_: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Fixed issues reported by the data race detector tool.\n- _next_: identify areas where test coverage needs improvement.\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\n- _next_: interop with nwaku\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: add service slots to peer manager.\n- _next_: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]}} \ No newline at end of file diff --git a/indices/contentIndex.faa8fe6cf8d303f6933dc0857054253e.min.json b/indices/contentIndex.faa8fe6cf8d303f6933dc0857054253e.min.json deleted file mode 100644 index 9e2a4400c..000000000 --- a/indices/contentIndex.faa8fe6cf8d303f6933dc0857054253e.min.json +++ /dev/null @@ -1 +0,0 @@ -{"/":{"title":"Logos Technical Roadmap and Activity","content":"This site attempts to inform the previous, current, and future work 
required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. To learn more about the motivation, please visit the [Logos Collective Site](https://logos.co).\n\n## Navigation\n\n### Waku\n- [Milestones](roadmap/waku/milestones-overview.md)\n- [weekly updates](tags/waku-updates)\n\n### Codex\n- [Milestones](roadmap/codex/milestones-overview.md)\n- [weekly updates](tags/codex-updates)\n\n### Nomos\n- [Milestones](roadmap/nomos/milestones-overview.md)\n- [weekly updates](tags/nomos-updates)\n\n### Vac\n- [Milestones](roadmap/vac/milestones-overview.md)\n- [weekly updates](tags/vac-updates)\n\n### Innovation Lab\n- [Milestones](roadmap/innovation_lab/milestones-overview.md)\n- [weekly updates](tags/ilab-updates)\n\n### Comms (Acid Info)\n- [Milestones](roadmap/acid/milestones-overview.md)\n- [weekly updates](tags/acid-updates)\n","lastmodified":"2023-08-21T15:49:54.901241828Z","tags":[]},"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":{"title":"CJK + Latex Support (测试)","content":"\n## Chinese, Japanese, Korean Support\n几乎在我们意识到之前,我们已经离开了地面。\n\n우리가 그것을 알기도 전에 우리는 땅을 떠났습니다.\n\n私たちがそれを知るほぼ前に、私たちは地面を離れていました。\n\n## Latex\n\nBlock math works with two dollar signs `$$...$$`\n\n$$f(x) = \\int_{-\\infty}^\\infty\n f\\hat(\\xi),e^{2 \\pi i \\xi x}\n \\,d\\xi$$\n\t\nInline math also works with single dollar signs `$...$`. For example, Euler's identity but inline: $e^{i\\pi} = 0$\n\nAligned equations work quite well:\n\n$$\n\\begin{aligned}\na \u0026= b + c \\\\ \u0026= e + f \\\\\n\\end{aligned}\n$$\n\nAnd matrices\n\n$$\n\\begin{bmatrix}\n1 \u0026 2 \u0026 3 \\\\\na \u0026 b \u0026 c\n\\end{bmatrix}\n$$\n\n## RTL\nMore information on configuring RTL languages like Arabic in the [config](config.md) page.\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/callouts":{"title":"Callouts","content":"\n## Callout support\n\nQuartz supports the same Admonition-callout syntax as Obsidian.\n\nThis includes\n- 12 Distinct callout types (each with several aliases)\n- Collapsable callouts\n\nSee [documentation on supported types and syntax here](https://help.obsidian.md/How+to/Use+callouts#Types).\n\n## Showcase\n\n\u003e [!EXAMPLE] Examples\n\u003e\n\u003e Aliases: example\n\n\u003e [!note] Notes\n\u003e\n\u003e Aliases: note\n\n\u003e [!abstract] Summaries \n\u003e\n\u003e Aliases: abstract, summary, tldr\n\n\u003e [!info] Info \n\u003e\n\u003e Aliases: info, todo\n\n\u003e [!tip] Hint \n\u003e\n\u003e Aliases: tip, hint, important\n\n\u003e [!success] Success \n\u003e\n\u003e Aliases: success, check, done\n\n\u003e [!question] Question \n\u003e\n\u003e Aliases: question, help, faq\n\n\u003e [!warning] Warning \n\u003e\n\u003e Aliases: warning, caution, attention\n\n\u003e [!failure] Failure \n\u003e\n\u003e Aliases: failure, fail, missing\n\n\u003e [!danger] Error\n\u003e\n\u003e Aliases: danger, error\n\n\u003e [!bug] Bug\n\u003e\n\u003e Aliases: bug\n\n\u003e [!quote] Quote\n\u003e\n\u003e Aliases: quote, cite\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/config":{"title":"Configuration","content":"\n## Configuration\nQuartz is designed to be extremely configurable. You can find the bulk of the configuration scattered throughout the repository depending on how in-depth you'd like to get.\n\nThe majority of configuration can be found under `data/config.yaml`. 
An annotated example configuration is shown below.\n\n```yaml {title=\"data/config.yaml\"}\n# The name to display in the footer\nname: Jacky Zhao\n\n# whether to globally show the table of contents on each page\n# this can be turned off on a per-page basis by adding this to the\n# front-matter of that note\nenableToc: true\n\n# whether to by-default open or close the table of contents on each page\nopenToc: false\n\n# whether to display on-hover link preview cards\nenableLinkPreview: true\n\n# whether to render titles for code blocks\nenableCodeBlockTitle: true \n\n# whether to render copy buttons for code blocks\nenableCodeBlockCopy: true \n\n# whether to render callouts\nenableCallouts: true\n\n# whether to try to process Latex\nenableLatex: true\n\n# whether to enable single-page-app style rendering\n# this prevents flashes of unstyled content and improves\n# smoothness of Quartz. More info in issue #109 on GitHub\nenableSPA: true\n\n# whether to render a footer\nenableFooter: true\n\n# whether backlinks of pages should show the context in which\n# they were mentioned\nenableContextualBacklinks: true\n\n# whether to show a section of recent notes on the home page\nenableRecentNotes: false\n\n# whether to display an 'edit' button next to the last edited field\n# that links to github\nenableGitHubEdit: true\nGitHubLink: https://github.com/jackyzha0/quartz/tree/hugo/content\n\n# whether to use Operand to power semantic search\n# IMPORTANT: replace this API key with your own if you plan on using\n# Operand search!\nenableSemanticSearch: false\noperandApiKey: \"REPLACE-WITH-YOUR-OPERAND-API-KEY\"\n\n# page description used for SEO\ndescription:\n Host your second brain and digital garden for free. Quartz features extremely fast full-text search,\n Wikilink support, backlinks, local graph, tags, and link previews.\n\n# title of the home page (also for SEO)\npage_title:\n \"🪴 Quartz 3.2\"\n\n# links to show in the footer\nlinks:\n - link_name: Twitter\n link: https://twitter.com/_jzhao\n - link_name: Github\n link: https://github.com/jackyzha0\n```\n\n### Code Block Titles\nTo add code block titles with Quartz:\n\n1. Ensure that code block titles are enabled in Quartz's configuration:\n\n ```yaml {title=\"data/config.yaml\", linenos=false}\n enableCodeBlockTitle: true\n ```\n\n2. Add the `title` attribute to the desired [code block\n fence](https://gohugo.io/content-management/syntax-highlighting/#highlighting-in-code-fences):\n\n ```markdown {linenos=false}\n ```yaml {title=\"data/config.yaml\"}\n enableCodeBlockTitle: true # example from step 1\n ```\n ```\n\n**Note** that if `{title=\u003cmy-title\u003e}` is included, and code block titles are not\nenabled, no errors will occur, and the title attribute will be ignored.\n\n### HTML Favicons\nIf you would like to customize the favicons of your Quartz-based website, you \ncan add them to the `data/config.yaml` file. The **default** without any set \n`favicon` key is:\n\n```html {title=\"layouts/partials/head.html\", linenostart=15}\n\u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n```\n\nThe default can be overridden by defining a value to the `favicon` key in your \n`data/config.yaml` file. For example, here is a `List[Dictionary]` example format, which is\nequivalent to the default:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon:\n - { rel: \"shortcut icon\", href: \"icon.png\", type: \"image/png\" }\n# - { ... 
} # Repeat for each additional favicon you want to add\n```\n\nIn this format, the keys are identical to their HTML representations.\n\nIf you plan to add multiple favicons generated by a website (see list below), it\nmay be easier to define it as HTML. Here is an example which appends the \n**Apple touch icon** to Quartz's default favicon:\n\n```yaml {title=\"data/config.yaml\", linenos=false}\nfavicon: |\n \u003clink rel=\"shortcut icon\" href=\"icon.png\" type=\"image/png\"\u003e\n \u003clink rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\"\u003e\n```\n\nThis second favicon will now be used as a web page icon when someone adds your \nwebpage to the home screen of their Apple device. If you are interested in more \ninformation about the current and past standards of favicons, you can read \n[this article](https://www.emergeinteractive.com/insights/detail/the-essentials-of-favicons/).\n\n**Note** that all generated favicon paths, defined by the `href` \nattribute, are relative to the `static/` directory.\n\n### Graph View\nTo customize the Interactive Graph view, you can poke around `data/graphConfig.yaml`.\n\n```yaml {title=\"data/graphConfig.yaml\"}\n# if true, a Global Graph will be shown on home page with full width, no backlink.\n# A different set of Local Graphs will be shown on sub pages.\n# if false, Local Graph will be default on every page as usual\nenableGlobalGraph: false\n\n### Local Graph ###\nlocalGraph:\n # whether automatically generate a legend\n enableLegend: false\n \n # whether to allow dragging nodes in the graph\n enableDrag: true\n \n # whether to allow zooming and panning the graph\n enableZoom: true\n \n # how many neighbours of the current node to show (-1 is all nodes)\n depth: 1\n \n # initial zoom factor of the graph\n scale: 1.2\n \n # how strongly nodes should repel each other\n repelForce: 2\n\n # how strongly should nodes be attracted to the center of gravity\n centerForce: 1\n\n # what the default link length should be\n linkDistance: 1\n \n # how big the node labels should be\n fontSize: 0.6\n \n # scale at which to start fading the labes on nodes\n opacityScale: 3\n\n### Global Graph ###\nglobalGraph:\n\t# same settings as above\n\n### For all graphs ###\n# colour specific nodes path off of their path\npaths:\n - /moc: \"#4388cc\"\n```\n\n\n## Styling\nWant to go even more in-depth? You can add custom CSS styling and change existing colours through editing `assets/styles/custom.scss`. If you'd like to target specific parts of the site, you can add ids and classes to the HTML partials in `/layouts/partials`. \n\n### Partials\nPartials are what dictate what gets rendered to the page. Want to change how pages are styled and structured? You can edit the appropriate layout in `/layouts`.\n\nFor example, the structure of the home page can be edited through `/layouts/index.html`. To customize the footer, you can edit `/layouts/partials/footer.html`\n\nMore info about partials on [Hugo's website.](https://gohugo.io/templates/partials/)\n\nStill having problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n\n## Language Support\n[CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) comes out of the box with Quartz.\n\nWant to support languages that read from right-to-left (like Arabic)? 
Hugo (and by proxy, Quartz) supports this natively.\n\nFollow the steps [Hugo provides here](https://gohugo.io/content-management/multilingual/#configure-languages) and modify your `config.toml`\n\nFor example:\n\n```toml\ndefaultContentLanguage = 'ar'\n[languages]\n [languages.ar]\n languagedirection = 'rtl'\n title = 'مدونتي'\n weight = 1\n```\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/custom-Domain":{"title":"Custom Domain","content":"\n### Registrar\nThis step is only applicable if you are using a **custom domain**! If you are using a `\u003cYOUR-USERNAME\u003e.github.io` domain, you can skip this step.\n\nFor this last bit to take effect, you also need to create a CNAME record with the DNS provider you register your domain with (i.e. NameCheap, Google Domains).\n\nGitHub has some [documentation on this](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site), but the tldr; is to\n\n1. Go to your forked repository (`github.com/\u003cYOUR-GITHUB-USERNAME\u003e/quartz`) settings page and go to the Pages tab. Under \"Custom domain\", type your custom domain, then click **Save**.\n2. Go to your DNS Provider and create a CNAME record that points from your domain to `\u003cYOUR-GITHUB-USERNAME.github.io.` (yes, with the trailing period).\n\n\t![Example Configuration for Quartz](google-domains.png)*Example Configuration for Quartz*\n3. Wait 30 minutes to an hour for the network changes to kick in.\n4. Done!","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/editing":{"title":"Editing Content in Quartz","content":"\n## Editing \nQuartz runs on top of [Hugo](https://gohugo.io/) so all notes are written in [Markdown](https://www.markdownguide.org/getting-started/).\n\n### Folder Structure\nHere's a rough overview of what's what.\n\n**All content in your garden can found in the `/content` folder.** To make edits, you can open any of the files and make changes directly and save it. You can organize content into any folder you'd like.\n\n**To edit the main home page, open `/content/_index.md`.**\n\nTo create a link between notes in your garden, just create a normal link using Markdown pointing to the document in question. Please note that **all links should be relative to the root `/content` path**. \n\n```markdown\nFor example, I want to link this current document to `notes/config.md`.\n[A link to the config page](notes/config.md)\n```\n\nSimilarly, you can put local images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\nYou can also use wikilinks if that is what you are more comfortable with!\n\n### Front Matter\nHugo is picky when it comes to metadata for files. Make sure that your title is double-quoted and that you have a title defined at the top of your file like so. You can also add tags here as well.\n\n```yaml\n---\ntitle: \"Example Title\"\ntags:\n- example-tag\n---\n\nRest of your content here...\n```\n\n### Obsidian\nI recommend using [Obsidian](http://obsidian.md/) as a way to edit and grow your digital garden. 
It comes with a really nice editor and graphical interface to preview all of your local files.\n\nThis step is **highly recommended**.\n\n\u003e 🔗 Step 3: [How to setup your Obsidian Vault to work with Quartz](obsidian.md)\n\n## Previewing Changes\nThis step is purely optional and mostly for those who want to see the published version of their digital garden locally before opening it up to the internet. This is *highly recommended* but not required.\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)\n\nFor those who like to live life more on the edge, viewing the garden through Obsidian gets you pretty close to the real thing.\n\n## Publishing Changes\nNow that you know the basics of managing your digital garden using Quartz, you can publish it to the internet!\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/hosting":{"title":"Deploying Quartz to the Web","content":"\n## Hosting on GitHub Pages\nQuartz is designed to be effortless to deploy. If you forked and cloned Quartz directly from the repository, everything should already be good to go! Follow the steps below.\n\n### Enable GitHub Actions\nBy default, GitHub disables workflows from running automatically on Forked Repostories. Head to the 'Actions' tab of your forked repository and Enable Workflows to setup deploying your Quartz site!\n\n![Enable GitHub Actions](github-actions.png)*Enable GitHub Actions*\n\n### Enable GitHub Pages\n\nHead to the 'Settings' tab of your forked repository and go to the 'Pages' tab.\n\n1. (IMPORTANT) Set the source to deploy from `master` (and not `hugo`) using `/ (root)`\n2. Set a custom domain here if you have one!\n\n![Enable GitHub Pages](github-pages.png)*Enable GitHub Pages*\n\n### Pushing Changes\nTo see your changes on the internet, we need to push it them to GitHub. Quartz is a `git` repository so updating it is the same workflow as you would follow as if it were just a regular software project.\n\n```shell\n# Navigate to Quartz folder\ncd \u003cpath-to-quartz\u003e\n\n# Commit all changes\ngit add .\ngit commit -m \"message describing changes\"\n\n# Push to GitHub to update site\ngit push origin hugo\n```\n\nNote: we specifically push to the `hugo` branch here. Our GitHub action automatically runs everytime a push to is detected to that branch and then updates the `master` branch for redeployment.\n\n### Setting up the Site\nNow let's get this site up and running. Never hosted a site before? No problem. Have a fancy custom domain you already own or want to subdomain your Quartz? That's easy too.\n\nHere, we take advantage of GitHub's free page hosting to deploy our site. Change `baseURL` in `/config.toml`. \n\nMake sure that your `baseURL` has a trailing `/`!\n\n[Reference `config.toml` here](https://github.com/jackyzha0/quartz/blob/hugo/config.toml)\n\n```toml\nbaseURL = \"https://\u003cYOUR-DOMAIN\u003e/\"\n```\n\nIf you are using this under a subdomain (e.g. `\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz`), include the trailing `/`. **You need to do this especially if you are using GitHub!**\n\n```toml\nbaseURL = \"https://\u003cYOUR-GITHUB-USERNAME\u003e.github.io/quartz/\"\n```\n\nChange `cname` in `/.github/workflows/deploy.yaml`. 
Again, if you don't have a custom domain to use, you can use `\u003cYOUR-USERNAME\u003e.github.io`.\n\nPlease note that the `cname` field should *not* have any path `e.g. end with /quartz` or have a trailing `/`.\n\n[Reference `deploy.yaml` here](https://github.com/jackyzha0/quartz/blob/hugo/.github/workflows/deploy.yaml)\n\n```yaml {title=\".github/workflows/deploy.yaml\"}\n- name: Deploy \n uses: peaceiris/actions-gh-pages@v3 \n with: \n\tgithub_token: ${{ secrets.GITHUB_TOKEN }} # this can stay as is, GitHub fills this in for us!\n\tpublish_dir: ./public \n\tpublish_branch: master\n\tcname: \u003cYOUR-DOMAIN\u003e\n```\n\nHave a custom domain? [Learn how to set it up with Quartz ](custom%20Domain.md).\n\n### Ignoring Files\nOnly want to publish a subset of all of your notes? Don't worry, Quartz makes this a simple two-step process.\n\n❌ [Excluding pages from being published](ignore%20notes.md)\n\n---\n\nNow that your Quartz is live, let's figure out how to make Quartz really *yours*!\n\n\u003e Step 6: 🎨 [Customizing Quartz](config.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":["setup"]},"/private/notes/ignore-notes":{"title":"Ignoring Notes","content":"\n### Quartz Ignore\nEdit `ignoreFiles` in `config.toml` to include paths you'd like to exclude from being rendered.\n\n```toml\n...\nignoreFiles = [ \n \"/content/templates/*\", \n \"/content/private/*\", \n \"\u003cyour path here\u003e\"\n]\n```\n\n`ignoreFiles` supports the use of Regular Expressions (RegEx) so you can ignore patterns as well (e.g. ignoring all `.png`s by doing `\\\\.png$`).\nTo ignore a specific file, you can also add the tag `draft: true` to the frontmatter of a note.\n\n```markdown\n---\ntitle: Some Private Note\ndraft: true\n---\n...\n```\n\nMore details in [Hugo's documentation](https://gohugo.io/getting-started/configuration/#ignore-content-and-data-files-when-rendering).\n\n### Global Ignore\nHowever, just adding to the `ignoreFiles` will only prevent the page from being access through Quartz. If you want to prevent the file from being pushed to GitHub (for example if you have a public repository), you need to also add the path to the `.gitignore` file at the root of the repository.","lastmodified":"2023-08-17T19:42:53.944430458Z","tags":[]},"/private/notes/obsidian":{"title":"Obsidian Vault Integration","content":"\n## Setup\nObsidian is the preferred way to use Quartz. You can either create a new Obsidian Vault or link one that your already have.\n\n### New Vault\nIf you don't have an existing Vault, [download Obsidian](https://obsidian.md/) and create a new Vault in the `/content` folder that you created and cloned during the [setup](setup.md) step.\n\n### Linking an existing Vault\nThe easiest way to use an existing Vault is to copy all of your files (directory and hierarchies intact) into the `/content` folder.\n\n## Settings\nGreat, now that you have your Obsidian linked to your Quartz, let's fix some settings so that they play well.\n\n1. Under Options \u003e Files and Links, set the New link format to always use Absolute Path in Vault.\n2. Go to Settings \u003e Files \u0026 Links \u003e Turn \"on\" automatically update internal links.\n\n![Obsidian Settings](obsidian-settings.png)*Obsidian Settings*\n\n## Templates\nInserting front matter everytime you want to create a new Note gets annoying really quickly. 
Luckily, Obsidian supports templates which makes inserting new content really easily.\n\n**If you decide to overwrite the `/content` folder completely, don't remove the `/content/templates` folder!**\n\nHead over to Options \u003e Core Plugins and enable the Templates plugin. Then go to Options \u003e Hotkeys and set a hotkey for 'Insert Template' (I recommend `[cmd]+T`). That way, when you create a new note, you can just press the hotkey for a new template and be ready to go!\n\n\u003e 👀 Step 4: [Preview Quartz Changes](preview%20changes.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/philosophy":{"title":"Quartz Philosophy","content":"\n\u003e “[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” — Richard Hamming\n\n## Why Quartz?\nHosting a public digital garden isn't easy. There are an overwhelming number of tutorials, resources, and guides for tools like [Notion](https://www.notion.so/), [Roam](https://roamresearch.com/), and [Obsidian](https://obsidian.md/), yet none of them have super easy to use *free* tools to publish that garden to the world.\n\nI've personally found that\n1. It's nice to access notes from anywhere\n2. Having a public digital garden invites open conversations\n3. It makes keeping personal notes and knowledge *playful and fun*\n\nI was really inspired by [Bianca](https://garden.bianca.digital/) and [Joel](https://joelhooks.com/digital-garden)'s digital gardens and wanted to try making my own.\n\n**The goal of Quartz is to make hosting your own public digital garden free and simple.** You don't even need your own website. Quartz does all of that for you and gives your own little corner of the internet.\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/preview-changes":{"title":"Preview Changes","content":"\nIf you'd like to preview what your Quartz site looks like before deploying it to the internet, here's exactly how to do that!\n\nNote that both of these steps need to be completed.\n\n## Install `hugo-obsidian`\nThis step will generate the list of backlinks for Hugo to parse. Ensure you have [Go](https://golang.org/doc/install) (\u003e= 1.16) installed.\n\n```bash\n# Install and link `hugo-obsidian` locally\ngo install github.com/jackyzha0/hugo-obsidian@latest\n```\n\nIf you are running into an error saying that `command not found: hugo-obsidian`, make sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize hugo-obsidian as an executable.\n\nAfterwards, start the Hugo server as shown above and your local backlinks and interactive graph should be populated!\n\n## Installing Hugo\nHugo is the static site generator that powers Quartz. [Install Hugo with \"extended\" Sass/SCSS version](https://gohugo.io/getting-started/installing/) first. Then,\n\n```bash\n# Navigate to your local Quartz folder\ncd \u003clocation-of-your-local-quartz\u003e\n\n# Start local server\nmake serve\n\n# View your site in a browser at http://localhost:1313/\n```\n\n\u003e 🌍 Step 5: [Hosting Quartz online!](hosting.md)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/search":{"title":"Search","content":"\nQuartz supports two modes of searching through content.\n\n## Full-text\nFull-text search is the default in Quartz. It produces results that *exactly* match the search query. 
This is easier to setup but usually produces lower quality matches.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: false\n```\n\n## Natural Language\nNatural language search is powered by [Operand](https://operand.ai/). It understands language like a person does and finds results that best match user intent. In this sense, it is closer to how Google Search works.\n\nNatural language search tends to produce higher quality results than full-text search.\n\nHere's how to set it up.\n\n1. Create an Operand Account on [their website](https://operand.ai/).\n2. Go to Dashboard \u003e Settings \u003e Integrations.\n3. Follow the steps to setup the GitHub integration. Operand needs access to GitHub in order to index your digital garden properly!\n4. Head over to Dashboard \u003e Objects and press `(Cmd + K)` to open the omnibar and select 'Create Collection'.\n\t1. Set the 'Collection Label' to something that will help you remember it.\n\t2. You can leave the 'Parent Collection' field empty.\n5. Click into your newly made Collection.\n\t1. Press the 'share' button that looks like three dots connected by lines.\n\t2. Set the 'Interface Type' to `object-search` and click 'Create'.\n\t3. This will bring you to a new page with a search bar. Ignore this for now.\n6. Go back to Dashboard \u003e Settings \u003e API Keys and find your Quartz-specific Operand API key under 'Other keys'.\n\t1. Copy the key (which looks something like `0e733a7f-9b9c-48c6-9691-b54fa1c8b910`).\n\t2. Open `data/config.yaml`. Set `enableSemanticSearch` to `true` and `operandApiKey` to your copied key.\n\n```yaml {title=\"data/config.yaml\"}\n# the default option\nenableSemanticSearch: true\noperandApiKey: \"0e733a7f-9b9c-48c6-9691-b54fa1c8b910\"\n```\n7. Make a commit and push your changes to GitHub. See the [[hosting|hosting]] page if you haven't done this already.\n\t1. This step is *required* for Operand to be able to properly index your content. \n\t2. Head over to Dashboard \u003e Objects and select the collection that you made earlier\n8. Press `(Cmd + K)` to open the omnibar again and select 'Create GitHub Repo'\n\t1. Set the 'Repository Label' to `Quartz`\n\t2. Set the 'Repository Owner' to your GitHub username\n\t3. Set the 'Repository Ref' to `master`\n\t4. Set the 'Repository Name' to the name of your repository (usually just `quartz` if you forked the repository without changing the name)\n\t5. Leave 'Root Path' and 'Root URL' empty\n9. Wait for your repository to index and enjoy natural language search in Quartz! Operand refreshes the index every 2h so all you need to do is just push to GitHub to update the contents in the search.","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/setup":{"title":"Setup","content":"\n## Making your own Quartz\nSetting up Quartz requires a basic understanding of `git`. If you are unfamiliar, [this resource](https://resources.nwplus.io/2-beginner/how-to-git-github.html) is a great place to start!\n\n### Forking\n\u003e A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.\n\nNavigate to the GitHub repository for the Quartz project:\n\n📁 [Quartz Repository](https://github.com/jackyzha0/quartz)\n\nThen, Fork the repository into your own GitHub account. If you don't have an account, you can make on for free [here](https://github.com/join). 
More details about forking a repo can be found on [GitHub's documentation](https://docs.github.com/en/get-started/quickstart/fork-a-repo).\n\n### Cloning\nAfter you've made a fork of the repository, you need to download the files locally onto your machine. Ensure you have `git`, then type the following command replacing `YOUR-USERNAME` with your GitHub username.\n\n```shell\ngit clone https://github.com/YOUR-USERNAME/quartz\n```\n\n## Editing\nGreat! Now you have everything you need to start editing and growing your digital garden. If you're ready to start writing content already, check out the recommended flow for editing notes in Quartz.\n\n\u003e ✏️ Step 2: [Editing Notes in Quartz](editing.md)\n\nHaving problems? Checkout our [FAQ and Troubleshooting guide](troubleshooting.md).\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["setup"]},"/private/notes/showcase":{"title":"Showcase","content":"\nWant to see what Quartz can do? Here are some cool community gardens :)\n\n- [Quartz Documentation (this site!)](https://quartz.jzhao.xyz/)\n- [Jacky Zhao's Garden](https://jzhao.xyz/)\n- [Scaling Synthesis - A hypertext research notebook](https://scalingsynthesis.com/)\n- [AWAGMI Intern Notes](https://notes.awagmi.xyz/)\n- [Shihyu's PKM](https://shihyuho.github.io/pkm/)\n- [Chloe's Garden](https://garden.chloeabrasada.online/)\n- [SlRvb's Site](https://slrvb.github.io/Site/)\n- [Course notes for Information Technology Advanced Theory](https://a2itnotes.github.io/quartz/)\n- [Brandon Boswell's Garden](https://brandonkboswell.com)\n- [Siyang's Courtyard](https://siyangsun.github.io/courtyard/)\n\nIf you want to see your own on here, submit a [Pull Request adding yourself to this file](https://github.com/jackyzha0/quartz/blob/hugo/content/notes/showcase.md)!\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/troubleshooting":{"title":"Troubleshooting and FAQ","content":"\nStill having trouble? Here are a list of common questions and problems people encounter when installing Quartz.\n\nWhile you're here, join our [Discord](https://discord.gg/cRFFHYye7t) :)\n\n### Does Quartz have Latex support?\nYes! See [CJK + Latex Support (测试)](CJK%20+%20Latex%20Support%20(测试).md) for a brief demo.\n\n### Can I use \\\u003cObsidian Plugin\\\u003e in Quartz?\nUnless it produces direct Markdown output in the file, no. There currently is no way to bundle plugin code with Quartz.\n\nThe easiest way would be to add your own HTML partial that supports the functionality you are looking for.\n\n### My GitHub pages is just showing the README and not Quartz\nMake sure you set the source to deploy from `master` (and not `hugo`) using `/ (root)`! See more in the [hosting](hosting.md) guide\n\n### Some of my pages have 'January 1, 0001' as the last modified date\nThis is a problem caused by `git` treating files as case-insensitive by default and some of your posts probably have capitalized file names. You can turn this off in your Quartz by running this command.\n\n```shell\n# in the root of your Quartz (same folder as config.toml)\ngit config core.ignorecase true\n\n# or globally (not recommended)\ngit config --global core.ignorecase true\n```\n\n### Can I publish only a subset of my pages?\nYes! Quartz makes selective publishing really easy. Heres a guide on [excluding pages from being published](ignore%20notes.md).\n\n### Can I host this myself and not on GitHub Pages?\nYes! All built files can be found under `/public` in the `master` branch. 
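For a quick local sanity check of the self-hosted route, the sketch below serves the generated `/public` folder with Python's built-in static file server. It assumes the `master` branch is checked out so that `./public` contains the built site; a real deployment would instead copy that folder to whatever static host you use.

```python
# Minimal sketch: locally serve the built site found in ./public
# (assumes the `master` branch is checked out so ./public exists).
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="public")
httpd = HTTPServer(("127.0.0.1", 8000), handler)
print("Serving ./public at http://127.0.0.1:8000/ - Ctrl+C to stop")
httpd.serve_forever()
```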
More details under [hosting](hosting.md).\n\n### `command not found: hugo-obsidian`\nMake sure you set your `GOPATH` correctly! This will allow your terminal to correctly recognize `hugo-obsidian` as an executable.\n\n```shell\n# Add the following 2 lines to your ~/.bash_profile\nexport GOPATH=/Users/$USER/go\nexport PATH=$GOPATH/bin:$PATH\n\n# In your current terminal, to reload the session\nsource ~/.bash_profile\n```\n\n### How come my notes aren't being rendered?\nYou probably forgot to include front matter in your Markdown files. You can either setup [Obsidian](obsidian.md) to do this for you or you need to manually define it. More details in [the 'how to edit' guide](editing.md).\n\n### My custom domain isn't working!\nWalk through the steps in [the hosting guide](hosting.md) again. Make sure you wait 30 min to 1 hour for changes to take effect.\n\n### How do I setup Google Analytics?\nYou can edit it in `config.toml` and either use a V3 (UA-) or V4 (G-) tag.\n\n### How do I change the content on the home page?\nTo edit the main home page, open `/content/_index.md`.\n\n### How do I change the colours?\nYou can change the theme by editing `assets/custom.scss`. More details on customization and themeing can be found in the [customization guide](config.md).\n\n### How do I add images?\nYou can put images anywhere in the `/content` folder.\n\n```markdown\nExample image (source is in content/notes/images/example.png)\n![Example Image](/content/notes/images/example.png)\n```\n\n### My Interactive Graph and Backlinks aren't up to date\nBy default, the `linkIndex.json` (which Quartz needs to generate the Interactive Graph and Backlinks) are not regenerated locally. To set that up, see the guide on [local editing](editing.md)\n\n### Can I use React/Vue/some other framework?\nNot out of the box. You could probably make it work by editing `/layouts/_default/single.html` but that's not what Quartz is designed to work with. 99% of things you are trying to do with those frameworks you can accomplish perfectly fine using just vanilla HTML/CSS/JS.\n\n## Still Stuck?\nQuartz isn't perfect! If you're still having troubles, file an issue in the GitHub repo with as much information as you can reasonably provide. Alternatively, you can message me on [Twitter](https://twitter.com/_jzhao) and I'll try to get back to you as soon as I can.\n\n🐛 [Submit an Issue](https://github.com/jackyzha0/quartz/issues)","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/notes/updating":{"title":"Updating","content":"\nHaven't updated Quartz in a while and want all the cool new optimizations? On Unix/Mac systems you can run the following command for a one-line update! This command will show you a log summary of all commits since you last updated, press `q` to acknowledge this. Then, it will show you each change in turn and press `y` to accept the patch or `n` to reject it. Usually you should press `y` for most of these unless it conflicts with existing changes you've made! 
\n\n```shell\nmake update\n```\n\nOr, if you don't want the interactive parts and just want to force update your local garden (this assumed that you are okay with some of your personalizations been overriden!)\n\n```shell\nmake update-force\n```\n\nOr, manually checkout the changes yourself.\n\n\u003e [!warning] Warning!\n\u003e\n\u003e If you customized the files in `data/`, or anything inside `layouts/`, your customization may be overwritten!\n\u003e Make sure you have a copy of these changes if you don't want to lose them.\n\n\n```shell\n# add Quartz as a remote host\ngit remote add upstream git@github.com:jackyzha0/quartz.git\n\n# index and fetch changes\ngit fetch upstream\ngit checkout -p upstream/hugo -- layouts .github Makefile assets/js assets/styles/base.scss assets/styles/darkmode.scss config.toml data \n```\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":[]},"/private/requirements/overview":{"title":"Logos Network Requirements Overview","content":"\nThis document describes the requirements of the Logos Network.\n\n\u003e Network sovereignty is an extension of the collective sovereignty of the individuals within. \n\n\u003e Meaningful participation in the network should be acheivable by affordable and accessible consumer grade hardware.\n\n\u003e Privacy by default. \n\n\u003e A given CiC should have the option to gracefully exit the network and operate on its own.\n\n","lastmodified":"2023-08-17T19:42:53.948430527Z","tags":["requirements"]},"/private/roadmap/consensus/candidates/carnot/FAQ":{"title":"Frequently Asked Questions","content":"\n## Network Requirements and Assumptions\n\n### What assumptions do we need Waku to fulfill? - Corey\n\u003e `Moh:` Waku needs to fill the following requirements, taken from the Carnot paper:\n\n\u003e **Definition 3** (Probabilistic Reliable Dissemination). _After the GST, and when the leader is correct, all the correct nodes deliver the proposal sent by the leader (w.h.p)._\n\n\u003e **Definition 4** (Probabilistic Fulfillment). _After the GST, and when the current and previous leaders are correct, the number of votes collected by teh current leader is $2c+1$ (w.h.p)._\n\n## Tradeoffs\n\n### I think the main clear disadvantage of such a scheme is the added latency of the multiple layers. - Alvaro\n\n\u003e `Moh:` The added latency will be O(log(n/C)), where C is the committee size. But I guess it will be hard to avoid it. Though it also depends on how fast the network layer (potentially Waku) propagats msgs and also on execution time of the transaction as well.\n\n\u003e `Alvaro:` Well IIUC the only latency we are introducing is directly proportional to the levels of subcommitee nesting (ie the log(n/C)), which is understandably the price to pay. We have to make sure though that what we gain by introducing this is really worth the extra cost vs the typical comittee formation via randao or perhaps VDFs\n\n\u003e `Moh:` Again the Typical committee formation with randao can reduce their wait time value to match our latency, but then it becomes vulnerable and fail if the network latency becomes greater than their slot interval. If they keep it too large it may not fail but becomes slow. We won't have that problem. If an adversary has the power to slow down the network then their liveness will fail, whereas we won't have that issue.\n\n## How would you compare Aptos and Carnot? - Alvaro\n\n\u003e `Moh:` It is variant of DiemBFT, Sui is based on Nahrwal, both cannot scale to more than few hunderd of nodes. 
That is why they achieve that low latency.\n\n\u003e `Alvaro:` Yes, so they need to select a committee of that size in order to operate at that latency. What's wrong with selecting a committee vs Carnot's solution? I'm asking this genuinely to understand, and because everyone will ask this question when we release.\n\n\u003e `Moh:` When you select a committee you have to wait for a time slot to make sure the result of consensus has propagated. Again, strong synchrony assumptions (slot time), formation of forks, and an increased PoS attack vector come into play.\nWithin a committee the protocol does not need a wait time, but for its results to be propagated (if scalability is to be achieved) either a wait time has to be added or signatures have to be collected from thousands of nodes.\n\n\u003e `Alvaro:` Can you elaborate?\n\n\u003e `Moh:` Ethereum (and any other protocol that runs consensus in a single committee selected from a large group of nodes) has a wait time so that the output of the consensus propagates to all honest nodes before the next committee is selected. Otherwise the next committee will fail, or only forks will be formed and the chain length won't increase. But since this wait time, as stated, increases latency and makes the protocol vulnerable, Ethereum wants to avoid it to achieve responsiveness. To avoid the wait time (i.e. add responsiveness), a protocol has to collect attestation signatures from 2/3rd of all nodes (not a single committee) to move to the second round (Carnot is already responsive). But aggregating and verifying thousands of signatures is expensive and time consuming. This is why they are working to improve BLS signatures. Instead, we have changed the consensus protocol in such a way that only a small number of signatures need to be aggregated and verified to achieve responsiveness and fast finality. We can further improve performance by using the improved BLS signatures.\n\n\u003e One cannot achieve fast finality while running consensus in a small committee, because attestation of a block within a single committee is not enough. This block can be averted if the leader of the next committee has not seen it. Therefore, there should be enough delay so that all honest nodes can see it. This is why we have this wait/slot time. Another issue is that a malicious leader from the next chosen committee can avert a block of an honest leader, thereby preventing honest leaders from getting rewards. If blocks of honest leaders are averted for a long time, the stake of malicious leaders will increase. Moreover, malicious leaders can delay blocks of honest nodes by making forks and averting them. Addressing these issues makes the protocol more complex, while still lacking fast finality.\n\n## Data Distribution\n\n### What failure rate of erasure-code transmission are we expecting? Basically, what are the EC coding parameters that we expect to be sending such that we have some failure rate of transmission? Has that been looked into? - Dmitriy\n\u003e `Moh:` This is a great question and it points to the tension between failure rate and overhead. We have briefly looked into this (today Marcin @madxor and I discussed such cases), but we haven’t thoroughly analyzed it. In our case, the rate of failure also depends on committee size. We are looking at failure probabilities of $10^{-3}$ to $10^{-6}$, and in this case the coding overhead can be somewhere between 200%-500% approximately. 
This means for a committee size of 500 (while expecting receipt of messages from 251 correct nodes), for a failure rate of $10^{-6}$ a single node has to send \u003e 6Mb of data for a 1Mb of actual data. Though 5x overhead is large, it still prevent us from sending/receiving 500 Mb of data in return for a failure probability of 1 proposal out of 1 million. From the protocol perspective, we can address EC failures in multiple ways: a: Since the root committee only forwards the coded chunks only when they have successfully rebuilt the block. This means the root committee can be contacted to download additional coded chunks to decode the block. b: We allow this failure and let the leader be replaced but since there is proof that the failure is due to the reason that a decoder failed to reconstruct the block, therefore, the leader cannot be punished (if we chose to employ punishment in PoS). \n\n### How much data should a given block be. Are there limits on this and if so, what are they and what do they depend on? - Dmitriy\n\u003e `Moh:` This question can be answered during simulations and experiments over links of different bandwidths and latencies. We will test the protocol performances with different block sizes. As we know increasing the block size results in increased throughput as well as latency. What is the most appropriate block size can be determined once we observe the tradeoff between throughput vs latency.\n\n## Signature Propagation\n\n### Who sends the signatures up from a given committee? Do that have any leadered power within the committee? - Tanguy\n\u003e `Moh:` Each node in a committee multicasts its vote to all members of the parent committee. Since the size of the vote is small the bit complexity will be low. Introducing a leader within each committee will create a single point of failure within each committee. This is why we avoid maintaining a leader within each committee\n\n## Network Scale\n\n### What is our expected minimum number of nodes within the network? - Dmitriy\n\u003e `Moh:` For a small number of nodes we can have just a single committee. But I am not sure how many nodes will join our network \n\n## Byzantine Behavior\n\n### Can we also consider a flavor that adds attestation/attribution to misbehaving nodes? That will come at a price but there might be a set of use cases which would like to have lower performance with strong attribution. Not saying that it must be part of the initial design, but can be think-through/added later. - Marcin\n\u003e `Moh:` Attestation to misbehaving nodes is part of this protocol. For example, if a node sends an incorrect vote or if a leader proposes an invalid transaction, then this proof will be shared with the network to punish the misbehaving nodes (Though currently this is not part of pseudocode). But it is not possible to reliably prove the attestation of not participation.\n\n\u003e `Marcin:` Great, and definitely, we cannot attest that a node was not participating - I was not suggesting that;). But we can also think about extending the attestation for lazy-participants case (if it’s not already part of the protocol).\n\n\u003e `Moh:` OK, thanks for the clarification 😁 . Of course we can have this feature to forward the proof of participation of successor committees. In the first version of Carnot we had this feature as a sliding window. One could choose the size of the window (in terms of tree levels) for which a node should forward the proof of participation. In the most recent version the size of sliding window is 0. 
And it is 1 for the root committee. It means root committee members have to forward the proof of participation of their child committee members. Since I was able to prove protocol correctness without forwarding the proofs so we avoid it. But it can be part of the protocol without any significant changes in the protocol\n\n\u003e If the proof scheme is efficient ( as the results you presented) in practice and the cost of creating and verifying proofs is not significant then actually adding proofs can be good. But not required.\n\n### Also, how do you reward online validators / punish offline ones if you can't prove at the block level that someone attested or not? - Tanguy\n\u003e `Moh:` This is very tricky and so far no one has done it right (to my knowledge). Current reward mechanism for attestation, favours fast nodes.This means if malicious nodes in the network are fast, they can increase their stake in the network faster than the honest nodes and eventually take control of the network. Or in the case of Ethereum a Byzantine leader can include signature of malicious nodes more frequently in the proof of attestation, hence malicious nodes will be rewarded more frequently. Also let me add that I don't have definite answer to your question currently, but I think by revising the protocol assumptions, incentive mechanism and using a game theoretical approach this problem can be resolved.\n\n\u003e An honest node should wait for a specific number of children votes (to make sure everyone is voting on the same proposal) before voting but does not need to provide any cryptographic proof. Though we build a threshold signature from root committee members and it’s children but not from the whole tree. As long as enough number of nodes follow the the protocol we should be fine. I am working on protocol proofs. Also I think bugs should be discovered during development and testing phase. Changing protocol to detect potential bug might not be a good practice.\n\n### doesn't having randomly distributed malicious nodes (say there is a 20%) increase the odds that over a third of a committee end up being from those malicious ones? It seems intuitive: since a 20% at the global scale is always \u003c1/3, but when randomly distributed there is always non-zero chance they end up in a single group, thus affecting liveness more and more the closer we get to that global 1/3. Consequently, if I'm understanding the algorithm correctly, it would have worse liveness guarantees that classical pBFT, say with a randomly-selected commitee from the total set. - Alvaro\n\n\u003e `Alexander:` We assume that fraction of malicious nodes is $1/4$ and given we chooses comm. sizes, which will depend on total number of nodes, appropriately this guarantees that with high probability we are below $1/3$ in each committee.\n\n\u003e `Alvaro:` ok, but then both the global guarantee is below that current \"standard\" of 1/3 of malicious nodes and even then we are talking about non-zero probabilities that a comm has the power to slow down consensus via requiring reformation of comms (is this right?)\n\n\u003e `Alexander:` This is the price we pay to improve scalability. Also these probabilities of failure can be very low.\n\n### What happens in Carnot when one committee is taken over by \u003e1/3 intra-comm byzantine nodes? - Alvaro\n\n\u003e `Moh:` When there is a failure the overlay is recalculated. 
By gradually increasing the fault tolerance by a small value, the probability of failure of a committee slightly increases but upon recalculating the correct overlay, inactive nodes that caused the failure of previous overlay (when no committee has more than 1/3 Byzantine nodes) will be slashed.\n\n\n\n## Synchronicity\n\n### How to guarantee synchronicity. In particular how to avoid that in a big network different nodes see a proposal with 2c+1 votes but different votes and thus different random seed - Giacomo\n\n\u003e `Moh:` The assumption is that there exists some known finite time bound Δ and a special event called GST (Global Stabilization Time) such that:\n\n\u003e The adversary must cause the GST event to eventually happen after some unknown finite time. Any message sent at time x must be delivered by time $\\delta + \\text{max}(x,GST)$. In the Partial synchrony model, the system behaves asynchronously till GST and synchronously after GST.\n\n\u003e Moreover, votes travel one level at a time from tree leaves to the tree root. We only need the proof of votes of root+child committees to conclude with a high probability that the majority of nodes have voted.\n\n### That's a timeout? How does this work exactly without timing assumptions? Trying to find this in the document -Alvaro\n\n\u003e `Moh:` Each committee only verifies the votes of its child committees. Once a verified 2/3rd votes of its child members, it then sends it vote to its parent. In this way each layer of the tree verifies the votes (attests) the layer below. Thus, a node does not have to collect and verify 2/3rd of all thousands of votes (as done in other responsive BFTs) but only from its child nodes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["Carnot","consensus"]},"/private/roadmap/consensus/candidates/carnot/overview":{"title":"Carnot Overview","content":"\nCarnot (formerly LogosBFT) is a Byzantine Fault Tolerant (BFT) [consensus](roadmap/consensus/index.md) candidate for the Nomos Network that utilizes Fountain Codes and a committees tree structure to optimize message propagation in the presence of a large number of nodes, while maintaining high througput and fast finality. More specifically, these are the research contributions in Carnot. To our knowledge, Carnot is the first consensus protocol that can achieve together all of these properties:\n\n1. Scalability: Carnot is highly scalable, scaling to thousands of nodes.\n2. Responsiveness: The ability of a protocol to operate with the speed of a wire but not a maximum delay (block delay, slot time, etc.) is called responsiveness. Responsiveness reduces latency and helps the Carnot achieve Fast Finality. Moreover, it improves Carnot's resilience against adversaries that can slow down network traffic. \n3. Fork avoidance: Carnot avoids the formation of forks in a happy path. Forks formation has the following adverse consequences that the Carnot avoids.\n 1. Wastage of resources on orphan blocks and reduced throughput with increased latency for transactions in orphan blocks\n 2. 
Increased attack vector on PoS as attackers can employ a strategy to force the network to accept their fork resulting in increased stake for adversaries.\n\n- [FAQ](FAQ.md): Here is a page that tracks various questions people have around Carnot.\n\n## Work Streams\n\n### Current State of the Art\nAn ongoing survey of the current state of the art around Consensus Mechanisms and their peripheral dependencies is being conducted by Tuanir, and can be found in the following WIP Overleaf document: \n- [WIP Consensus SoK](https://www.overleaf.com/project/633acc1acaa6ffe456d1ab1f)\n\n### Committee Tree Overlay\nThe basis of Carnot is dependent upon establishing an committee overlay tree structure for message distribution. \n\nAn overview video can be found in the following link: \n- [Carnot Overview by Moh during Offsite](https://drive.google.com/file/d/17L0JPgC0L1ejbjga7_6ZitBfHUe3VO11/view?usp=sharing)\n\nThe details of this are being worked on by Moh and Alexander and can be found in the following overleaf documents: \n- [Moh's draft](https://www.overleaf.com/project/6341fb4a3cf4f20f158afad3)\n- [Alexander's notes on the statistical properties of committees](https://www.overleaf.com/project/630c7e20e56998385e7d8416)\n- [Alexander's python code for computing committee sizes](https://github.com/AMozeika/committees)\n\nA simulation notebook is being worked on by Corey to investigate the properties of various tree overlay structures and estimate their practical performance:\n- [Corey's Overlay Jupyter Notebook](https://github.com/logos-co/scratch/tree/main/corpetty/committee_sim)\n\n#### Failure Recovery\nThere exists a timeout that triggers an overlay reconfiguration. Currently work is being done to calculate the probabilities of another failure based on a given percentage of byzantine nodes within the network. \n- [Recovery Failure Probabilities]() - LINK TO WORK HERE\n\n### Random Beacon\nA random beacon is required to choose a leader and establish a seed for defining the overlay tree. Marcin is working on the various avenues. His previous presentations can be found in the following presentation slides (in chronological order):\n- [Intro to Multiparty Random Beacons](https://cloud.logos.co/index.php/s/b39EmQrZRt5rrfL)\n- [Circles of Trust](https://cloud.logos.co/index.php/s/NXJZX8X8pHg6akw)\n- [Compact Certificates of Knowledge](https://cloud.logos.co/index.php/s/oSJ4ykR4A55QHkG)\n\n### Erasure Coding (LT Codes / Fountain Codes / Raptor Codes)\nIn order to reduce message complexity during propagation, we are investigating the use of Luby Transform (LT) codes, more specifically [Fountain Codes](https://en.wikipedia.org/wiki/Fountain_code), to break up the block to be propagated to validators and recombined by local peers within a committee. \n- [LT Code implementation in Rust](https://github.com/chrido/fountain) - unclear about legal status of LT or Raptor Codes, it is currently under investigation.\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","Carnot"]},"/private/roadmap/consensus/candidates/claro":{"title":"Claro: Consensus Candidate","content":"\n\n\n**Claro** (formerly Glacier) is a consensus candidate for the Logos network that aims to be an improvement to the Avalanche family of consensus protocols. \n\n\n### Implementations\nThe protocol has been implemented in multiple languages to facilitate learning and testing. 
The individual code repositories can be found in the following links:\n- Rust (reference)\n- Python\n- Common Lisp\n\n### Simulations/Experiments/Analysis\nIn order to test the performance of the protocol, and how it stacked up to the Avalanche family of protocols, we have performed a multitude of simulations and experiments under various assumptions. \n- [Alvaro's initial Python implementations and simulation code](https://github.com/status-im/consensus-models)\n\n### Specification\nCurrently the Claro consensus protocol is being drafted into a specification so that other implementations can be created. It's draft resides under [Vac](https://vac.dev) and can be tracked [here](https://github.com/vacp2p/rfc/pull/512/)\n\n### Additional Information\n\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","candidate","claro"]},"/private/roadmap/consensus/development/overview":{"title":"Development Work","content":"","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/development/prototypes":{"title":"Consensus Prototypes","content":"\nConsensus Prototypes is a collection of Rust implementations of the [Consensus Candidates](tags/candidates)\n\n## Tiny Node\n\n\n## Required Roles\n- Lead Developer (filled)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","development"]},"/private/roadmap/consensus/overview":{"title":"Consensus Work","content":"\nConsensus is the foundation of the network. It is how a group of peer-to-peer nodes understands how to agree on information in a distributed way, particuluarly in the presence of byzantine actors. \n\n## Consensus Roadmap\n### Consensus Candidates\n- [Carnot](private/roadmap/consensus/candidates/carnot/overview.md) - Carnot is the current leading consensus candidate for the Nomos network. It is designed to maximize efficiency of message dissemination while supoorting hundreds of thousands of full validators. It gets its name from the thermodynamic concept of the [Carnot Cycle](https://en.wikipedia.org/wiki/Carnot_cycle), which defines maximal efficiency of work from heat through iterative gas expansions and contractions. \n- [Claro](claro.md) - Claro is a variant of the Avalanche Snow family of protocols, designed to be more efficient at the decision making process by leveraging the concept of \"confidence\" across peer responses. \n\n\n### Theoretical Analysis\n- [snow-family](snow-family.md)\n\n### Development\n- [prototypes](prototypes.md)\n\n## Open Roles\n- [distributed-systems-researcher](distributed-systems-researcher.md)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus"]},"/private/roadmap/consensus/theory/overview":{"title":"Consensus Theory Work","content":"\nThis track of work is dedicated to creating theoretical models of distributed consensus in order to evaluate them from a mathematical standpoint. 
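As a toy illustration of the kind of behaviour this track models (not Claro or Avalanche themselves), here is a small Python sketch of a single Snow-style query round in which a node samples `k` peers and tracks a per-colour confidence counter; all names, parameters, and thresholds are illustrative assumptions rather than part of any specification.

```python
import random

def query_round(my_color, peer_colors, k=7, alpha=5, confidence=None):
    """One illustrative Snow-style round: sample k peers, and if one colour
    clears the alpha threshold, bump its confidence and possibly adopt it."""
    confidence = confidence if confidence is not None else {}
    sample = random.sample(peer_colors, k)
    counts = {}
    for c in sample:
        counts[c] = counts.get(c, 0) + 1
    winner, votes = max(counts.items(), key=lambda kv: kv[1])
    if votes >= alpha:
        confidence[winner] = confidence.get(winner, 0) + 1
        if confidence[winner] > confidence.get(my_color, 0):
            my_color = winner  # switch to the colour we are now more confident in
    return my_color, confidence

# Toy usage: 70% of observable peers currently prefer "blue".
peers = ["blue"] * 70 + ["red"] * 30
color, conf = query_round("red", peers)
print(color, conf)
```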
\n\n## Navigation\n- [Snow Family Analysis](snow-family.md)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory"]},"/private/roadmap/consensus/theory/snow-family":{"title":"Theoretical Analysis of the Snow Family of Consensus Protocols","content":"\nIn order to evaluate the properties of the Avalanche family of consensus protocols more rigorously than the original [whitepapers](), we work to create an analytical framework to explore and better understand the theoretical boundaries of the underlying protocols, and under what parameterizations they will break against a set of adversarial strategies.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["consensus","theory","snow"]},"/private/roadmap/networking/carnot-waku-specification":{"title":"A Specification proposal for using Waku for Carnot Consensus","content":"\n##### Definition Reference \n- $k$ - size of a given committee\n- $n_C$ - number of committees in the overlay, or nodes in the tree\n- $d$ - depth of the overlay tree\n- $n_d$ - number of committees at a given depth of the tree\n\n## Motivation\nIn #Carnot, an overlay is created to facilitate message distribution and voting aggregation. This document will focus on the differentiated channels of communication for message distribution. Whether or not voting aggregation and the subsequent traversal back up the tree can utilize the same channels will be investigated later. \n\nThe overlay is described as a binary tree of committees, where an individual in each committee propagates messages to an assigned node in each of its two child committees, until the leaf nodes have received enough information to reconstitute the proposal block. \n\nThis communication protocol will naturally form \"pools of information streams\" that people will need to listen to in order to do their assigned work:\n- inner committee communication\n- parent-child chain communication\n- initial leader distribution\n\n### **inner committee communication** \nAll members of a given committee will need to gossip with each other in order to re-form the initial proposal block.\n- This results in $n_C$ communication pools of size $k$.\n\n### **parent-child chain communication** \nThe formation of the committees and the lifecycle of a chunk of erasure-coded data form a number of \"parent-child\" chains. \n- If we completely minimize the communication between committees, this results in $k$ communication pools of size $n_C$.\n- It is not clear whether individual levels of the tree need to \"execute\" the message to their children, or whether the root committee can broadcast to everyone within its assigned parent-chain communication pool at the same time.\n- It is also unclear whether individual levels of the tree need to send independent messages to each of their children, or whether a unified communication pool can be leveraged at the tree level. This results in $d$ communication pools of size $n_d$. \n\n### **initial leader distribution**\nFor each proposal, a leader needs to distribute the erasure-coded proposal block to the root committee.\n- This results in a single communication pool of size $k(+1)$.\n- The $(+1)$ above is the leader, who could also be a part of the root committee. The leader changes with each block proposal, and we seek to minimize the time between leader selection and a round start. Thus, each node in the network must maintain a connection to every node in the root committee. 
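To make the pool counts above concrete, here is a small Python sketch that simply restates the bullet points under the assumption of a full binary overlay tree (root committee at level 0, depth `d`, committee size `k`); the depth convention and the example numbers are illustrative assumptions, not part of the specification.

```python
# Illustrative tally of the communication pools described above, assuming a
# full binary overlay tree with the root committee at level 0 and depth d.
def pool_summary(k: int, d: int) -> dict:
    n_c = 2 ** (d + 1) - 1  # total committees (tree nodes) in a full binary tree
    return {
        # (number of pools, size of each pool), restating the bullets above
        "inner_committee": (n_c, k),       # n_C pools of size k
        "parent_child_chains": (k, n_c),   # k pools of size n_C
        "per_level_broadcast": {level: 2 ** level for level in range(1, d + 1)},  # d pools of size n_d
        "leader_to_root": (1, k + 1),      # one pool of size k (+1 for the leader)
    }

# Example (illustrative numbers only): committees of 500 nodes, tree of depth 3.
print(pool_summary(k=500, d=3))
```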
\n\n## Proposal\nThis part of the document will attempt to propose using various aspects of Waku, to facilitate both the setup of the above-mentioned communication pools as well as encryption schemes to add a layer of privacy (and hopefully efficiency) to message distribution. \n\nWe seek to minimize the availability of data such that an individual has only the information to do his job and nothing more.\n\nWe also seek to minimize the amount of messages being passed such that eventually everyone can reconstruct the initial proposal block\n\n`???` for Waku-Relay, 6 connections is optimal, resulting in latency ???\n\n`???` Is it better to have multiple pubsub topics with a simple encryption scheme or a single one with a complex encryption scheme\n\nAs there seems to be a lot of dynamic change from one proposal to the next, I would expect [`noise`](https://vac.dev/wakuv2-noise) to be a quality candidate to facilitate the creation of secure ephemeral keys in the to-be proposed encryption scheme. \n\nIt is also of interest how [`contentTopics`](https://rfc.vac.dev/spec/23/) can be leveraged to optimize the communication pools. \n\n## Whiteboard diagram and notes\n![Whiteboard Diagram](images/Overlay-Communications-Brainstorm.png)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku","carnot","networking","consensus"]},"/private/roadmap/networking/overview":{"title":"P2P Networking Overview","content":"\nThis page summarizes the work around the P2P networking layer of the Nomos project.\n\n## Waku\n[Waku](https://waku.org) is an privacy-preserving, ephemeral, peer-to-peer (P2P) messaging suite of protocols which is developed under [Vac](https://vac.dev) and maintained/productionized by the [Logos Collective](https://logos.co). \n\nIt is hopeful that Nomos can leverage the work of the Waku project to provide the P2P networking layer and peripheral services associated with passing messages around the network. Below is a list of the associated work to investigate the use of Waku within the Nomos Project. \n\n### Scalability and Fault-Tolerance Studies\nCurrently, the amount of research and analysis of the scalability of Waku is not sufficient to give enough confidence that Waku can serve as the networking layer for the Nomos project. Thusly, it is our effort to push this analysis forward by investigating the various boundaries of scale for Waku. Below is a list of endeavors in this direction which we hope serves the broader community: \n- [Status' use of Waku study w/ Kurtosis](status-waku-kurtosis.md)\n- [Using Waku for Carnot Overlay](carnot-waku-specification.md)\n\n### Rust implementations\nWe have created and maintain a stop-gap solution to using Waku with the Rust programming language, which is wrapping the [go-waku](https://github.com/status-im/go-waku) library in Rust and publishing it as a crate. This library allows us to do tests with our [Tiny Node](roadmap/development/prototypes.md#Tiny-Node) implementation more quickly while also providing other projects in the ecosystem to leverage Waku within their Rust codebases more quickly. \n\nIt is desired that we implement a more robust and efficient Rust library for Waku, but this is a significant amount of work. 
\n\nLinks:\n- [Rust bindings to go-waku repo](https://github.com/waku-org/waku-rust-bindings)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","overview"]},"/private/roadmap/networking/status-network-agents":{"title":"Status Network Agents Breakdown","content":"\nThis page creates a model to describe the impact of the various clients within the Status ecosystem by describing their individual contribution to the messages within the Waku network they leverage. \n\nThis model will serve to create a realistic network topology while also informing the appropriate _dimensions of scale_ that are relevant to explore in the [Status Waku scalability study](status-waku-kurtosis.md).\n\nStatus has three main clients that users interface with (in increasing \"network weight\" order):\n- Status Web\n- Status Mobile\n- Status Desktop\n\nEach of these clients has differing (on average) resources available to it, and thus provides and consumes different Waku protocols and services within the Status network. Here we will detail their associated messaging impact on the network using the following model:\n\n```\nAgent\n - feature\n - protocol\n - contentTopic, messageType, payloadSize, frequency\n```\n\nBy describing all `Agents` and their associated feature list, we should be able to do the following:\n\n- Estimate how much impact an individual `Agent` has on the Status network per unit time\n- Create a realistic network topology and usage within a simulation framework (_e.g._ Kurtosis)\n- Facilitate a Status Specification of `Agents`\n- Set an example for future agent-based modeling and simulation work for the Waku protocol suite \n\n## Status Web\n\n## Status Mobile\n\n## Status Desktop\nStatus Desktop serves as the backbone for the Status Network, as the software runs on hardware that has more available resources, typically has more stable and robust network connections, and generally has a drastically lower churn (or none at all). This results in it running the most Waku protocols for longer periods of time, resulting in the heaviest usage of the Waku network w.r.t. messaging. \n\nHere is the model breakdown of its usage:\n```\nStatus Desktop\n - Prekey bundle broadcast\n - Account sync\n - Historical message delivery\n - Waku-Relay (answering message queries)\n - Message propagation\n - Waku-Relay\n - Waku-Lightpush (receiving)\n```","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["status","waku","scalability"]},"/private/roadmap/networking/status-waku-kurtosis":{"title":"Status' use of Waku - A Scalability Study","content":"\n[Status](https://status.im) is the largest consumer of the Waku protocol, leveraging it for their entire networking stack. Their upcoming release of Status Desktop and the associated Communities product will heavily push the limits of what Waku can do. As mentioned in the [Networking Overview](private/roadmap/networking/overview.md) page, rigorous scalability studies of Waku (v2) have yet to be conducted. \n\nWhile these studies most immediately benefit the Status product suite, it behooves the Nomos Project to assist, as the lessons learned immediately inform us of the limits of what the Waku protocol suite can handle and how that fits within our [Technical Requirements](private/requirements/overview.md).\n\nThis work has been kicked off as a partnership with the [Kurtosis](https://kurtosis.com) distributed systems development platform. 
It is our hope that the experience and accumen gained during this partnership and study will serve us in the future with respect to Nomos developme, and more broadly, all projects under the Logos Collective. \n\nAs such, here is an overview of the various resources towards this endeavor:\n- [Status Network Agent Breakdown](status-network-agents.md) - A document that describes the archetypal agents that participate in the Status Network and their associated Waku consumption.\n- [Wakurtosis repo](https://github.com/logos-co/wakurtosis) - A Kurtosis module to run scalability studies\n- [Waku Topology Test repo](https://github.com/logos-co/Waku-topology-test) - a Python script that facilitates setting up a reasonable network topology for the purpose of injecting the network configuration into the above Kurtosis repo\n- [Initial Vac forum post introducing this work](https://forum.vac.dev/t/waku-v2-scalability-studies/142)\n- [Waku Github Issue detailing work progression](https://github.com/waku-org/pm/issues/2)\n - this is also a place to maintain communications of progress\n- [Initial Waku V2 theoretical scalability study](https://vac.dev/waku-v1-v2-bandwidth-comparison)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["networking","scalability","waku"]},"/private/roadmap/virtual-machines/overview":{"title":"overview","content":"\n## Motivation\nLogos seeks to use a privacy-first virtual machine for transaction execution. We believe this can only be acheived through zero-knowledge. The majority of current work in the field focuses more towards the aggregation and subsequent verification of transactions. This leads us to explore the researching and development of a privacy-first virtual machine. \n\nLINK TO APPROPRIATE NETWORK REQUIREMENTS HERE\n\n#### Educational Resources\n- primer on Zero Knowledge Virtual Machines - [link](https://youtu.be/GRFPGJW0hic)\n\n### Implementations:\n- TinyRAM - link\n- CairoVM\n- zkSync\n- Hermes\n- [MIDEN](https://polygon.technology/solutions/polygon-miden/) (Polygon)\n- RISC-0\n\t- RISC-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General Building Blocks of a ZK-VM\n- CPU\n\t- modeled with \"execution trays\"\n- RAM\n\t- overhead to look out for\n\t\t- range checks\n\t\t- bitwise operations\n\t\t- hashing\n- Specialized circuits\n- Recursion\n\n## Approaches\n- zk-WASM\n- zk-EVM\n- RISC-0\n\t- RISK-0 Rust Starter Repository - [link](https://github.com/risc0/risc0-rust-starter)\n\t- targets RISC-V architecture\n\t- benefits:\n\t\t- a lot of languages already compile to RISC-V\n\t\t- https://youtu.be/2MXHgUGEsHs - Why use the RISC Zero zkVM?\n\t- negatives:\n\t\t- not optimized or EVM where most tooling exists currently\n\n## General workstreams\n- bytecode compiler\n- zero-knowledge circuit design\n- opcode architecture (???)\n- engineering\n- required proof system\n- control flow\n\t- MAST (as used in MIDEN)\n\n## Roles\n- [ZK Research Engineer](zero-knowledge-research-engineer.md)\n- Senior Rust Developer\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["virtual machines","zero knowledge"]},"/private/roles/distributed-systems-researcher":{"title":"Open Role: Distributed Systems Researcher","content":"\n\n## About Status\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. 
\n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. \n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception. Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n \n\n## Who are we?\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the Status Network. We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim to take the blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality\n\n## The job\n\n**Responsibilities:**\n- This role is dedicated to pure research\n- Primarily, ensuring that solutions are sound and diving deeper into their formal definition.\n- Additionally, he/she would be regularly going through papers, bringing new ideas and staying up-to-date.\n- Designing, specifying and verifying distributed systems by leveraging formal and experimental techniques.\n- Conducting theoretical and practical analysis of the performance of distributed systems.\n- Designing and analysing incentive systems.\n- Collaborating with both internal and external customers and the teams responsible for the actual implementation.\n- Researching new techniques for designing, analysing and implementing dependable distributed systems.\n- Publishing and presenting research results both internally and externally.\n\n \n**Ideally you will have:**\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]\n- Strong background in Computer Science and Math, or a related area.\n- Academic background (The ability to analyze, digest and improve the State of the Art in our fields of interest. 
Specifically, familiarity with formal proofs and/or the scientific method.)\n- Distributed Systems with a focus on Blockchain\n- Analysis of algorithms\n- Familiarity with Python and/or complex systems modeling software\n- Deep knowledge of algorithms (much more academic, such as have dealt with papers, moving from research to pragmatic implementation)\n- Experience in analysing the correctness and security of distributed systems.\n- Familiarity with the application of formal method techniques. \n- Comfortable with “reverse engineering” code in a number of languages including Java, Go, Rust, etc. Even if no experience in these languages, the ability to read and \"reverse engineer\" code of other projects is important.\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Capable of deep and creative thinking.\n- Passionate about blockchain technology in general.\n- Able to manage the uncertainties and ambiguities associated with working in a remote-first, distributed, decentralised environment.\n- A strong alignment to our principles: https://status.im/about/#our-principles\n\n\n**Bonus points:**\n- Experience working remotely. \n- Experience working for an open source organization. \n- TLA+/PRISM would be desirable.\n- PhD in Computer Science, Mathematics, or a related area. \n- Experience Multi-Party Computation and Zero-Knowledge Proofs\n- Track record of scientific publications.\n- Previous experience in remote or globally distributed teams.\n\n## Hiring process\n\nThe hiring process for this role will be:\n- Interview with our People Ops team\n- Interview with Alvaro (Team Lead)\n- Interview with Corey (Chief Security Officer)\n- Interview with Jarrad (Cofounder) or Daniel \n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n \n\n## Compensation\n\nWe are happy to pay salaries in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: https://people-ops.status.im/tag/perks/\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role"]},"/private/roles/rust-developer":{"title":"Rust Developer","content":"\n# Role: Rust Developer\nat Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is an organization building the tools and infrastructure for the advancement of a secure, private, and open web3. We have been completely distributed since inception. Our team is currently 100+ core contributors strong and welcomes a growing number of community members from all walks of life, scattered all around the globe. We care deeply about open source, and our organizational structure has a minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**About Logos**\n\nA group of Status Contributors is also involved in a new community lead project, called Logos, and this particular role will enable you to also focus on this project. Logos is a grassroots movement to provide trust-minimized, corruption-resistant governing services and social institutions to underserved citizens. 
\n\nLogos’ infrastructure will provide a base for the provisioning of the next generation of governing services and social institutions - paving the way to economic opportunities for those who need them most, whilst respecting basic human rights through the network’s design. You can read more about Logos [in this small handbook](https://github.com/acid-info/public-assets/blob/master/logos-manual.pdf), written for mindful readers like yourself.\n\n**Who are we?**\n\nWe are the Blockchain Infrastructure Team, and we are building the foundation used by other projects at the [Status Network](https://statusnetwork.com/). We are researching consensus algorithms, Multi-Party Computation techniques, ZKPs and other cutting-edge solutions with the aim of taking blockchain technology to the next level of security, decentralization and scalability for a wide range of use cases. We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing -and improving- the state of the art of blockchain technologies, as well as turning it into a reality.\n\n**Responsibilities:**\n\n- Development and maintenance of internal Rust libraries\n- 1st month: get comfortable with the dev framework and the simulation app; potentially improve the Python lib\n- 2nd-3rd month: start development of prototype node services\n\n**Ideally you will have:**\n\n- “Extensive” Rust experience (Async programming is a must) \n Ideally you have some GitHub projects to show\n- Experience with Python\n- Strong competency in developing and maintaining complex libraries or applications\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles) \n \n\n**Bonus points if**\n\n-  Comfortable working remotely and asynchronously\n-  Experience working for an open source organization.  \n-  Peer-to-peer or networking experience\n\n_[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role!]_\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)\n\n**Hiring Process** \n\nThe hiring process for this role will be:\n\n1. Interview with Maya (People Ops team)\n2. Interview with Corey (Logos Program Owner)\n3. Interview with Daniel (Engineering Lead)\n4. Interview with Jarrad (Cofounder)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["role","engineering","rust"]},"/private/roles/zero-knowledge-research-engineer":{"title":"Zero Knowledge Research Engineer","content":"at Status\n\nRemote, Worldwide\n\n**About Status**\n\nStatus is building the tools and infrastructure for the advancement of a secure, private, and open web3. \n\nWith the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.\n\nAs an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 
\n\nAs a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of DApps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol (a continuation of Whisper).\n\nAs a team, Status has been completely distributed since inception.  Our team is currently 100+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. \n\nWe care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.\n\n**Who are we**\n\n[Vac](http://vac.dev/) **builds** [public good](https://en.wikipedia.org/wiki/Public_good) protocols for the decentralized web.\n\nWe do applied research based on which we build protocols, libraries and publications. Custodians of protocols that reflect [a set of principles](http://vac.dev/principles) - liberty, privacy, etc.\n\nYou can see a sample of some of our work here: [Vac, Waku v2 and Ethereum Messaging](https://vac.dev/waku-v2-ethereum-messaging), [Privacy-preserving p2p economic spam protection in Waku v2](https://vac.dev/rln-relay), [Waku v2 RFC](https://rfc.vac.dev/spec/10/). Our attitude towards ZK: [Vac \u003c3 ZK](https://forum.vac.dev/t/vac-3-zk/97).\n\n**The role**\n\nThis role will be part of a new team that will make a provable and private WASM engine that runs everywhere. As a research engineer, you will be responsible for researching, designing, analyzing and implementing circuits that allow for proving private computation of execution in WASM. This includes having a deep understanding of relevant ZK proof systems and tooling (zk-SNARK, Circom, Plonk/Halo 2, zk-STARK, etc), as well as different architectures (zk-EVM Community Effort, Polygon Hermez and similar) and their trade-offs. You will collaborate with the Vac Research team, and work with requirements from our new Logos program. As one of the first hires of a greenfield project, you are expected to take on significant responsibility,  while collaborating with other research engineers, including compiler engineers and senior Rust engineers. 
\n \n\n**Key responsibilities** \n\n- Research, analyze and design proof systems and architectures for private computation\n- Design and implement zero-knowledge circuits in Rust, staying familiar with and adapting to evolving research needs\n- Write specifications and communicate research findings through write-ups\n- Break down complex problems, and know what can and what can’t be dealt with later\n- Perform security analysis, measure the performance of, and debug circuits\n\n**You ideally will have**\n\n- Very strong academic or engineering background (PhD-level or equivalent in industry); relevant research experience\n- Experience with low level/strongly typed languages (C/C++/Go/Rust or Java/C#)\n- Experience with Open Source software\n- Deep understanding of Zero-Knowledge proof systems (zk-SNARK, circom, Plonk/Halo2, zk-STARK), elliptic curve cryptography, and circuit design\n- Keen communicator, eager to share your work in a wide variety of contexts, like internal and public presentations, blog posts and academic papers.\n- Experience in, and passion for, blockchain technology.\n- A strong alignment to our principles: [https://status.im/about/#our-principles](https://status.im/about/#our-principles)\n\n**Bonus points if** \n\n- Experience in provable and/or private computation (zkEVM, other ZK VM)\n- Experience with Rust Zero Knowledge tooling\n- Experience with WebAssembly (WASM)\n\n[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter].\n\n**Hiring process** \n\nThe hiring process for this role will be:\n\n1. Interview with Angel/Maya from our Talent team\n2. Interview with team member from the Vac team\n3. Pair programming task with the Vac team\n4. Interview with Oskar, the Vac team lead\n5. Interview with Jacek, Program lead\n\nThe steps may change along the way if we see it makes sense to adapt the interview stages, so please consider the above as a guideline.\n\n**Compensation**\n\nWe are happy to pay in either 100% fiat or any mix of fiat and/or crypto. For more information regarding benefits at Status: [https://people-ops.status.im/tag/perks/](https://people-ops.status.im/tag/perks/)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["engineering","role","zero knowledge"]},"/roadmap/acid/milestones-overview":{"title":"Comms Milestones Overview","content":"\n- [Comms Roadmap](https://www.notion.so/eb0629444f0a431b85f79c569e1ca91b?v=76acbc1631d4479cbcac04eb08138c19)\n- [Comms Projects](https://www.notion.so/b9a44ea08d2a4d2aaa9e51c19b476451?v=f4f6184e49854fe98d61ade0bf02200d)\n- [Comms planner deadlines](https://www.notion.so/2585646d01b24b5fbc79150e1aa92347?v=feae1d82810849169b06a12c849d8088)","lastmodified":"2023-08-21T15:49:54.901241828Z","tags":["milestones"]},"/roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","content":"\n## Leads roundup - acid\n\n**Al / Comms**\n\n- Status app relaunch comms campaign plan in the works. Approx. 
date for launch 31.08.\n- Logos comms + growth plan post launch is next up TBD.\n- Will be waiting for specs for data room, raise etc.\n- Hires: split the role for content studio to be more realistic in getting top level talent.\n\n**Matt / Copy**\n\n- Initiative updating old documentation like CC guide to reflect broader scope of BUs\n- Brand guidelines/ modes of presentation are in process\n- Wikipedia entry on network states and virtual states is live on \n\n**Eddy / Digital Comms**\n\n- Logos Discord will be completed by EOD.\n- Codex Discord will be done tomorrow.\n - LPE rollout plan, currently working on it, will be ready EOW\n- Podcast rollout needs some\n- Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\n**Amir / Studio**\n\n- Started execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\n- Hires: still looking for 3 positions with main focus on developer side. \n\n**Jonny / Podcast**\n\n- Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\n- First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.\n\n**Louisa / Events**\n\n- Global strategy paper for wider comms plan.\n- Template for processes and executions when preparing events.\n- Decision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\n - Seoul Q4 hackathon is already in the works. Needs bounty planning.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","content":"\n## **Top level priorities:**\n\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\n\n## **Movement Building:**\n\n- Logos collective comms plan skeleton ready - will be applied for all BUs as next step\n- Goal is to have plan + overview to set realistic KPIs and expectations\n- Discord Server update on various views\n- Status relaunch comms plan is ready for input from John et al.\n- Reach out to BUs for needs and deliverables\n\n## **TD Studio**\n\nFull focus on LPE:\n- On track, target of end of august\n- review of options, more diverse landscape of content\n- Episodes page proposals\n- Players in progress\n- refactoring from prev code base\n- structure of content ready in GDrive\n\n## **Copy**\n\n- Content around LPE\n- Content for podcast launches\n- Status launch - content requirements to receive\n- Organization of doc sites review\n- TBD what type of content and how the generation workflows will look like\n\n## **Podcast**\n\n- Good state in editing and producing the shows\n- First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included. \n- LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n- 3 recorded for HIO, motion graphics in progress\n- First E2E podcast ready in 2 weeks for LPE\n\n## **DC Studio**\n\n- Brand guidelines for HiO are ready and set. 
Thanks `Shmeda`!\n- Logos State branding assets are being developed\n- Presentation templates update\n\n## **Events**\n\n- Network State event probably in Istanbul in November re: Devconnect will confirm shortly.\n- Program elements and speakers are top priority\n- Hackathon in Seoul in Q1 2024 - late Febuary probably\n- Jarrad will be speaking at HCPP and EthRome\n- Global event strategy written and in review\n- Lou presented social media and event KPIs on Paris event\n\n## **CRM \u0026 Marketing tool**\n\n- Get feedback from stakeholders and users\n- PM implementation to be planned (+- 3 month time TBD) with working group\n- LPE KPI: Collecting email addresses of relevant people\n- Careful on how we manage and use data, important for BizDev\n- Careful on which segments of the project to manage using the CRM as it can be very off brand","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["acid-updates"]},"/roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","content":"\n## Milestones\n- [Zenhub Tracker](https://app.zenhub.com/workspaces/engineering-62cee4c7a335690012f826fa/roadmap)\n- [Miro Tracker](https://miro.com/app/board/uXjVOtZ40xI=/?share_link_id=33106977104)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones-overview"]},"/roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","content":"\n## Codex update 07/12/2023 to 07/21/2023\n\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc...\n\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing \u0026 infra related work going on.\n\nWe're also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\n\n### DevOps/Infrastructure:\n\n- Adopted nim-codex Docker builds for Dist Tests.\n- Ordered Dedicated node on Hetzner.\n- Configured Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Created Geth and Prometheus Docker images for Dist-Tests.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Set up Ingress Controller in Dist-Tests cluster.\n\n### Testing:\n\n- Set up deployer to gather metrics.\n- Debugging and identifying potential deadlock in the Codex client.\n- Added metrics, built image, and ran tests.\n- Updated dist-test log for Kibana compatibility.\n- Ran dist-tests on a new master image.\n- Debugging continuous tests.\n\n### Development:\n\n- Worked on codex-dht nimble updates and fixing key format issue.\n- Updated CI and split Windows CI tests to run on two CI machines.\n- Continued updating dependencies in codex-dht.\n- Fixed decoding large manifests ([PR #479](https://github.com/codex-storage/nim-codex/pull/497)).\n- Explored the existing implementation of NAT Traversal techniques in `nim-libp2p`.\n\n### Research\n\n- Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n - https://eprint.iacr.org/2021/1500.pdf\n - https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html\n - https://eprint.iacr.org/2021/1544.pdf\n- Onboarding Balázs as our ZK researcher/engineer\n- Continued 
research in DAS related topics\n - Running simulation on newly setup infrastructure\n- Devised a new direction to reduce metadata overhead and enable remote verification https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n- Looked into NAT Traversal ([issue #166](https://github.com/codex-storage/nim-codex/issues/166)).\n\n### Cross-functional (Combination of DevOps/Testing/Development):\n\n- Fixed discovery related issues.\n- Planned Codex Demo update for the Logos event and prepared environment for the demo.\n- Described requirements for Dist Tests logs format.\n- Configured new Logs shipper and Grafana in Dist-Tests cluster.\n- Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.\n- Hetzner Dedicated server was configured.\n- Set up Hetzner StorageBox for local backup on Dedicated server.\n- Configured new Logs shipper in Dist-Tests cluster.\n- Setup Grafana in Dist-Tests cluster.\n- Created a separate codex-contracts-eth Docker image for Dist-Tests.\n- Setup Ingress Controller in Dist-Tests cluster.\n\n---\n\n#### Conversations\n1. zk_id _—_ 07/24/2023 11:59 AM\n\u003e \n\u003e We've explored VDI for rollups ourselves in the last week, curious to know your thoughts\n2. dryajov _—_ 07/25/2023 1:28 PM\n\u003e \n\u003e It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it's definitely worth digging into. But I'm not sure what exactly you're interested in, in the context of rollups...\n1. zk_id _—_ 07/25/2023 3:28 PM\n \n The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n \n2. dryajov _—_ 07/25/2023 8:31 PM\n \n \u003e I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn't need to do that for the agreement of the dispersal.\n \n Yeah, great question. What follows is strictly IMO, as I haven't seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n \n - (A)VID - **dispersing** and storing data in a verifiable manner\n - Sampling - verifying already **dispersed** data\n \n tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. 
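\n \n A rough toy sketch of that intuition, assuming the erasure coding is such that at least half of the extended data must be withheld to prevent reconstruction: a uniformly random sample then hits withheld data with probability at least 1/2, so k independent samples all miss the withholding with probability at most (1/2)^k.\n \n ```python\n # Toy sketch under the assumption above (not the Codex implementation):\n # probability that a light client performing k uniform random samples\n # fails to notice an unavailable block, if a fraction 'withheld' of the\n # extended data must be hidden to prevent reconstruction.\n def miss_probability(k, withheld=0.5):\n     return (1.0 - withheld) ** k\n \n for k in (10, 20, 30):\n     print(k, miss_probability(k))  # roughly 1e-3, 1e-6, 1e-9\n ```\n 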
More details are here - [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\") ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network from following the chain?\n \n Dankrad Feist\n \n [Data availability checks](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html)\n \n Primer on data availability checks\n \n3. _[_8:31 PM_]_\n \n ## Dealing with dishonest majorities\n \n This is easy if all the data is downloaded by all nodes all the time, but we're trying to avoid just that. But lets assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can't, because proving data (un)availability isn't a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\") So, if there isn't much that can be done by detecting that a block isn't available, what good is it for? Well nodes can still avoid following the unavailable chain and thus be tricked by a dishonest majority. However, simply attesting that data has been publishing is not enough to prevent a dishonest majority from attacking the network. (edited)\n \n4. 
dryajov _—_ 07/25/2023 9:06 PM\n \n To complement, the relevant quote from [https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding](https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding \"https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding\"), is:\n \n \u003e Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (\"fisherman\") has the ability to \"raise the alarm\" about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n \n The relevant quote from from [https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html \"https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html\"), is:\n \n \u003e There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n \n Both articles are a bit old, but the intuitions still hold.\n \n\nJuly 26, 2023\n\n6. zk_id _—_ 07/26/2023 10:42 AM\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n7. _[_10:45 AM_]_\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n8. zk_id\n \n Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it's not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n \n ### dryajov _—_ 07/26/2023 4:42 PM\n \n Great! Glad to help anytime \n \n9. 
zk_id\n \n The dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n \n dryajov _—_ 07/26/2023 4:43 PM\n \n Yes, I'd argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n \n10. _[_4:46 PM_]_\n \n Btw, there is probably more we can share/compare notes on in this problem space, we're looking at similar things, perhaps from a slightly different perspective in Codex's case, but the work done on DAS with the EF directly is probably very relevant for you as well \n \n\nJuly 27, 2023\n\n12. zk_id _—_ 07/27/2023 3:05 AM\n \n I would love to. Do you have those notes somewhere?\n \n13. zk_id _—_ 07/27/2023 4:01 AM\n \n all the links you have, anything, would be useful\n \n14. zk_id\n \n I would love to. Do you have those notes somewhere?\n \n dryajov _—_ 07/27/2023 4:50 PM\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n\nJuly 28, 2023\n\n16. zk_id _—_ 07/28/2023 5:47 AM\n \n Would love to see anything that is possible\n \n17. _[_5:47 AM_]_\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n18. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n dryajov _—_ 07/28/2023 4:07 PM\n \n Yes, we're also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n \n19. zk_id\n \n Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n \n bkomuves _—_ 07/28/2023 4:44 PM\n \n my current view (it's changing pretty often :) is that there is tension between:\n \n - commitment cost\n - proof cost\n - and verification cost\n \n the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n\nJuly 29, 2023\n\n21. bkomuves\n \n my current view (it's changing pretty often :) is that there is tension between: \n \n - commitment cost\n - proof cost\n - and verification cost\n \n  the holy grail which is the best for all of them doesn't seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what's possible, there are external restrictions)\n \n zk_id _—_ 07/29/2023 4:23 AM\n \n I agree. That's also my understanding (although surely much more superficial).\n \n22. 
_[_4:24 AM_]_\n \n There is also the dimension of computation vs size cost\n \n23. _[_4:25 AM_]_\n \n ie the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale confortably to 10k nodes.\n \n24. _[_4:29 AM_]_\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n\nAugust 1, 2023\n\n26. dryajov\n \n A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n \n Leobago _—_ 08/01/2023 1:13 PM\n \n Note much public write-ups yet. You can find some content here:\n \n - [https://blog.codex.storage/data-availability-sampling/](https://blog.codex.storage/data-availability-sampling/ \"https://blog.codex.storage/data-availability-sampling/\")\n \n - [https://github.com/codex-storage/das-research](https://github.com/codex-storage/das-research \"https://github.com/codex-storage/das-research\")\n \n \n We also have a few Jupiter notebooks but they are not public yet. As soon as that content is out we can let you know ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n Codex Storage Blog\n \n [Data Availability Sampling](https://blog.codex.storage/data-availability-sampling/)\n \n The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until\n \n GitHub\n \n [GitHub - codex-storage/das-research: This repository hosts all the ...](https://github.com/codex-storage/das-research)\n \n This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora...\n \n [](https://opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research)\n \n ![GitHub - codex-storage/das-research: This repository hosts all the ...](https://images-ext-2.discordapp.net/external/DxXI-YBkzTrPfx_p6_kVpJzvVe6Ix6DrNxgrCbcsjxo/https/opengraph.githubassets.com/39769464ebae80ca62c111bf2acb6af95fde1b9dc6e3c5a9eb56316ea363e3d8/codex-storage/das-research?width=400\u0026height=200)\n \n27. zk_id\n \n So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n \n - Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. 
The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\n - If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don't think we will pursue this, but we will have to if this scheme doesn't scale with the first option.\n \n dryajov _—_ 08/01/2023 1:55 PM\n \n This might interest you as well - [https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a \"https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a\")\n \n Medium\n \n [Combining KZG and erasure coding](https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a)\n \n The Hitchhiker’s Guide to Subspace  — Episode II\n \n [](https://miro.medium.com/v2/resize:fit:1200/0*KGb5QHFQEd0cvPeP.png)\n \n ![Combining KZG and erasure coding](https://images-ext-2.discordapp.net/external/LkoJxMEskKGMwVs8XTPVQEEu0senjEQf42taOjAYu0k/https/miro.medium.com/v2/resize%3Afit%3A1200/0%2AKGb5QHFQEd0cvPeP.png?width=400\u0026height=200)\n \n28. _[_1:56 PM_]_\n \n This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recoment reading the first article of the series which it also links to\n \n29. zk_id _—_ 08/01/2023 3:04 PM\n \n Thanks @dryajov @Leobago ! Much appreciated!\n \n30. _[_3:05 PM_]_\n \n Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I'm tackling starting today...\n \n31. zk_id _—_ 08/01/2023 6:34 PM\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n32. zk_id\n \n @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way so I visualize the results?\n \n Leobago _—_ 08/01/2023 6:36 PM\n \n Yes, checkout the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures ![🙂](https://discord.com/assets/da3651e59d6006dfa5fa07ec3102d1f3.svg)\n \n33. _[_6:37 PM_]_\n \n You might find also some bugs here and there on that branch ![😅](https://discord.com/assets/b45af785b0e648fe2fb7e318a6b8010c.svg)\n \n34. 
zk_id _—_ 08/01/2023 7:44 PM\n \n Thanks!","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","content":"\n# Codex update Aug 1st\n\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md\n - Work break down and review for Ben and Tomasz (epic coming up)\n - This is required to integrate the proving system\n\n### Milestone: Block discovery and retrieval\n\n- Some initial work break down and milestones here - https://docs.google.com/document/d/1hnYWLvFDgqIYN8Vf9Nf5MZw04L2Lxc9VxaCXmp9Jb3Y/edit\n - Initial analysis of block discovery - https://rpubs.com/giuliano_mega/1067876\n - Initial block discovery simulator - https://gmega.shinyapps.io/block-discovery-sim/\n\n### Milestone: Distributed Client Testing\n\n- Lots of work around log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - This is a first try of running against an L2\n - Mostly done, waiting on related fixes to land before merge - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Reservations and slot management\n\n- Lots of work around slot reservation and queuing https://github.com/codex-storage/nim-codex/pull/455\n\n## Remote auditing\n\n### Milestone: Implement Poseidon2\n\n- First pass at an implementation by Balazs\n - private repo, but can give access if anyone is interested\n\n### Milestone: Refine proving system\n\n- Lots of thinking around storage proofs and proving systems\n - private repo, but can give access if anyone is interested\n\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator.\n- Implemented logical error-rates and delays to interactions between DHT clients.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","content":"\n\n# Codex update August 11\n\n---\n## Client\n\n### Milestone: Merkelizing block data\n\n- Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504\n- Work on persisting/serializing Merkle Tree is underway, PR upcoming\n\n### Milestone: Block discovery and retrieval\n\n- Continued analysis of block discovery and retrieval - https://hackmd.io/_KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n - Reviewing papers on peer sampling and related topics\n - [Wormhole Peer Sampling paper](http://publicatio.bibl.u-szeged.hu/3895/1/p2p13.pdf)\n - [Smoothcache](https://dl.acm.org/doi/10.1145/2713168.2713182)\n- Starting work on simulations based on the above work\n\n### Milestone: Distributed Client Testing\n\n- Continuing work on log collection/analysis and monitoring\n - Details here https://github.com/codex-storage/cs-codex-dist-tests/pull/41\n - More related issues/PRs:\n - https://github.com/codex-storage/infra-codex/pull/20\n- Testing and debugging Codex in continuous testing environment\n - Debugging continuous tests [cs-codex-dist-tests/pull/44](https://github.com/codex-storage/cs-codex-dist-tests/pull/44)\n - pod labeling [cs-codex-dist-tests/issues/39](https://github.com/codex-storage/cs-codex-dist-tests/issues/39)\n\n---\n## Infra\n\n### Milestone: Kubernetes Configuration and Management\n- Move Dist-Tests cluster to OVH and 
define naming conventions\n- Configure Ingress Controller for Kibana/Grafana\n- **Create documentation for Kubernetes management**\n- **Configure Dist/Continuous-Tests Pods logs shipping**\n\n### Milestone: Continuous Testing and Labeling\n- Watch the Continuous tests demo\n- Implement and configure Dist-Tests labeling\n- Set up logs shipping based on labels\n- Improve Docker workflows and add 'latest' tag\n\n### Milestone: CI/CD and Synchronization\n- Set up synchronization by codex-storage\n- Configure Codex Storage and Demo CI/CD environments\n\n---\n## Marketplace\n\n### Milestone: L2\n\n- Taiko L2 integration\n - Done but merge is blocked by a few issues - https://github.com/codex-storage/nim-codex/pull/483\n\n### Milestone: Marketplace Sales\n\n- Lots of cleanup and refactoring\n - Finished refactoring state machine PR [link](https://github.com/codex-storage/nim-codex/pull/469)\n - Added support for loading node's slots during Sale's module start [link](https://github.com/codex-storage/nim-codex/pull/510)\n\n---\n## DAS\n\n### Milestone: DHT simulations\n\n- Implementing a DHT in Python for the DAS simulator - https://github.com/cortze/py-dht.\n\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["codex-updates"]},"/roadmap/innovation_lab/milestones-overview":{"title":"Innovation Lab Milestones Overview","content":"\niLab Milestones can be found on the [Notion Page](https://www.notion.so/Logos-Innovation-Lab-dcff7b7a984b4f9e946f540c16434dc9?pvs=4)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","content":"\n**Logos Lab** 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\n**Milestone**: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\n\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\n\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\n\n**Next milestone**: group chat support\n\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nLink to Payggy design files:\nhttps://scene.zeplin.io/project/64ae9e965652632169060c7d\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/UtVHf2EU\n\n--- \n\n#### Conversation\n\n1. petty _—_ 07/15/2023 5:49 AM\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n2. petty\n \n the `waku-objects` repo is empty. Where is the code storing that part vs the playground that is using them?\n \n3. 
attila🍀 _—_ 07/15/2023 6:18 AM\n \n at the moment most of the code is in the `waku-objects-playground` repo; later we may split it into several repos. Here is the link: [https://github.com/logos-innovation-lab/waku-objects-playground](https://github.com/logos-innovation-lab/waku-objects-playground \"https://github.com/logos-innovation-lab/waku-objects-playground\")","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","content":"\n**Logos Lab** 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite. \n\nStill, a lot of progress was made and the team released the first version of a color system in the form of an npm package, which lets users choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\n\n**Milestone**: group chat support\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.\n\nDeployed version of the main branch:\nhttps://waku-objects-playground.vercel.app/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nGrayscale design:\nhttps://grayscale.design/\n\nLuminance package on npm:\nhttps://www.npmjs.com/package/@waku-objects/luminance\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/ZMU4yyWG\n\n--- \n\n### Conversation\n\n1. fryorcraken _—_ Yesterday at 10:58 PM\n \n \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n \n While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n\nAugust 3, 2023\n\n2. fryorcraken\n \n \u003e \u003e There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. 
We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n \n3. attila🍀 _—_ Today at 4:21 AM\n \n This is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the `status-js-api` project but that still uses whisper and the repo recommends using `js-waku` instead 🙂 [https://github.com/status-im/status-js-api](https://github.com/status-im/status-js-api \"https://github.com/status-im/status-js-api\") I also found the `56/STATUS-COMMUNITIES` spec: [https://rfc.vac.dev/spec/56/](https://rfc.vac.dev/spec/56/ \"https://rfc.vac.dev/spec/56/\") It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n \n4. fryorcraken _—_ Today at 5:32 AM\n \n The repo is status-im/status-web\n \n5. _[_5:33 AM_]_\n \n Spec is [https://rfc.vac.dev/spec/55/](https://rfc.vac.dev/spec/55/ \"https://rfc.vac.dev/spec/55/\")\n \n6. fryorcraken\n \n The repo is status-im/status-web\n \n7. attila🍀 _—_ Today at 6:05 AM\n \n As constructive feedback I can tell you that it is not trivial to find it and use it in other projects. It is presented as a React component without documentation, and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app. It seems to be using this package: [https://www.npmjs.com/package/@status-im/js](https://www.npmjs.com/package/@status-im/js \"https://www.npmjs.com/package/@status-im/js\") which also does not have documentation. I assume that package is built from this: [https://github.com/status-im/status-web/tree/main/packages/status-js](https://github.com/status-im/status-web/tree/main/packages/status-js \"https://github.com/status-im/status-web/tree/main/packages/status-js\") This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what are the requirements and high level architecture (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort in the documentation and the presentation as well and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["ilab-updates"]},"/roadmap/innovation_lab/updates/2023-08-11":{"title":"2023-08-11 Innovation Lab weekly","content":"\n\n# **Logos Lab** 11th of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\n\nWe merged the group chat but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from a product perspective. We spent the bigger part of the week fixing these. We also registered a new domain, wakuplay.im, where the latest version is deployed. 
It uses the Gnosis chain for transactions and currently the xDai and Gno tokens are supported, but it is easy to add other ERC-20 tokens now.\n\n**Next milestone**: Splitter Waku Object supporting group chats and smart contracts\n\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementation has started.\n\n**Next milestone**: Basic Waku Objects website\n\nWork has started toward a structure for the website, and the content is shaping up nicely. Implementation has started on it as well.\n\nDeployed version of the main branch:\nhttps://www.wakuplay.im/\n\nMain development repo:\nhttps://github.com/logos-innovation-lab/waku-objects-playground\n\nContact:\nYou can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["ilab-updates"]},"/roadmap/nomos/milestones-overview":{"title":"Nomos Milestones Overview","content":"\n[Milestones Overview Notion Page](https://www.notion.so/ec57b205d4b443aeb43ee74ecc91c701?v=e782d519939f449c974e53fa3ab6978c)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","content":"\n**Research**\n\n- Milestone 1: Understanding Data Availability (DA) Problem\n - High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\n - Explored the necessity and key challenges associated with DA.\n - In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\n - **Blocker:** The experimental tests for our specific EC scheme are pending, which is blocking progress on making a final decision on KZG + commitments for our architecture.\n- Milestone 2: Privacy for Proof of Stake (PoS)\n - Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\n - Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.\n - Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\n**Development**\n\n- Milestone 1: Mixnet and Networking\n - Initiated integration of libp2p to be used as the full node's backend, planning to complete in the next phase.\n - Began planning the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\n- Milestone 2: Simulation Application\n - Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links:\n - Carnot Simulator pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/carnot_simulation_psuedocode.py)\n - Test pseudocode (https://github.com/logos-co/nomos-specs/blob/Carnot-Simulation/carnot/test_carnot_simulation.py)\n - Description of the simulation (https://www.notion.so/Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n - Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. 
The corresponding pull requests can be accessed here:\n - Simulation network fix (https://github.com/logos-co/nomos-node/pull/262)\n - Vote tally fix (https://github.com/logos-co/nomos-node/pull/268)\n - Increased run duration of integration tests (https://github.com/logos-co/nomos-node/pull/263)\n - Warding improvements (https://github.com/logos-co/nomos-node/pull/269)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","content":"\n**Nomos 31st July**\n\n[Network implementation and Mixnet]:\n\nResearch\n- Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\n- Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\n- Fixes were made on the Overlay interface.\n- Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\n- Link to libp2p PRs: https://github.com/logos-co/nomos-node/pull/278, https://github.com/logos-co/nomos-node/pull/279, https://github.com/logos-co/nomos-node/pull/280, https://github.com/logos-co/nomos-node/pull/281\n- Started working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\n\nResearch\n- Discussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\n- Reviews on the PPoS proposal were done.\n- A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\n- Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\n\nResearch\n- Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\n\n**Development**\n\n- Improved simulation application to meet test scale requirements (https://github.com/logos-co/nomos-node/pull/274).\n- Created a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\n\nResearch\n- Conducted an analysis of stored data \"degradation\" problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\n- Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","content":"\nNomos weekly report\n================\n\n### Network implementation and Mixnet:\n#### Research\n- Researched the Nym mixnet architecture in depth in order to design our prototype architecture.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1661386628)\n- Discussions about how to manage the mixnet topology.\n (Link: https://github.com/logos-co/nomos-node/issues/273#issuecomment-1665101243)\n#### Development\n- Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n (Link: https://github.com/logos-co/nomos-node/pull/288)\n- Added support for libp2p in tests.\n (Link: https://github.com/logos-co/nomos-node/pull/287)\n- Added support for libp2p in nomos node.\n (Link: https://github.com/logos-co/nomos-node/pull/285)\n\n### Private PoS:\n#### Research\n- Worked 
on PPoS design and addressed potential metadata leakage due to staking and rewarding.\n- Focus on potential bribery attacks and privacy reasoning, but not much progress yet.\n- Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\n### Carnot:\n#### Research\n- Addressed two solutions for the bribery attack. Proposals pending.\n- Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\n- Modeled data decimation using a specific set of parameters and derived equations related to it.\n- Proposed solutions to address bribery attacks without compromising the protocol's scalability.\n\n### Data Availability Sampling (VID):\n#### Research\n- Analyzed data decimation in data availability problem.\n (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)\n- DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\n- Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn't fit our architecture.\n\n#### Testing and Node development:\n- Fixes and enhancements made to nomos-node.\n (Link: https://github.com/logos-co/nomos-node/pull/282)\n (Link: https://github.com/logos-co/nomos-node/pull/289)\n (Link: https://github.com/logos-co/nomos-node/pull/293)\n (Link: https://github.com/logos-co/nomos-node/pull/295)\n- Ran simulations with 10K nodes.\n- Updated integration tests in CI to use waku or libp2p network.\n (Link: https://github.com/logos-co/nomos-node/pull/290)\n- Fix for the node throughput during simulations.\n (Link: https://github.com/logos-co/nomos-node/pull/295)","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","content":"\n\n# **Nomos weekly report 14th August**\n---\n\n## **Network Privacy and Mixnet**\n\n### Research\n- Mixnet architecture discussions. 
Potential agreement on architecture not very different from PoC\n- Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n### Development\n- Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]\n- Implementation of mixnode: a core module for implementing a mixnode binary\n- Implementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\n### **Private PoS**\n- No progress this week.\n\n---\n## **Data Availability**\n### Research\n- Continued analysis of node decay in data availability problem\n- Improved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]\n\n### Development\n- Library survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\n- RS \u0026 KZG benchmarking for our use case https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\n- Study documentation on Danksharding and set of questions for Leonardo [https://www.notion.so/2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n---\n## **Testing, CI and Simulation App**\n\n### Development\n- Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]\n- Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]\n- CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]\n- Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]\n- Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]\n- Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["nomos-updates"]},"/roadmap/vac/milestones-overview":{"title":"Vac Milestones Overview","content":"\n[Overview Notion Page](https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632?pvs=4) - Information copied here for now\n\n## Info\n### Structure of milestone names:\n\n`vac:\u003cunit\u003e:\u003ctag\u003e:\u003cfor_project\u003e:\u003ctitle\u003e_\u003ccounter\u003e`\n- `vac` indicates it is a vac milestone\n- `unit` indicates the vac unit `p2p`, `dst`, `tke`, `acz`, `sc`, `zkvm`, `dr`, `rfc`\n- `tag` tags a specific area / project / epic within the respective vac unit, e.g. 
`nimlibp2p`, or `zerokit`\n- `for_project` indicates which Logos project the milestone is mainly for `nomos`, `waku`, `codex`, `nimbus`, `status`; or `vac` (meaning it is internal / helping all projects as a base layer)\n- `title` the title of the milestone\n- `counter` an optional counter; `01` is implicit; marked with a `02` onward indicates extensions of previous milestones\n\n## Vac Unit Roadmaps\n- [Roadmap: P2P](https://www.notion.so/Roadmap-P2P-a409c34cb95b4b81af03f60cbf32f9c1?pvs=21)\n- [Roadmap: Token Economics](https://www.notion.so/Roadmap-Token-Economics-e91f1cb58ebc4b1eb46b074220f535d0?pvs=21)\n- [Roadmap: Distributed Systems Testing (DST))](https://www.notion.so/Roadmap-Distributed-Systems-Testing-DST-4ef0d8694d3e40d6a0cfe706855c43e6?pvs=21)\n- [Roadmap: Applied Cryptography and ZK (ACZ)](https://www.notion.so/Roadmap-Applied-Cryptography-and-ZK-ACZ-00b3ba101fae4a099a2d7af2144ca66c?pvs=21)\n- [Roadmap: Smart Contracts (SC)](https://www.notion.so/Roadmap-Smart-Contracts-SC-e60e0103cad543d5832144d5dd4611a0?pvs=21)\n- [Roadmap: zkVM](https://www.notion.so/Roadmap-zkVM-59cb588bd2404e659633e008101310b5?pvs=21)\n- [Roadmap: Deep Research (DR)](https://www.notion.so/Roadmap-Deep-Research-DR-561a864c890549c3861bf52ab979d7ab?pvs=21)\n- [Roadmap: RFC Process](https://www.notion.so/Roadmap-RFC-Process-f8516d19132b41a0beb29c24510ebc09?pvs=21)","lastmodified":"2023-08-17T20:15:32.290291458Z","tags":["milestones"]},"/roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","content":"- *vc::Deep Research*\n - refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Prepared Paris talks\n - Implemented perf protocol to compare the performances with other libp2ps https://github.com/status-im/nim-libp2p/pull/925\n- *vsu::Tokenomics*\n - Fixing bugs on the SNT staking contract;\n - Definition of the first formal verification tests for the SNT staking contract;\n - Slides for the Paris off-site\n- *vsu::Distributed Systems Testing*\n - Replicated message rate issue (still on it)\n - First mockup of offline data\n - Nomos consensus test working\n- *vip::zkVM*\n - hiring\n - onboarding new researcher\n - presentation on ECC during Logos Research Call (incl. 
preparation)\n - more research on nova, considering additional options\n - Identified 3 research questions to be taken into consideration for the ZKVM and the publication\n - Researched Poseidon implementation for Nova, Nova-Scotia, Circom\n- *vip::RLNP2P*\n - finished rln contract for waku product - https://github.com/waku-org/rln-contract\n - fixed homebrew issue that prevented zerokit from building - https://github.com/vacp2p/zerokit/commit/8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\n - rln-relay: verify proofs based upon bandwidth usage - https://github.com/waku-org/nwaku/commit/3fe4522a7e9e48a3196c10973975d924269d872a\n - RLN contract audit cont' https://hackmd.io/@blockdev/B195lgIth\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","content":"\n**Last week**\n- *vc*\n - Vac day in Paris (13th)\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - Paris offsite Paris (all CCs)\n- *vsu::Tokenomics*\n - Bugs found and solved in the SNT staking contract\n - attend events in Paris\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - QoS on all four infras\n - Continue work on theoretical gossipsub analysis (varying regular graph sizes)\n - Peer extraction using WLS (almost finished)\n - Discv5 testing\n - Wakurtosis CI improvements\n - Provide offline data\n- *vip::zkVM*\n - onboarding new researcher\n - Prepared and presented ZKVM work during VAC offsite\n - Deep research on Nova vs Stark in terms of performance and related open questions\n - researching Sangria\n - Worked on NEscience document (https://www.notion.so/Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\n - zerokit:\n - worked on PR for arc-circom\n- *vip::RLNP2P*\n - offsite Paris\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - working on comprehensive current/related work study on Validator Privacy\n - working on PoC of Tor push in Nimbus\n - working towards comprehensive current/related work study on gossipsub scaling\n- *vsu::P2P*\n - EthCC \u0026 Logos event Paris (all CCs)\n- *vsu::Tokenomics*\n - Attend EthCC and side events in Paris\n - Integrate staking contracts with radCAD model\n - Work on a new approach for Codex collateral problem\n- *vsu::Distributed Systems Testing*\n - Events in Paris\n - Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\n - Restructure the Analysis script and start modelling Status control messages\n - Split Wakurtosis analysis module into separate repository (delayed)\n - Deliver simulation results (incl fixing discv5 error with new Kurtosis version)\n - Second iteration Nomos CI\n- *vip::zkVM*\n - Continue researching on Nova open questions and Sangria\n - Draft the benchmark document (by the end of the week)\n - research hardware for benchmarks\n - research Halo2 cont'\n - zerokit:\n - merge a PR for deployment of arc-circom\n - deal with arc-circom master fail\n- *vip::RLNP2P*\n - offsite paris\n- *blockers*\n - *vip::zkVM:zerokit*: ark-circom deployment to crates io; contact to ark-circom team","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","content":"\nNOTE: This is a first experimental version moving towards the new 
reporting structure:\n\n**Last week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - related work section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - basic torpush encode/decode ( https://github.com/vacp2p/nim-libp2p-experimental/pull/1 )\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - (focus on Tor-push PoC)\n- *vsu::P2P*\n - admin/misc\n - EthCC (all CCs)\n- *vsu::Tokenomics*\n - admin/misc\n - Attended EthCC and side events in Paris\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - Kicked off a new approach for Codex collateral problem\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - Integrated SNT staking contracts with Python\n - milestone (50%, 2023/07/14) SNT litepaper\n - (delayed)\n - milestone(30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - milestone (95%, 2023/07/31) Wakurtosis Waku Report\n - Add timout to injection async call in WLS to avoid further issues (PR #139 https://github.com/vacp2p/wakurtosis/pull/139)\n - Plotting \u0026 analyse 100 msg/s off line Prometehus data\n - milestone (90%, 2023/07/31) Nomos CI testing\n - fixed errors in Nomos consensus simulation\n - milestone (30%, ...) gossipsub model analysis\n - add config options to script, allowing to load configs that can be directly compared to Wakurtosis results\n - added support for small world networks\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - (write ups will be available here: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Solved the open questions on Nova adn completed the document (will update the page)\n - Reviewed Nescience and working on a document\n - Reviewed partly the write up on FHE\n - writeup for Nova and Sangria; research on super nova\n - reading a new paper revisiting Nova (https://eprint.iacr.org/2023/969)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - zkvm\n - Researching Nova to understand the folding technique for ZKVM adaptation\n - zerokit\n - Rostyslav became circom-compat maintainer\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n - completed\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - admin/misc\n - EthCC + offsite\n\n**This week**\n- *vc*\n- *vc::Deep Research*\n - milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n - working on contributions section, based on https://hackmd.io/X1DoBHtYTtuGqYg0qK4zJw\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - working on establishing a connection via nim-libp2p tor-transport\n - setting up goerli test node (cont')\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - continue working on paper\n- *vsu::P2P*\n - milestone (...)\n - Implement ChokeMessage for GossipSub\n - Continue \"limited flood publishing\" (https://github.com/status-im/nim-libp2p/pull/911)\n- *vsu::Tokenomics*\n - admin/misc:\n - (3 CC days off)\n - Catch up with EthCC talks that we couldn't attend (schedule conflicts)\n - milestone (50%, 2023/07/14) SNT litepaper\n - Start building the SNT agent-based simulation\n- *vsu::Distributed Systems Testing*\n - milestone (100%, 2023/07/31) Wakurtosis Waku Report\n - 
finalize simulations\n - finalize report\n - milestone (100%, 2023/07/31) Nomos CI testing\n - finalize milestone\n - milestone (30%, ...) gossipsub model analysis\n - Incorporate Status control messages\n - admin/misc\n - Interviews \u0026 reports for SE and STA positions\n - EthCC (1 CC)\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - Refine the Nescience WIP and FHE documents\n - research HyperNova\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n - zkvm\n - zerokit\n - circom: reach an agreement with other maintainers on master branch situation\n- *vip::RLNP2P*\n - maintenance\n - investigate why docker builds of nwaku are failing [zerokit dependency related]\n - documentation on how to use rln for projects interested (https://discord.com/channels/864066763682218004/1131734908474236968/1131735766163267695)(https://ci.infra.status.im/job/nim-waku/job/manual/45/console)\n - milestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n - revert rln bandwidth reduction based on offsite discussion, move to different validator\n- *blockers*","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","content":"\n- *vc::Deep Research*\n - milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n - proposed solution section\n - milestone (15%, 2023/08/31) Nimbus Tor-push PoC\n - establishing torswitch and testing code\n - milestone (15%, 2023/11/30) paper on Tor push validator privacy\n - addressed feedback on current version of paper\n- *vsu::P2P*\n - nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH's EIP-4844\n - Merged IDontWant (https://github.com/status-im/nim-libp2p/pull/934) \u0026 Limit flood publishing (https://github.com/status-im/nim-libp2p/pull/911) 𝕏\n - This wraps up the \"mandatory\" optimizations for 4844. 
We will continue working on stagger sending and other optimizations\n - nim-libp2p: (70%, 2023/07/31) WebRTC transport\n- *vsu::Tokenomics*\n - admin/misc\n - 2 CCs off for the week\n - milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n - milestone (50%, 2023/08/30) SNT staking smart contract\n - milestone (50%, 2023/07/14) SNT litepaper\n - milestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n- *vsu::Distributed Systems Testing*\n - admin/misc\n - Analysis module extracted from wakurtosis repo (https://github.com/vacp2p/wakurtosis/pull/142, https://github.com/vacp2p/DST-Analysis)\n - hiring\n - milestone (99%, 2023/07/31) Wakurtosis Waku Report\n - Re-run simulations\n - merge Discv5 PR (https://github.com/vacp2p/wakurtosis/pull/129).\n - finalize Wakurtosis Tech Report v2\n - milestone (100%, 2023/07/31) Nomos CI testing\n - delivered first version of Nomos CI integration (https://github.com/vacp2p/wakurtosis/pull/141)\n - milestone (30%, 2023/08/31 gossipsub model: Status control messages\n - Waku model is updated to model topics/content-topics\n- *vip::zkVM*\n - milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria...)\n - achievment :: nova questions answered (see document in Project: https://www.notion.so/zkVM-cd358fe429b14fa2ab38ca42835a8451)\n - Nescience WIP done (to be delivered next week, priority)\n - FHE review (lower prio)\n - milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n - Working on discoveries about other benchmarks done on plonky2, starky, and halo2\n - zkvm\n - zerokit\n - fixed ark-circom master \n - achievment :: publish ark-circom https://crates.io/crates/ark-circom\n - achievment :: publish zerokit_utils https://crates.io/crates/zerokit_utils\n - achievment :: publish rln https://crates.io/crates/rln (𝕏 jointly with RLNP2P)\n- *vip::RLNP2P*\n - milestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n - Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - https://github.com/vacp2p/rln-contract and http://github.com/waku-org/waku-rln-contract\n - Deployed to sepolia\n - Fixed rln enabled docker image building in nwaku - https://github.com/waku-org/nwaku/pull/1853\n - zerokit:\n - achievement :: zerokit v0.3.0 release done - https://github.com/vacp2p/zerokit/releases/tag/v0.3.0 (𝕏 jointly with zkVM)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","content":"\n\nMore info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week):\nhttps://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n**Vac week 32** August 7th\n- *vsu::P2P*\n - `vac:p2p:nim-libp2p:vac:maintenance`\n - Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n - `vac:p2p:nim-chronos:vac:maintenance`\n - Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n - Investigate flaky test using REUSE_PORT\n- *vsu::Tokenomics*\n - (...)\n- *vsu::Distributed Systems Testing*\n - `vac:dst:wakurtosis:waku:techreport`\n - delivered: Wakurtosis Tech Report v2 (https://docs.google.com/document/d/1U3bzlbk_Z3ZxN9tPAnORfYdPRWyskMuShXbdxCj4xOM/edit?usp=sharing)\n - `vac:dst:wakurtosis:vac:rlog`\n - working on research log post on Waku Wakurtosis simulations\n - 
`vac:dst:gsub-model:status:control-messages`\n - delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n - `vac:dst:gsub-model:vac:refactoring`\n - Refactoring and bug fixes\n - introduced and tested 2 new analytical models\n - `vac:dst:wakurtosis:waku:topology-analysis`\n - delivered: extracted into separate module, independent of wls message\n - `vac:dst:wakurtosis:nomos:ci-integration_02`\n - planning\n - `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n - planning; check usage of new codex simulator tool (https://github.com/codex-storage/cs-codex-dist-tests)\n- *vip::zkVM*\n - `vac:zkvm::vac:research-existing-proof-systems`\n - 90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me\n - 50% FHE review - needs to be refined and summarized\n - finished SuperNova writeup ( https://www.notion.so/SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\n - researched starky\n - 80% Halo2 notes ( https://www.notion.so/halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n - `vac:zkvm::vac:proof-system-benchmarks`\n - More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\n - Viewed some circuits on Nova and Poseidon\n - Read through Halo2 code (and Poseidon code) from Axiom\n- *vip::RLNP2P*\n - `vac:acz:rlnp2p:waku:production-readiness`\n - Waku rln contract registry - https://github.com/waku-org/waku-rln-contract/pull/3\n - mark duplicated messages as spam - https://github.com/waku-org/nwaku/pull/1867\n - use waku-org/waku-rln-contract as a submodule in nwaku - https://github.com/waku-org/nwaku/pull/1884\n - `vac:acz:zerokit:vac:maintenance`\n - Fixed atomic_operation ffi edge case error - https://github.com/vacp2p/zerokit/pull/195\n - docs cleanup - https://github.com/vacp2p/zerokit/pull/196\n - fixed version tags - https://github.com/vacp2p/zerokit/pull/194\n - released zerokit v0.3.1 - https://github.com/vacp2p/zerokit/pull/198\n - marked all functions as virtual in rln-contract for inheritors - https://github.com/vacp2p/rln-contract/commit/a092b934a6293203abbd4b9e3412db23ff59877e\n - make nwaku use zerokit v0.3.1 - https://github.com/waku-org/nwaku/pull/1886\n - rlnp2p implementers draft - https://hackmd.io/@rymnc/rln-impl-w-waku\n - `vac:acz:zerokit:vac:zerokit-v0.4`\n - zerokit v0.4.0 release planning - https://github.com/vacp2p/zerokit/issues/197\n- *vc::Deep Research*\n - `vac:dr:valpriv:vac:tor-push-poc`\n - redesigned the torpush integration in nimbus https://github.com/vacp2p/nimbus-eth2-experimental/pull/2\n - `vac:dr:valpriv:vac:tor-push-relwork`\n - Addressed further comments in paper, improved intro, added source level variation approach\n - `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n - cont' work on the document","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/vac/updates/2023-08-14":{"title":"2023-08-17 Vac weekly","content":"\n\nVac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\n\n# Vac week 33 August 14th\n\n---\n## *vsu::P2P*\n### `vac:p2p:nim-libp2p:vac:maintenance`\n- Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920\n- delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925\n- delivered: Test-plans for the perf protocol https://github.com/lchenut/test-plans/tree/perf-nim\n- Bandwidth estimate as a parameter (waiting for final review) 
https://github.com/status-im/nim-libp2p/pull/941\n### `vac:p2p:nim-chronos:vac:maintenance`\n- delivered: Remove hard-coded ports from test https://github.com/status-im/nim-chronos/pull/429\n- delivered: fixed flaky test using REUSE_PORT https://github.com/status-im/nim-chronos/pull/438\n\n---\n## *vsu::Tokenomics*\n - admin/misc:\n - (5 CC days off)\n### `vac:tke::codex:economic-analysis`\n- Filecoin economic structure and Codex token requirements\n### `vac:tke::status:SNT-staking`\n- tests with the contracts\n### `vac:tke::nomos:economic-analysis`\n- resume discussions with Nomos team\n\n---\n## *vsu::Distributed Systems Testing (DST)*\n### `vac:dst:wakurtosis:waku:techreport`\n- 1st Draft of Wakurtosis Research Blog (https://github.com/vacp2p/vac.dev/pull/123)\n- Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n### `vac:dst:shadow:vac:basic-shadow-simulation`\n- Basic Shadow Simulation of a gossipsub node (Setup, 5nodes)\n### `vac:dst:10ksim:vac:10ksim-bandwidth-test`\n- Try and plan on how to refactor/generalize testing tool from Codex.\n- Learn more about Kubernetes\n### `vac:dst:wakurtosis:nomos:ci-integration_02`\n- Enable subnetworks\n- Plan how to use wakurtosis with fixed version\n### `vac:dst:eng:vac:bundle-simulation-data`\n- Run requested simulations\n\n---\n## *vsu:Smart Contracts (SC)*\n### `vac:sc::vac:secureum-upskilling`\n - Learned about \n - cold vs warm storage reads and their gas implications\n - UTXO vs account models\n - `DELEGATECALL` vs `CALLCODE` opcodes, `CREATE` vs `CREATE2` opcodes; Yul Assembly\n - Unstructured proxies https://eips.ethereum.org/EIPS/eip-1967\n - C3 Linearization https://forum.openzeppelin.com/t/solidity-diamond-inheritance/2694) (Diamond inheritance and resolution)\n - Uniswap deep dive\n - Finished Secureum slot 2 and 3\n### `vac:sc::vac:maintainance/misc`\n - Introduced Vac's own `foundry-template` for smart contract projects\n - Goal is to have the same project structure across projects\n - Github repository: https://github.com/vacp2p/foundry-template\n\n---\n## *vsu:Applied Cryptogarphy \u0026 ZK (ACZ)*\n - `vac:acz:zerokit:vac:maintenance`\n - PR reviews https://github.com/vacp2p/zerokit/pull/200, https://github.com/vacp2p/zerokit/pull/201\n\n---\n## *vip::zkVM*\n### `vac:zkvm::vac:research-existing-proof-systems`\n- delivered Nescience WIP doc\n- delivered FHE review\n- delivered Nova vs Sangria done - Some discussions during the meeting\n- started HyperNova writeup\n- started writing a trimmed version of FHE writeup\n- researched CCS (for HyperNova)\n- Research Protogalaxy https://eprint.iacr.org/2023/1106 and Protostar https://eprint.iacr.org/2023/620.\n### `vac:zkvm::vac:proof-system-benchmarks`\n- More work on benchmarks is ongoing\n- Putting down a document that explains the differences\n\n---\n## *vc::Deep Research*\n### `vac:dr:valpriv:vac:tor-push-poc`\n- revised the code for PR\n### `vac:dr:valpriv:vac:tor-push-relwork`\n- added section for mixnet, non-Tor/non-onion routing-based anonymity network\n### `vac:dr:gsub-scaling:vac:gossipsub-simulation`\n- Used shadow simulator to run first GossibSub simulation\n### `vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report`\n- Finalized 1st draft of the GossipSub scaling article","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["vac-updates"]},"/roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","content":"\n```mermaid\n%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': 
'#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n```\n\n## Completion Deliverable\nTBD\n\n## Epics\n- [Github Issue Tracker](https://github.com/waku-org/pm/issues/12)\n","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","content":"\n- 90% - [Waku Network support for 10k users](roadmap/waku/milestone-waku-10-users.md)\n- 80% - Waku Network support for 1MM users\n- 65% - Restricted-run (light node) protocols are production ready\n- 60% - Peer management strategy for relay and light nodes are defined and implemented\n- 10% - Quality processes are implemented for `nwaku` and `go-waku`\n- 80% - Define and track network and community metrics for continuous monitoring improvement\n- 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n- 15% - Dogfooding of RLN by platforms has started\n- 06% - First protocol to incentivize operators has been defined","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":[]},"/roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","content":"\nDisclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.\n\n---\n\n## Docs\n\n### **Milestone**: Foundation for Waku docs (done)\n\n#### _achieved_:\n- overall layout\n- concept docs\n- community/showcase pages\n\n### **Milestone**: Foundation for node operator docs (done)\n#### _achieved_:\n- nodes overview page\n- guide for running nwaku (binaries, source, docker)\n- peer discovery config guide\n- reference docs for config methods and options\n\n### **Milestone**: Foundation for js-waku docs\n#### _achieved_:\n- js-waku overview + installation guide\n- lightpush + filter guide\n- store guide\n- @waku/create-app guide\n\n#### _next:_\n- improve @waku/react guide\n\n#### _blocker:_\n- polyfills issue with [js-waku](https://github.com/waku-org/js-waku/issues/1415)\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n### **Milestone**: Running nwaku in the cloud\n### **Milestone**: Add Waku guide to learnweb3.io\n### **Milestone**: Encryption docs for js-waku\n### **Milestone**: Advanced node operator doc (postgres, WSS, monitoring, common config)\n### **Milestone**: Foundation for go-waku docs\n### **Milestone**: Foundation for rust-waku-bindings docs\n### **Milestone**: Waku architecture docs\n### **Milestone**: Waku detailed roadmap and milestones\n### **Milestone**: Explain RLN\n\n---\n\n## Eco Dev (WIP)\n\n### **Milestone**: EthCC Logos side event organisation (done)\n### **Milestone**: Community Growth\n#### _achieved_: \n- Wrote several bounties, improved template; setup onboarding flow in Discord.\n\n#### _next_: \n- Review template, publish on GitHub\n\n### **Milestone**: Business Development (continuous)\n#### _achieved_: \n- Discussions with various leads in EthCC\n#### _next_: \n- Booking calls with said leads\n\n### **Milestone**: Setting Up Content Strategy for Waku\n\n#### _achieved_: \n- Discussions with Comms Hubs re Waku Blog \n- expressed needs and intent around future blog post and needed amplification\n- discuss strategies to onboard/involve non-dev and potential CTAs.\n\n### **Milestone**: Web3Conf (dates)\n### **Milestone**: DeCompute 
conf\n\n---\n\n## Research (WIP)\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- rendezvous hashing \n- weighting function \n- updated LIGHTPUSH to handle autosharding\n\n#### _next:_\n- update FILTER \u0026 STORE for autosharding\n\n---\n\n## nwaku (WIP)\n\n### **Milestone**: Postgres integration.\n#### _achieved:_\n- nwaku can store messages in a Postgres database\n- we started to perform stress tests\n\n#### _next:_\n- Analyse why some messages are not stored during stress tests happened in both sqlite and Postgres, so maybe the issue isn't directly related to _store_.\n\n### **Milestone**: nwaku as a library (C-bindings)\n#### _achieved:_\n- The integration is in progress through N-API framework\n\n#### _next:_\n- Make the nodejs to properly work by running the _nwaku_ node in a separate thread.\n\n---\n\n## go-waku (WIP)\n\n\n---\n\n## js-waku (WIP)\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved: \n- spec test for connection manager\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n### **Milestone**: Static Sharding\n#### _next_: \n- start implementation of static sharding in js-waku\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- js-lip2p upgrade to remove usage of polyfills (draft PR)\n\n#### _next_: \n- merge and release js-libp2p upgrade\n\n### **Milestone**: Waku Relay in the Browser\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","content":"\n## Docs\n\n### **Milestone**: Docs general improvement/incorporating feedback (continuous)\n#### _next:_ \n- rewrite docs in British English\n### **Milestone**: Running nwaku in the cloud\n#### _next:_ \n- publish guides for Digital Ocean, Oracle, Fly.io\n\n---\n## Eco Dev (WIP)\n\n---\n## Research\n\n### **Milestone**: Detailed network requirements and task breakdown\n#### _achieved:_ \n- gathering rough network requirements\n#### _next:_ \n- detailed task breakdown per milestone and effort allocation\n\n### **Milestone**: [Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)\n#### _achieved:_ \n- update FILTER \u0026 STORE for autosharding\n#### _next:_ \n- RFC review \u0026 updates \n- code review \u0026 updates\n\n---\n## nwaku\n\n### **Milestone**: nwaku release process automation\n#### _next_:\n- setup automation to test/simulate current `master` to prevent/limit regressions\n- expand target architectures and platforms for release artifacts (e.g. arm64, Win...)\n### **Milestone**: HTTP Rest API for protocols\n#### _next:_ \n- Filter API added \n- tests to complete.\n\n---\n## go-waku\n\n### **Milestone**: Increase Maintability Score. Refer to [CodeClimate report](https://codeclimate.com/github/waku-org/go-waku)\n#### _next:_ \n- define scope on which issues reported by CodeClimate should be fixed. 
Initially it should be limited to reduce code complexity and duplication.\n\n### **Milestone**: RLN updates, refer [issue](https://github.com/waku-org/go-waku/issues/608).\n_achieved_:\n- expose `set_tree`, `key_gen`, `seeded_key_gen`, `extended_seeded_keygen`, `recover_id_secret`, `set_leaf`, `init_tree_with_leaves`, `set_metadata`, `get_metadata` and `get_leaf` \n- created an example on how to use RLN with go-waku\n- service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n#### _next_: \n- merkle tree batch operations (in progress) \n- usage of persisted merkle tree db\n\n### **Milestone**: Improve test coverage for functional tests of all protocols. Refer to [CodeClimate report]\n#### _next_: \n- define scope on which code sections should be covered by tests\n\n### **Milestone**: C-Bindings\n#### _next_: \n- update API to match nwaku's (by using callbacks instead of strings that require freeing)\n\n---\n## js-waku\n\n### **Milestone**: [Peer management](https://github.com/waku-org/js-waku/issues/914)\n#### _achieved_: \n- extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n#### _next_: \n- fallback improvement for peer connect rejection\n\n### **Milestone**: [Peer Exchange](https://github.com/waku-org/js-waku/issues/1429)\n#### _next_: \n- robusting support around peer-exchange for examples\n### **Milestone**: Static Sharding\n#### _achieved_: \n- WIP implementation of static sharding in js-waku\n#### _next_: \n- investigation around gauging connection loss;\n\n### **Milestone**: Developer Experience\n#### _achieved_: \n- improve \u0026 update @waku/react \n- merge and release js-libp2p upgrade\n\n#### _next:_\n- update examples to latest release + make sure no old/unused packages there\n\n### **Milestone**: Maintenance\n#### _achieved_: \n- update to libp2p@0.46.0\n#### _next_:\n- suit of optional tests in pipeline\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","content":"\nMilestones for current works are created and used. 
Next steps are:\n1) Refine scope of [research work](https://github.com/waku-org/research/issues/3) for rest of the year and create matching milestones for research and waku clients\n2) Review work not coming from research and setting dates\nNote that format matches the Notion page but can be changed easily as it's scripted\n\n\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n- _blocker_: \n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Docker compose with `nwaku` + `postgres` + `prometheus` + `grafana` + `postgres_exporter` https://github.com/alrevuelta/nwaku-compose/pull/3\n- _next_: Carry on with stress testing\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: feedback/update cycles for FILTER \u0026 LIGHTPUSH\n- _next_: New fleet, updating ENR from live subscriptions and merging\n- _blocker_: Architecturally it seams difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\n**[Move Waku v1 and Waku-Bridge to new repos](https://github.com/waku-org/nwaku/issues/1767)** {E:2023-qa}\n\n- _achieved_: Removed v1 and wakubridge code from nwaku repo\n- _next_: Remove references to `v2` from nwaku directory structure and documents\n\n**[nwaku c-bindings](https://github.com/waku-org/nwaku/issues/1332)** {E:2023-many-platforms}\n\n- _achieved_:\n - Moved the Waku execution into a secondary working thread. Essential for NodeJs.\n - Adapted the NodeJs example to use the `libwaku` with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing. \n- _next_: start applying the thread-safety recommendations https://github.com/waku-org/nwaku/issues/1878\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. 
Get v2 filter service supported from node.\n\n---\n## js-waku\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\n- _next_: saving successfully connected PX peers to local storage for easier connections on reload\n\n**[Waku Relay scalability in the Browser](https://github.com/waku-org/js-waku/issues/905)** {NO EPIC}\n\n- _achieved_: draft of direct browser-browser RTC example https://github.com/waku-org/js-waku-examples/pull/260 \n- _next_: improve the example (connection re-usage), work on contentTopic based RTC example\n\n---\n## go-waku\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: updated c-bindings to use callbacks\n- _next_: refactor v1 encoding functions and update RFC\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Enabled -race flag and ran all unit tests to identify data races.\n- _next_: Fix issues reported by the data race detector tool\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\n- _next_: resume onchain sync from persisted tree db\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: Basic peer management to ensure standard in/out ratio for relay peers.\n- _next_: add service slots to peer manager\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: production of swags and marketing collaterals for web3conf completed\n- _next_: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)** {E:2023-eco-growth}\n\n- _next_: create guide on `@waku/react` and debugging js-waku web apps\n\n**[Docs general improvement/incorporating feedback (2023)](https://github.com/waku-org/docs.waku.org/issues/102)** {E:2023-eco-growth}\n\n- _achieved_: rewrote the docs in UK English\n- _next_: update docs terms, announce js-waku docs\n\n**[Foundation of js-waku docs](https://github.com/waku-org/docs.waku.org/issues/101)** {E:2023-eco-growth}\n\n_achieved_: added guide on js-waku bootstrapping\n\n---\n## Research\n\n**[1.1 Network requirements and task breakdown](https://github.com/waku-org/research/issues/6)** {E:2023-1mil-users}\n\n- _achieved_: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\n- _next_: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]},"/roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","content":"\n\n# 2023-08-14 Waku weekly\n---\n## Epics\n\n**[Waku Network Can Support 10K Users](https://github.com/waku-org/pm/issues/12)** {E:2023-10k-users}\n\nAll software has been delivered. 
Pending items are:\n- Running stress testing on PostgreSQL to confirm performance gain https://github.com/waku-org/nwaku/issues/1894\n- Setting up a staging fleet for Status to try static sharding\n- Running simulations for Store protocol: [Will confirm with Vac/DST on dates/commitment](https://github.com/vacp2p/research/issues/191#issuecomment-1672542165) and probably move this to [1mil epic](https://github.com/waku-org/pm/issues/31)\n\n---\n## Eco Dev\n\n**[Aug 2023](https://github.com/waku-org/internal-waku-outreach/issues/103)** {E:2023-eco-growth}\n\n- _achieved_: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\n- _next_: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\n- _blocker_: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n---\n## Docs\n\n**[Advanced docs for js-waku](https://github.com/waku-org/docs.waku.org/issues/104)**\n\n- _next_: document notes/recommendations for NodeJS, begin docs on `js-waku` encryption\n\n---\n## nwaku\n\n**[Release Process Improvements](https://github.com/waku-org/nwaku/issues/1889)** {E:2023-qa}\n\n- _achieved_: minor CI fixes and improvements\n- _next_: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\n**[PostgreSQL](https://github.com/waku-org/nwaku/issues/1888)** {E:2023-10k-users}\n\n- _achieved_: Learned that the insertion rate is constrained by the `relay` protocol. i.e. the maximum insert rate is limited by `relay` so I couldn't push the \"insert\" operation to a limit from a _Postgres_ point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the _relay_ protocol doesn't process all of them.\n- _next_: Carry on with stress testing. Analyze the performance differences between _Postgres_ and _SQLite_ regarding the _read_ operations.\n\n**[Autosharding v1](https://github.com/waku-org/nwaku/issues/1846)** {E:2023-1mil-users}\n\n- _achieved_: many feedback/update cycles for FILTER, LIGHTPUSH, STORE \u0026 RFC\n- _next_: updating ENR for live subscriptions\n\n**[HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs](https://github.com/waku-org/nwaku/issues/1076)** {E:2023-many-platforms}\n\n- _achieved_: Legacy Filter - v1 - interface Rest Api support added.\n- _next_: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. 
Add more tests.\n\n---\n## js-waku\n\n**[Maintenance](https://github.com/waku-org/js-waku/issues/1455)** {E:2023-qa}\n\n- achieved: upgrade libp2p \u0026 chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict \n\n**[Developer Experience (2023)](https://github.com/waku-org/js-waku/issues/1453)** {E:2023-eco-growth}\n\n- _achieved_: non blocking pipeline step (https://github.com/waku-org/js-waku/issues/1411)\n\n**[Peer Exchange is supported and used by default](https://github.com/waku-org/js-waku/issues/1429)** {E:2023-light-protocols}\n\n- _achieved_: close the \"fallback mechanism for peer rejections\", refactor peer-exchange compliance test\n- _next_: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n---\n## go-waku\n\n**[Maintenance](https://github.com/waku-org/go-waku/issues/634)** {E:2023-qa}\n\n- _achieved_: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\n**[C-Bindings Improvement: Callbacks and Duplications](https://github.com/waku-org/go-waku/issues/629)** {E:2023-many-platforms}\n\n- _achieved_: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\n**[Improve Test Coverage](https://github.com/waku-org/go-waku/issues/620)** {E:2023-qa}\n\n- _achieved_: Fixed issues reported by the data race detector tool.\n- _next_: identify areas where test coverage needs improvement.\n\n**[RLN: Post-Testnet3 Improvements](https://github.com/waku-org/go-waku/issues/605)** {E:2023-rln}\n\n- _achieved_: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\n- _next_: interop with nwaku\n\n**[Introduce Peer Management](https://github.com/waku-org/go-waku/issues/594)** {E:2023-peer-mgmt}\n\n- _achieved_: add service slots to peer manager.\n- _next_: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n---","lastmodified":"2023-08-17T19:42:53.952430596Z","tags":["waku-updates"]}} \ No newline at end of file diff --git a/indices/linkIndex.4aa205a552c456d43d65dac37acc8f8e.min.json b/indices/linkIndex.4aa205a552c456d43d65dac37acc8f8e.min.json deleted file mode 100644 index 82e222e8d..000000000 --- a/indices/linkIndex.4aa205a552c456d43d65dac37acc8f8e.min.json +++ /dev/null @@ -1 +0,0 @@ -{"index":{"links":{"/":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/waku-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/codex-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/vac-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/innovation_lab/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/ilab-updates","text":"weekly 
updates"},{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/acid-updates","text":"weekly updates"}],"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":[{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"}],"/private/notes/config":[{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"}],"/private/notes/editing":[{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with Quartz"},{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/private/notes/hosting":[{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"},{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/private/notes/obsidian":[{"source":"/private/notes/obsidian","target":"/setup","text":"setup"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"}],"/private/notes/preview-changes":[{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"}],"/private/notes/search":[{"source":"/private/notes/search","target":"/hosting","text":"hosting"}],"/private/notes/setup":[{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/private/notes/troubleshooting":[{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"}],"/private/roadmap/consensus/candidates/carnot/overview":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"}],"/private/roadmap/consensus/development/prototypes":[{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus 
Candidates"}],"/private/roadmap/consensus/overview":[{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"},{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"},{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"},{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"}],"/private/roadmap/consensus/theory/overview":[{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"}],"/private/roadmap/consensus/theory/snow-family":[{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"}],"/private/roadmap/networking/overview":[{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot Overlay"},{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"}],"/private/roadmap/networking/status-network-agents":[{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status Waku scalability study"}],"/private/roadmap/networking/status-waku-kurtosis":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"}],"/private/roadmap/virtual-machines/overview":[{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"}],"/roadmap/waku/milestones-overview":[{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k users"}]},"backlinks":{"/":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"},{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"}],"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":[{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"}],"/FAQ":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"}],"/carnot-waku-specification":[{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot Overlay"}],"/claro":[{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"}],"/config":[{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization 
guide"}],"/custom-Domain":[{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"}],"/distributed-systems-researcher":[{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"}],"/editing":[{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"}],"/hosting":[{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/search","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"}],"/ignore-notes":[{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"}],"/obsidian":[{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with Quartz"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"}],"/preview-changes":[{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"}],"/private/requirements/overview":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"}],"/private/roadmap/consensus/candidates/carnot/overview":[{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"}],"/private/roadmap/networking/overview":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"}],"/prototypes":[{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"}],"/roadmap/acid/milestones-overview":[{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"}],"/roadmap/codex/milestones-overview":[{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"}],"/roadmap/consensus/index":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"}],"/roadmap/development/prototypes":[{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"}],"/roadmap/innovation_lab/milestones-overview":[{"source":"/","target":"/roadmap/innovation_lab/milestones-overview","text":"Milestones"}],"/roadmap/nomos/milestones-overview":[{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"}],"/roadmap/vac/milestones-overview":[{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"}],"/roadmap/waku/milestone-waku-10-users":[{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k 
users"}],"/roadmap/waku/milestones-overview":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"}],"/setup":[{"source":"/private/notes/obsidian","target":"/setup","text":"setup"}],"/snow-family":[{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"}],"/status-network-agents":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"}],"/status-waku-kurtosis":[{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status Waku scalability study"}],"/tags/acid-updates":[{"source":"/","target":"/tags/acid-updates","text":"weekly updates"}],"/tags/candidates":[{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus Candidates"}],"/tags/codex-updates":[{"source":"/","target":"/tags/codex-updates","text":"weekly updates"}],"/tags/ilab-updates":[{"source":"/","target":"/tags/ilab-updates","text":"weekly updates"}],"/tags/nomos-updates":[{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"}],"/tags/vac-updates":[{"source":"/","target":"/tags/vac-updates","text":"weekly updates"}],"/tags/waku-updates":[{"source":"/","target":"/tags/waku-updates","text":"weekly updates"}],"/troubleshooting":[{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/zero-knowledge-research-engineer":[{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"}]}},"links":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/waku-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/codex-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/vac-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/innovation_lab/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/ilab-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/acid-updates","text":"weekly updates"},{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"},{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with 
Quartz"},{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"},{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/obsidian","target":"/setup","text":"setup"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/search","target":"/hosting","text":"hosting"},{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"},{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus Candidates"},{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"},{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"},{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"},{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"},{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"},{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"},{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot 
Overlay"},{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"},{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status Waku scalability study"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"},{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"},{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k users"}]} \ No newline at end of file diff --git a/indices/linkIndex.6cf348040d2d17fddcc3c5b88898e318.min.json b/indices/linkIndex.6cf348040d2d17fddcc3c5b88898e318.min.json deleted file mode 100644 index 0a9475fa2..000000000 --- a/indices/linkIndex.6cf348040d2d17fddcc3c5b88898e318.min.json +++ /dev/null @@ -1 +0,0 @@ -{"index":{"links":{"/":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/waku-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/codex-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/vac-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/innovation_lab/milestones_overview","text":"Milestones"},{"source":"/","target":"/tags/ilab-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/acid-updates","text":"weekly updates"}],"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":[{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"}],"/private/notes/config":[{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"}],"/private/notes/editing":[{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with Quartz"},{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/private/notes/hosting":[{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"},{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting 
guide"}],"/private/notes/obsidian":[{"source":"/private/notes/obsidian","target":"/setup","text":"setup"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"}],"/private/notes/preview-changes":[{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"}],"/private/notes/search":[{"source":"/private/notes/search","target":"/hosting","text":"hosting"}],"/private/notes/setup":[{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/private/notes/troubleshooting":[{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"}],"/private/roadmap/consensus/candidates/carnot/overview":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"}],"/private/roadmap/consensus/development/prototypes":[{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus Candidates"}],"/private/roadmap/consensus/overview":[{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"},{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"},{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"},{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"}],"/private/roadmap/consensus/theory/overview":[{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"}],"/private/roadmap/consensus/theory/snow-family":[{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"}],"/private/roadmap/networking/overview":[{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot Overlay"},{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"}],"/private/roadmap/networking/status-network-agents":[{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status 
Waku scalability study"}],"/private/roadmap/networking/status-waku-kurtosis":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"}],"/private/roadmap/virtual-machines/overview":[{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"}],"/roadmap/waku/milestones-overview":[{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k users"}]},"backlinks":{"/":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"},{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"}],"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95":[{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"}],"/FAQ":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"}],"/carnot-waku-specification":[{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot Overlay"}],"/claro":[{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"}],"/config":[{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization guide"}],"/custom-Domain":[{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"}],"/distributed-systems-researcher":[{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"}],"/editing":[{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"}],"/hosting":[{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/search","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"}],"/ignore-notes":[{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"}],"/obsidian":[{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with 
Quartz"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"}],"/preview-changes":[{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"}],"/private/requirements/overview":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"}],"/private/roadmap/consensus/candidates/carnot/overview":[{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"}],"/private/roadmap/networking/overview":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"}],"/prototypes":[{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"}],"/roadmap/acid/milestones-overview":[{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"}],"/roadmap/codex/milestones-overview":[{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"}],"/roadmap/consensus/index":[{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"}],"/roadmap/development/prototypes":[{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"}],"/roadmap/innovation_lab/milestones_overview":[{"source":"/","target":"/roadmap/innovation_lab/milestones_overview","text":"Milestones"}],"/roadmap/nomos/milestones-overview":[{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"}],"/roadmap/vac/milestones-overview":[{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"}],"/roadmap/waku/milestone-waku-10-users":[{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k users"}],"/roadmap/waku/milestones-overview":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"}],"/setup":[{"source":"/private/notes/obsidian","target":"/setup","text":"setup"}],"/snow-family":[{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"}],"/status-network-agents":[{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"}],"/status-waku-kurtosis":[{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status Waku scalability study"}],"/tags/acid-updates":[{"source":"/","target":"/tags/acid-updates","text":"weekly updates"}],"/tags/candidates":[{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus Candidates"}],"/tags/codex-updates":[{"source":"/","target":"/tags/codex-updates","text":"weekly updates"}],"/tags/ilab-updates":[{"source":"/","target":"/tags/ilab-updates","text":"weekly updates"}],"/tags/nomos-updates":[{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"}],"/tags/vac-updates":[{"source":"/","target":"/tags/vac-updates","text":"weekly 
updates"}],"/tags/waku-updates":[{"source":"/","target":"/tags/waku-updates","text":"weekly updates"}],"/troubleshooting":[{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"}],"/zero-knowledge-research-engineer":[{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"}]}},"links":[{"source":"/","target":"/roadmap/waku/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/waku-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/codex/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/codex-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/nomos/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/nomos-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/vac/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/vac-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/innovation_lab/milestones_overview","text":"Milestones"},{"source":"/","target":"/tags/ilab-updates","text":"weekly updates"},{"source":"/","target":"/roadmap/acid/milestones-overview","text":"Milestones"},{"source":"/","target":"/tags/acid-updates","text":"weekly updates"},{"source":"/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","target":"/config","text":"config"},{"source":"/private/notes/config","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/config","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support (测试)"},{"source":"/private/notes/editing","target":"/obsidian","text":"How to setup your Obsidian Vault to work with Quartz"},{"source":"/private/notes/editing","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/editing","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/editing","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/hosting","target":"/custom-Domain","text":"Learn how to set it up with Quartz"},{"source":"/private/notes/hosting","target":"/ignore-notes","text":"Excluding pages from being published"},{"source":"/private/notes/hosting","target":"/config","text":"Customizing Quartz"},{"source":"/private/notes/hosting","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/obsidian","target":"/setup","text":"setup"},{"source":"/private/notes/obsidian","target":"/preview-changes","text":"Preview Quartz Changes"},{"source":"/private/notes/preview-changes","target":"/hosting","text":"Hosting Quartz online!"},{"source":"/private/notes/search","target":"/hosting","text":"hosting"},{"source":"/private/notes/setup","target":"/editing","text":"Editing Notes in Quartz"},{"source":"/private/notes/setup","target":"/troubleshooting","text":"FAQ and Troubleshooting guide"},{"source":"/private/notes/troubleshooting","target":"/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95","text":"CJK + Latex Support 
(测试)"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/ignore-notes","text":"excluding pages from being published"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"hosting"},{"source":"/private/notes/troubleshooting","target":"/obsidian","text":"Obsidian"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"the 'how to edit' guide"},{"source":"/private/notes/troubleshooting","target":"/hosting","text":"the hosting guide"},{"source":"/private/notes/troubleshooting","target":"/config","text":"customization guide"},{"source":"/private/notes/troubleshooting","target":"/editing","text":"local editing"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/roadmap/consensus/index","text":"consensus"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/FAQ","text":"FAQ"},{"source":"/private/roadmap/consensus/candidates/carnot/overview","target":"/","text":"Recovery Failure Probabilities"},{"source":"/private/roadmap/consensus/development/prototypes","target":"/tags/candidates","text":"Consensus Candidates"},{"source":"/private/roadmap/consensus/overview","target":"/private/roadmap/consensus/candidates/carnot/overview","text":"Carnot"},{"source":"/private/roadmap/consensus/overview","target":"/claro","text":"Claro"},{"source":"/private/roadmap/consensus/overview","target":"/snow-family","text":"snow-family"},{"source":"/private/roadmap/consensus/overview","target":"/prototypes","text":"prototypes"},{"source":"/private/roadmap/consensus/overview","target":"/distributed-systems-researcher","text":"distributed-systems-researcher"},{"source":"/private/roadmap/consensus/theory/overview","target":"/snow-family","text":"Snow Family Analysis"},{"source":"/private/roadmap/consensus/theory/snow-family","target":"/","text":"whitepapers"},{"source":"/private/roadmap/networking/overview","target":"/status-waku-kurtosis","text":"Status' use of Waku study w/ Kurtosis"},{"source":"/private/roadmap/networking/overview","target":"/carnot-waku-specification","text":"Using Waku for Carnot Overlay"},{"source":"/private/roadmap/networking/overview","target":"/roadmap/development/prototypes","text":"Tiny Node"},{"source":"/private/roadmap/networking/status-network-agents","target":"/status-waku-kurtosis","text":"Status Waku scalability study"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/roadmap/networking/overview","text":"Networking Overview"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/private/requirements/overview","text":"Technical Requirements"},{"source":"/private/roadmap/networking/status-waku-kurtosis","target":"/status-network-agents","text":"Status Network Agent Breakdown"},{"source":"/private/roadmap/virtual-machines/overview","target":"/zero-knowledge-research-engineer","text":"ZK Research Engineer"},{"source":"/roadmap/waku/milestones-overview","target":"/roadmap/waku/milestone-waku-10-users","text":"Waku Network support for 10k users"}]} \ No newline at end of file diff --git a/js/callouts.7723cac461d613d118ee8bb8216b9838.min.js b/js/callouts.7723cac461d613d118ee8bb8216b9838.min.js deleted file mode 100644 index bf38e787e..000000000 --- a/js/callouts.7723cac461d613d118ee8bb8216b9838.min.js +++ /dev/null @@ -1 +0,0 @@ -const addCollapsibleCallouts=()=>{const 
e=document.querySelectorAll("blockquote.callout-collapsible");e.forEach(e=>e.addEventListener("click",e=>{e.currentTarget.classList.toggle("callout-collapsed")}))} \ No newline at end of file diff --git a/js/clipboard.c20857734e53a3fb733b7443879efa61.min.js b/js/clipboard.c20857734e53a3fb733b7443879efa61.min.js deleted file mode 100644 index 58cb84b05..000000000 --- a/js/clipboard.c20857734e53a3fb733b7443879efa61.min.js +++ /dev/null @@ -1,2 +0,0 @@ -const svgCopy='',svgCheck='',addCopyButtons=()=>{let e=document.getElementsByClassName("highlight");for(let n=0;n{navigator.clipboard.writeText(o.innerText.replace(/\n\n/g,` -`)).then(()=>{t.blur(),t.innerHTML=svgCheck,setTimeout(()=>{t.innerHTML=svgCopy,t.style.borderColor=""},2e3)},e=>t.innerHTML="Error")});let i=e[n].getElementsByClassName("chroma")[0];e[n].insertBefore(t,i)}} \ No newline at end of file diff --git a/js/code-title.b35124ad8db0ba37162b886afb711cbc.min.js b/js/code-title.b35124ad8db0ba37162b886afb711cbc.min.js deleted file mode 100644 index 1cf740cd5..000000000 --- a/js/code-title.b35124ad8db0ba37162b886afb711cbc.min.js +++ /dev/null @@ -1 +0,0 @@ -function addTitleToCodeBlocks(){for(var t=document.getElementsByClassName("highlight"),e=0;e{e.target.checked?(document.documentElement.setAttribute("saved-theme","dark"),localStorage.setItem("theme","dark"),syntaxTheme.href="https://roadmap.logos.co/styles/_dark_syntax.bec558461529f0dd343a0b008c343934.min.css"):(document.documentElement.setAttribute("saved-theme","light"),localStorage.setItem("theme","light"),syntaxTheme.href="https://roadmap.logos.co/styles/_light_syntax.86a48a52faebeaaf42158b72922b1c90.min.css")};window.addEventListener("DOMContentLoaded",()=>{const e=document.querySelector("#darkmode-toggle");e.addEventListener("change",switchTheme,!1),currentTheme==="dark"&&(e.checked=!0)}) \ No newline at end of file diff --git a/js/graph.abd4bc2af3869a96524d7d23b76152c7.js b/js/graph.abd4bc2af3869a96524d7d23b76152c7.js deleted file mode 100644 index c89877b9a..000000000 --- a/js/graph.abd4bc2af3869a96524d7d23b76152c7.js +++ /dev/null @@ -1,270 +0,0 @@ -async function drawGraph(baseUrl, isHome, pathColors, graphConfig) { - - let { - depth, - enableDrag, - enableLegend, - enableZoom, - opacityScale, - scale, - repelForce, - fontSize} = graphConfig; - - const container = document.getElementById("graph-container") - const { index, links, content } = await fetchData - - // Use .pathname to remove hashes / searchParams / text fragments - const cleanUrl = window.location.origin + window.location.pathname - - const curPage = cleanUrl.replace(/\/$/g, "").replace(baseUrl, "") - - const parseIdsFromLinks = (links) => [ - ...new Set(links.flatMap((link) => [link.source, link.target])), - ] - - // Links is mutated by d3. 
We want to use links later on, so we make a copy and pass that one to d3 - // Note: shallow cloning does not work because it copies over references from the original array - const copyLinks = JSON.parse(JSON.stringify(links)) - - const neighbours = new Set() - const wl = [curPage || "/", "__SENTINEL"] - if (depth >= 0) { - while (depth >= 0 && wl.length > 0) { - // compute neighbours - const cur = wl.shift() - if (cur === "__SENTINEL") { - depth-- - wl.push("__SENTINEL") - } else { - neighbours.add(cur) - const outgoing = index.links[cur] || [] - const incoming = index.backlinks[cur] || [] - wl.push(...outgoing.map((l) => l.target), ...incoming.map((l) => l.source)) - } - } - } else { - parseIdsFromLinks(copyLinks).forEach((id) => neighbours.add(id)) - } - - const data = { - nodes: [...neighbours].map((id) => ({ id })), - links: copyLinks.filter((l) => neighbours.has(l.source) && neighbours.has(l.target)), - } - - const color = (d) => { - if (d.id === curPage || (d.id === "/" && curPage === "")) { - return "var(--g-node-active)" - } - - for (const pathColor of pathColors) { - const path = Object.keys(pathColor)[0] - const colour = pathColor[path] - if (d.id.startsWith(path)) { - return colour - } - } - - return "var(--g-node)" - } - - const drag = (simulation) => { - function dragstarted(event, d) { - if (!event.active) simulation.alphaTarget(1).restart() - d.fx = d.x - d.fy = d.y - } - - function dragged(event, d) { - d.fx = event.x - d.fy = event.y - } - - function dragended(event, d) { - if (!event.active) simulation.alphaTarget(0) - d.fx = null - d.fy = null - } - - const noop = () => {} - return d3 - .drag() - .on("start", enableDrag ? dragstarted : noop) - .on("drag", enableDrag ? dragged : noop) - .on("end", enableDrag ? dragended : noop) - } - - const height = Math.max(container.offsetHeight, isHome ? 
500 : 250) - const width = container.offsetWidth - - const simulation = d3 - .forceSimulation(data.nodes) - .force("charge", d3.forceManyBody().strength(-100 * repelForce)) - .force( - "link", - d3 - .forceLink(data.links) - .id((d) => d.id) - .distance(40), - ) - .force("center", d3.forceCenter()) - - const svg = d3 - .select("#graph-container") - .append("svg") - .attr("width", width) - .attr("height", height) - .attr('viewBox', [-width / 2 * 1 / scale, -height / 2 * 1 / scale, width * 1 / scale, height * 1 / scale]) - - if (enableLegend) { - const legend = [{ Current: "var(--g-node-active)" }, { Note: "var(--g-node)" }, ...pathColors] - legend.forEach((legendEntry, i) => { - const key = Object.keys(legendEntry)[0] - const colour = legendEntry[key] - svg - .append("circle") - .attr("cx", -width / 2 + 20) - .attr("cy", height / 2 - 30 * (i + 1)) - .attr("r", 6) - .style("fill", colour) - svg - .append("text") - .attr("x", -width / 2 + 40) - .attr("y", height / 2 - 30 * (i + 1)) - .text(key) - .style("font-size", "15px") - .attr("alignment-baseline", "middle") - }) - } - - // draw links between nodes - const link = svg - .append("g") - .selectAll("line") - .data(data.links) - .join("line") - .attr("class", "link") - .attr("stroke", "var(--g-link)") - .attr("stroke-width", 2) - .attr("data-source", (d) => d.source.id) - .attr("data-target", (d) => d.target.id) - - // svg groups - const graphNode = svg.append("g").selectAll("g").data(data.nodes).enter().append("g") - - // calculate radius - const nodeRadius = (d) => { - const numOut = index.links[d.id]?.length || 0 - const numIn = index.backlinks[d.id]?.length || 0 - return 2 + Math.sqrt(numOut + numIn) - } - - // draw individual nodes - const node = graphNode - .append("circle") - .attr("class", "node") - .attr("id", (d) => d.id) - .attr("r", nodeRadius) - .attr("fill", color) - .style("cursor", "pointer") - .on("click", (_, d) => { - // SPA navigation - window.Million.navigate(new URL(`${baseUrl}${decodeURI(d.id).replace(/\s+/g, "-")}/`), ".singlePage") - }) - .on("mouseover", function (_, d) { - d3.selectAll(".node").transition().duration(100).attr("fill", "var(--g-node-inactive)") - - const neighbours = parseIdsFromLinks([ - ...(index.links[d.id] || []), - ...(index.backlinks[d.id] || []), - ]) - const neighbourNodes = d3.selectAll(".node").filter((d) => neighbours.includes(d.id)) - const currentId = d.id - window.Million.prefetch(new URL(`${baseUrl}${decodeURI(d.id).replace(/\s+/g, "-")}/`)) - const linkNodes = d3 - .selectAll(".link") - .filter((d) => d.source.id === currentId || d.target.id === currentId) - - // highlight neighbour nodes - neighbourNodes.transition().duration(200).attr("fill", color) - - // highlight links - linkNodes.transition().duration(200).attr("stroke", "var(--g-link-active)") - - const bigFont = fontSize*1.5 - - // show text for self - d3.select(this.parentNode) - .raise() - .select("text") - .transition() - .duration(200) - .attr('opacityOld', d3.select(this.parentNode).select('text').style("opacity")) - .style('opacity', 1) - .style('font-size', bigFont+'em') - .attr('dy', d => nodeRadius(d) + 20 + 'px') // radius is in px - }) - .on("mouseleave", function (_, d) { - d3.selectAll(".node").transition().duration(200).attr("fill", color) - - const currentId = d.id - const linkNodes = d3 - .selectAll(".link") - .filter((d) => d.source.id === currentId || d.target.id === currentId) - - linkNodes.transition().duration(200).attr("stroke", "var(--g-link)") - - d3.select(this.parentNode) - .select("text") - 
.transition() - .duration(200) - .style('opacity', d3.select(this.parentNode).select('text').attr("opacityOld")) - .style('font-size', fontSize+'em') - .attr('dy', d => nodeRadius(d) + 8 + 'px') // radius is in px - }) - .call(drag(simulation)) - - // draw labels - const labels = graphNode - .append("text") - .attr("dx", 0) - .attr("dy", (d) => nodeRadius(d) + 8 + "px") - .attr("text-anchor", "middle") - .text((d) => content[d.id]?.title || d.id.replace("-", " ")) - .style('opacity', (opacityScale - 1) / 3.75) - .style("pointer-events", "none") - .style('font-size', fontSize+'em') - .raise() - .call(drag(simulation)) - - // set panning - - if (enableZoom) { - svg.call( - d3 - .zoom() - .extent([ - [0, 0], - [width, height], - ]) - .scaleExtent([0.25, 4]) - .on("zoom", ({ transform }) => { - link.attr("transform", transform) - node.attr("transform", transform) - const scale = transform.k * opacityScale; - const scaledOpacity = Math.max((scale - 1) / 3.75, 0) - labels.attr("transform", transform).style("opacity", scaledOpacity) - }), - ) - } - - // progress the simulation - simulation.on("tick", () => { - link - .attr("x1", (d) => d.source.x) - .attr("y1", (d) => d.source.y) - .attr("x2", (d) => d.target.x) - .attr("y2", (d) => d.target.y) - node.attr("cx", (d) => d.x).attr("cy", (d) => d.y) - labels.attr("x", (d) => d.x).attr("y", (d) => d.y) - }) -} diff --git a/js/popover.37b1455b8f0603154072b9467132c659.min.js b/js/popover.37b1455b8f0603154072b9467132c659.min.js deleted file mode 100644 index e4a0231b2..000000000 --- a/js/popover.37b1455b8f0603154072b9467132c659.min.js +++ /dev/null @@ -1,9 +0,0 @@ -function htmlToElement(e){const t=document.createElement("template");return e=e.trim(),t.innerHTML=e,t.content.firstChild}function initPopover(e,t,n){const s=e.replace(window.location.origin,"");fetchData.then(({content:e})=>{const o=[...document.getElementsByClassName("internal-link")];o.filter(e=>e.dataset.src||e.dataset.idx&&t).forEach(t=>{var o;if(t.dataset.ctx){const n=e[t.dataset.src],s=`
-

${n.title}

-

${highlight(removeMarkdown(n.content),t.dataset.ctx)}...

-

${new Date(n.lastmodified).toLocaleDateString()}

-
`;o=htmlToElement(s)}else{const n=e[t.dataset.src.replace(/\/$/g,"").replace(s,"")];if(n){const e=`
-

${n.title}

-

${removeMarkdown(n.content).split(" ",20).join(" ")}...

-

${new Date(n.lastmodified).toLocaleDateString()}

-
`;o=htmlToElement(e)}}o&&(t.appendChild(o),n&&renderMathInElement(o,{delimiters:[{left:"$$",right:"$$",display:!1},{left:"$",right:"$",display:!1},{left:"\\(",right:"\\)",display:!1},{left:"\\[",right:"\\]",display:!1}],throwOnError:!1}),t.addEventListener("mouseover",()=>{window.FloatingUIDOM.computePosition(t,o,{middleware:[window.FloatingUIDOM.offset(10),window.FloatingUIDOM.inline(),window.FloatingUIDOM.shift()]}).then(({x:e,y:t})=>{Object.assign(o.style,{left:`${e}px`,top:`${t}px`})}),o.classList.add("visible")}),t.addEventListener("mouseout",()=>{o.classList.remove("visible")}))})})} \ No newline at end of file diff --git a/js/router.9d4974281069e9ebb189f642ae1e3ca2.min.js b/js/router.9d4974281069e9ebb189f642ae1e3ca2.min.js deleted file mode 100644 index 66a4cc2d7..000000000 --- a/js/router.9d4974281069e9ebb189f642ae1e3ca2.min.js +++ /dev/null @@ -1 +0,0 @@ -import{apply,navigate,prefetch,router,}from"https://unpkg.com/million@1.11.5/dist/router.mjs";export const attachSPARouting=(e,t)=>{window.Million={apply,navigate,prefetch,router};const n=()=>requestAnimationFrame(t);window.addEventListener("DOMContentLoaded",()=>{apply(t=>e(t)),e(),router(".singlePage"),n()}),window.addEventListener("million:navigate",n)} \ No newline at end of file diff --git a/js/semantic-search.d4032d4a6a967938235ae76d08a55b46.min.js b/js/semantic-search.d4032d4a6a967938235ae76d08a55b46.min.js deleted file mode 100644 index 46ce737ff..000000000 --- a/js/semantic-search.d4032d4a6a967938235ae76d08a55b46.min.js +++ /dev/null @@ -1 +0,0 @@ -const apiKey="e1ec9cdc-56d2-420e-a5bd-c6019af4be58";async function searchContents(e){const t=await fetch("https://prod.operand.ai/v3/search/objects",{method:"POST",headers:{"Content-Type":"application/json",Authorization:apiKey},body:JSON.stringify({query:e,max:10})});return await t.json()}function debounce(e,t=200){let n;return(...s)=>{clearTimeout(n),n=setTimeout(()=>{e.apply(this,s)},t)}}registerHandlers(debounce(e=>{term=e.target.value,term!==""&&searchContents(term).then(e=>e.results.map(e=>({url:e.object.properties.url,content:e.snippet,title:e.object.metadata.title}))).then(e=>displayResults(e))})) \ No newline at end of file diff --git a/js/util.9825137f5e7825e8553c68ce39ac9e44.min.js b/js/util.9825137f5e7825e8553c68ce39ac9e44.min.js deleted file mode 100644 index f276ed36b..000000000 --- a/js/util.9825137f5e7825e8553c68ce39ac9e44.min.js +++ /dev/null @@ -1,13 +0,0 @@ -const removeMarkdown=(e,t={listUnicodeChar:!1,stripListLeaders:!0,gfm:!0,useImgAltText:!1,preserveLinks:!1})=>{let n=e||"";n=n.replace(/^(-\s*?|\*\s*?|_\s*?){3,}\s*$/gm,"");try{t.stripListLeaders&&(t.listUnicodeChar?n=n.replace(/^([\s\t]*)([*\-+]|\d+\.)\s+/gm,t.listUnicodeChar+" $1"):n=n.replace(/^([\s\t]*)([*\-+]|\d+\.)\s+/gm,"$1")),t.gfm&&(n=n.replace(/\n={2,}/g,` -`).replace(/~{3}.*\n/g,"").replace(/~~/g,"").replace(/`{3}.*\n/g,"")),t.preserveLinks&&(n=n.replace(/\[(.*?)\][[(](.*?)[\])]/g,"$1 ($2)")),n=n.replace(/<[^>]*>/g,"").replace(/^[=-]{2,}\s*$/g,"").replace(/\[\^.+?\](: .*?$)?/g,"").replace(/(#{1,6})\s+(.+)\1?/g,"$2").replace(/\s{0,2}\[.*?\]: .*?$/g,"").replace(/!\[(.*?)\][[(].*?[\])]/g,t.useImgAltText?"$1":"").replace(/\[(.*?)\][[(].*?[\])]/g,"$1").replace(/!?\[\[\S[^[\]|]*(?:\|([^[\]]*))?\S\]\]/g,"$1").replace(/^\s{0,3}>\s?/g,"").replace(/(^|\n)\s{0,3}>\s?/g,` - -`).replace(/^\s{1,2}\[(.*?)\]: (\S+)( ".*?")?\s*$/g,"").replace(/([*_]{1,3})(\S.*?\S{0,1})\1/g,"$2").replace(/([*_]{1,3})(\S.*?\S{0,1})\1/g,"$2").replace(/(`{3,})(.*?)\1/gm,"$2").replace(/`(.+?)`/g,"$1").replace(/\n{2,}/g,` - 
-`).replace(/\[![a-zA-Z]+\][-+]? /g,"")}catch(t){return console.error(t),e}return n},highlight=(e,t)=>{const n=20,o=e.indexOf(t);if(o!==-1){const s=n,i=e.substring(0,o).split(" ").slice(-s),a=e.substring(o+t.length,e.length-2).split(" ").slice(0,s);return(i.length==s?`...${i.join(" ")}`:i.join(" "))+`${t}`+a.join(" ")}const u=t.split(/\s+/).filter(e=>e!==""),s=e.split(/\s+/).filter(e=>e!==""),a=e=>u.some(t=>e.toLowerCase().startsWith(t.toLowerCase())),r=s.map(a);let c=0,l=0;for(let e=0;ee+t,0);t>=c&&(c=t,l=e)}const i=Math.max(l-n,0),d=Math.min(i+2*n,s.length),h=s.slice(i,d).map(e=>a(e)?`${e}`:e).join(" ").replaceAll(' '," ");return`${i===0?"":"..."}${h}${d===s.length?"":"..."}`},resultToHTML=({url:e,title:t,content:n})=>``,redir=(e,t)=>{window.Million.navigate(new URL(`${BASE_URL.replace(/\/$/g,"")}${e}#:~:text=${encodeURIComponent(t)}/`),".singlePage"),closeSearch()};function openSearch(){const t=document.getElementById("search-bar"),n=document.getElementById("results-container"),e=document.getElementById("search-container");e.style.display==="none"||e.style.display===""?(t.value="",n.innerHTML="",e.style.display="block",t.focus()):e.style.display="none"}function closeSearch(){const e=document.getElementById("search-container");e.style.display="none"}const registerHandlers=e=>{const t=document.getElementById("search-bar"),s=document.getElementById("search-container");let o;t.addEventListener("keyup",e=>{if(e.key==="Enter"){const e=document.getElementsByClassName("result-card")[0];redir(e.id,o)}}),t.addEventListener("input",e),document.addEventListener("keydown",e=>{e.key==="k"&&(e.ctrlKey||e.metaKey)&&(e.preventDefault(),openSearch()),e.key==="Escape"&&(e.preventDefault(),closeSearch())});const n=document.getElementById("search-icon");n.addEventListener("click",e=>{openSearch()}),n.addEventListener("keydown",e=>{openSearch()}),s.addEventListener("click",e=>{closeSearch()}),document.getElementById("search-space").addEventListener("click",e=>{e.stopPropagation()})},displayResults=(e,t=!1)=>{const n=document.getElementById("results-container");if(e.length===0)n.innerHTML=``;else{n.innerHTML=e.map(e=>resultToHTML(t?{url:e.url,title:highlight(e.title,term),content:highlight(removeMarkdown(e.content),term)}:e)).join(` -`);const s=[...document.getElementsByClassName("result-card")];s.forEach(e=>{e.onclick=()=>redir(e.id,term)})}} \ No newline at end of file diff --git a/layout.html b/layout.html new file mode 100644 index 000000000..089dcdca8 --- /dev/null +++ b/layout.html @@ -0,0 +1,99 @@ + +Layout

Certain emitters may also output HTML files. To enable easy customization, these emitters allow you to fully rearrange the layout of the page. The default page layouts can be found in quartz.layout.ts.

+

Each page is composed of multiple different sections which contain QuartzComponents. The following code snippet lists all of the valid sections that you can add components to:

+
quartz/cfg.ts
export interface FullPageLayout {
+  head: QuartzComponent // single component
+  header: QuartzComponent[] // laid out horizontally
+  beforeBody: QuartzComponent[] // laid out vertically
+  pageBody: QuartzComponent // single component
+  left: QuartzComponent[] // vertical on desktop, horizontal on mobile
+  right: QuartzComponent[] // vertical on desktop, horizontal on mobile
+  footer: QuartzComponent // single component
+}
+

These correspond to the following parts of the page:

+

+
+
+
+

Note

+ +
+

There are two additional layout fields that are not shown in the above diagram.

+
    +
  1. head is a single component that renders the <head> tag in the HTML. This doesn’t appear visually on the page and is only responsible for metadata about the document like the tab title, scripts, and styles.
  2. +
  3. header is a set of components, laid out horizontally, that appears before the beforeBody section. This enables you to replicate the old Quartz 3 header bar, which held the title, search bar, and dark mode toggle. By default, Quartz 4 doesn’t place any components in the header.
  4. +
+
+
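Putting the pieces above together, a quartz.layout.ts arrangement might look roughly like the following. This is a minimal, illustrative sketch: the component factories (Component.Head(), Component.Search(), Component.Darkmode(), and so on) and the Partial<FullPageLayout> shape are assumptions based on the default Quartz 4 layout, so check quartz.layout.ts and quartz/components in your own copy before relying on them.

quartz.layout.ts (sketch)
import { FullPageLayout } from "./quartz/cfg"
import * as Component from "./quartz/components" // assumed component exports

// Fill each FullPageLayout section with components; arrays may be left empty.
const layout: Partial<FullPageLayout> = {
  head: Component.Head(), // single component that renders the <head> tag
  header: [], // Quartz 4 places nothing in the header by default
  beforeBody: [Component.ArticleTitle(), Component.ContentMeta()],
  left: [Component.PageTitle(), Component.Search(), Component.Darkmode()],
  right: [Component.Graph(), Component.TableOfContents(), Component.Backlinks()],
}

export default layout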

Quartz components, like plugins, can take in additional properties as configuration options. If you’re familiar with React terminology, you can think of them as Higher-order Components.

+
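To make that concrete, here is a small hypothetical sketch of the pattern: a factory function receives configuration options once and returns the function that actually renders on every page. The names (RecentNotes, RecentNotesOptions) are purely illustrative and are not part of the Quartz API.

RecentNotes.ts (hypothetical sketch)
// Options are captured up front; the returned function does the per-page work.
interface RecentNotesOptions {
  title: string
  limit: number
}

const RecentNotes =
  (opts: RecentNotesOptions) =>
  (props: { pageTitles: string[] }): string => {
    const items = props.pageTitles
      .slice(0, opts.limit)
      .map((t) => `<li>${t}</li>`)
      .join("")
    return `<section><h3>${opts.title}</h3><ul>${items}</ul></section>`
  }

// Configured once (as you would in quartz.layout.ts), rendered per page.
const recent = RecentNotes({ title: "Recent notes", limit: 3 })
console.log(recent({ pageTitles: ["Layout", "Philosophy of Quartz", "Migrating from Quartz 3"] }))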

See the list of all the components for every available component along with its configuration options. You can also check out the guide on creating components if you’re interested in further customizing the behaviour of Quartz.

+

Style

+

Most meaningful style changes like colour scheme and font can be done simply through the general configuration options. However, if you’d like to make more involved style changes, you can do this by writing your own styles. Quartz 4, like Quartz 3, uses Sass for styling.

+

You can see the base style sheet in quartz/styles/base.scss and write your own in quartz/styles/custom.scss.

+
+
+
+

Note

+ +
+

Some components may provide their own styling as well! For example, quartz/components/Darkmode.tsx imports styles from quartz/components/styles/darkmode.scss. If you’d like to customize styling for a specific component, double check the component definition to see how its styles are defined.

+
\ No newline at end of file diff --git a/linkmap b/linkmap deleted file mode 100644 index fd0666ff7..000000000 --- a/linkmap +++ /dev/null @@ -1,63 +0,0 @@ -/roadmap/vac/milestones-overview/index.{html} /roadmap/vac/milestones-overview/ -/private/notes/hosting/index.{html} /private/notes/hosting/ -/private/roadmap/networking/overview/index.{html} /private/roadmap/networking/overview/ -/roadmap/acid/updates/2023-08-02/index.{html} /roadmap/acid/updates/2023-08-02/ -/roadmap/nomos/milestones-overview/index.{html} /roadmap/nomos/milestones-overview/ -/private/notes/philosophy/index.{html} /private/notes/philosophy/ -/roadmap/codex/updates/2023-07-21/index.{html} /roadmap/codex/updates/2023-07-21/ -/roadmap/innovation_lab/updates/2023-08-02/index.{html} /roadmap/innovation_lab/updates/2023-08-02/ -/roadmap/waku/milestones-overview/index.{html} /roadmap/waku/milestones-overview/ -/index.html / -/roadmap/nomos/updates/2023-08-07/index.{html} /roadmap/nomos/updates/2023-08-07/ -/roadmap/vac/updates/2023-08-07/index.{html} /roadmap/vac/updates/2023-08-07/ -/roadmap/acid/milestones-overview/index.{html} /roadmap/acid/milestones-overview/ -/roadmap/nomos/updates/2023-07-31/index.{html} /roadmap/nomos/updates/2023-07-31/ -/roadmap/vac/updates/2023-08-14/index.{html} /roadmap/vac/updates/2023-08-14/ -/roadmap/waku/updates/2023-07-31/index.{html} /roadmap/waku/updates/2023-07-31/ -/roadmap/codex/updates/2023-08-01/index.{html} /roadmap/codex/updates/2023-08-01/ -/private/notes/search/index.{html} /private/notes/search/ -/private/roadmap/consensus/candidates/claro/index.{html} /private/roadmap/consensus/candidates/claro/ -/private/roadmap/consensus/development/overview/index.{html} /private/roadmap/consensus/development/overview/ -/private/roadmap/consensus/development/prototypes/index.{html} /private/roadmap/consensus/development/prototypes/ -/roadmap/nomos/updates/2023-07-24/index.{html} /roadmap/nomos/updates/2023-07-24/ -/private/notes/custom-Domain/index.{html} /private/notes/custom-Domain/ -/private/notes/preview-changes/index.{html} /private/notes/preview-changes/ -/private/requirements/overview/index.{html} /private/requirements/overview/ -/private/roles/distributed-systems-researcher/index.{html} /private/roles/distributed-systems-researcher/ -/private/notes/callouts/index.{html} /private/notes/callouts/ -/private/notes/obsidian/index.{html} /private/notes/obsidian/ -/roadmap/innovation_lab/updates/2023-07-12/index.{html} /roadmap/innovation_lab/updates/2023-07-12/ -/roadmap/nomos/updates/2023-08-14/index.{html} /roadmap/nomos/updates/2023-08-14/ -/private/roadmap/consensus/candidates/carnot/overview/index.{html} /private/roadmap/consensus/candidates/carnot/overview/ -/private/roadmap/networking/status-network-agents/index.{html} /private/roadmap/networking/status-network-agents/ -/private/roadmap/virtual-machines/overview/index.{html} /private/roadmap/virtual-machines/overview/ -/private/notes/setup/index.{html} /private/notes/setup/ -/private/roles/zero-knowledge-research-engineer/index.{html} /private/roles/zero-knowledge-research-engineer/ -/roadmap/vac/updates/2023-08-21/index.{html} /roadmap/vac/updates/2023-08-21/ -/roadmap/waku/updates/2023-08-06/index.{html} /roadmap/waku/updates/2023-08-06/ -/private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95/index.{html} /private/notes/CJK-+-Latex-Support-%E6%B5%8B%E8%AF%95/ -/private/roadmap/consensus/theory/snow-family/index.{html} /private/roadmap/consensus/theory/snow-family/ -/private/roles/rust-developer/index.{html} 
/private/roles/rust-developer/ -/roadmap/codex/milestones-overview/index.{html} /roadmap/codex/milestones-overview/ -/private/notes/ignore-notes/index.{html} /private/notes/ignore-notes/ -/private/roadmap/networking/status-waku-kurtosis/index.{html} /private/roadmap/networking/status-waku-kurtosis/ -/roadmap/waku/milestone-waku-10-users/index.{html} /roadmap/waku/milestone-waku-10-users/ -/roadmap/innovation_lab/updates/2023-08-11/index.{html} /roadmap/innovation_lab/updates/2023-08-11/ -/roadmap/vac/updates/2023-07-24/index.{html} /roadmap/vac/updates/2023-07-24/ -/roadmap/waku/updates/2023-08-14/index.{html} /roadmap/waku/updates/2023-08-14/ -/private/roadmap/consensus/overview/index.{html} /private/roadmap/consensus/overview/ -/private/roadmap/consensus/theory/overview/index.{html} /private/roadmap/consensus/theory/overview/ -/private/roadmap/networking/carnot-waku-specification/index.{html} /private/roadmap/networking/carnot-waku-specification/ -/roadmap/innovation_lab/milestones-overview/index.{html} /roadmap/innovation_lab/milestones-overview/ -/roadmap/codex/updates/2023-08-11/index.{html} /roadmap/codex/updates/2023-08-11/ -/private/notes/showcase/index.{html} /private/notes/showcase/ -/private/notes/troubleshooting/index.{html} /private/notes/troubleshooting/ -/private/notes/updating/index.{html} /private/notes/updating/ -/roadmap/acid/updates/2023-08-09/index.{html} /roadmap/acid/updates/2023-08-09/ -/private/notes/config/index.{html} /private/notes/config/ -/private/notes/editing/index.{html} /private/notes/editing/ -/roadmap/vac/updates/2023-07-31/index.{html} /roadmap/vac/updates/2023-07-31/ -/roadmap/waku/updates/2023-07-24/index.{html} /roadmap/waku/updates/2023-07-24/ -/private/roadmap/consensus/candidates/carnot/FAQ/index.{html} /private/roadmap/consensus/candidates/carnot/FAQ/ -/roadmap/vac/updates/2023-07-10/index.{html} /roadmap/vac/updates/2023-07-10/ -/roadmap/vac/updates/2023-07-17/index.{html} /roadmap/vac/updates/2023-07-17/ diff --git a/migrating-from-Quartz-3.html b/migrating-from-Quartz-3.html new file mode 100644 index 000000000..db496e0a2 --- /dev/null +++ b/migrating-from-Quartz-3.html @@ -0,0 +1,98 @@ + +Migrating from Quartz 3

As you already have Quartz locally, you don’t need to fork or clone it again. Simply check out the alpha branch (v4), install the dependencies, and import your old vault.

+
git fetch
+git checkout v4
+git pull upstream v4
+npm i
+npx quartz create
+

If you get an error like fatal: 'upstream' does not appear to be a git repository, make sure you add upstream as a remote:

+
git remote add upstream https://github.com/jackyzha0/quartz.git
+

When running npx quartz create, you will be prompted as to how to initialize your content folder. Here, you can choose to import or link your previous content folder and Quartz should work just as you expect it to.

+
+
+
+

Note

+ +
+

If the existing content folder you’d like to use is at the same path on a different branch, clone the repo again somewhere at a different path in order to use it.

+
+

Key changes

+
    +
  1. Removing Hugo and hugo-obsidian: Hugo worked well for earlier versions of Quartz, but it also made it hard for people outside of the Golang and Hugo communities to fully understand what Quartz was doing under the hood and to properly customize it to their needs. Quartz 4 now uses a Node-based static-site generation process, which should lead to much more helpful error messages and an overall smoother user experience.
  2. +
  3. Full hot-reload: The many rough edges of how hugo-obsidian integrated with Hugo meant that watch mode didn’t re-trigger hugo-obsidian to update the content index. This led to a lot of weird cases where the watch mode output wasn’t accurate. Quartz 4 now uses a cohesive parse, filter, and emit pipeline which gets run on every change, so hot-reloads are always accurate (see the sketch after this list).
  4. +
  5. Replacing Go template syntax with JSX: Quartz 3 used Go templates to create layouts for pages. However, the syntax isn’t great for doing any sort of complex rendering (like text processing), and it became very difficult to make any meaningful layout changes to Quartz 3. Quartz 4 uses an extension of JavaScript syntax called JSX, which allows you to write layout code that looks like HTML directly in JavaScript and is significantly easier to understand and maintain.
  6. +
  7. A new extensible configuration and plugin system: Quartz 3 was hard to configure without technical knowledge of how Hugo’s partials worked, and extensions were even harder to make. Quartz 4’s configuration and plugin system is designed to be extended by users while making it easy to update to new versions of Quartz.
  8. +
+
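The parse, filter, and emit stages mentioned above map directly onto the plugin lists in the Quartz 4 config. The snippet below is only an illustrative sketch of that shape; the actual plugin names and defaults live in quartz.config.ts, so treat everything here as an assumption to verify against your own copy.

quartz.config.ts (sketch)
// Illustrative only: the parse → filter → emit pipeline as three plugin lists.
const config = {
  plugins: {
    transformers: [
      // parse: turn Markdown into HTML, resolve wikilinks, highlight code, ...
    ],
    filters: [
      // filter: decide what gets published, e.g. drop pages marked draft
    ],
    emitters: [
      // emit: write content pages, folder and tag listings, the search index, ...
    ],
  },
}

export default config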

Things to update

+
    +
  • You will need to update your deploy scripts. See the hosting guide for more details.
  • +
  • Ensure that your default branch on GitHub is updated from hugo to v4.
  • +
  • Folder and tag listings have also changed. +
      +
    • Folder descriptions should go under content/<folder-name>/index.md where <folder-name> is the name of the folder.
    • +
    • Tag descriptions should go under content/tags/<tag-name>.md where <tag-name> is the name of the tag.
    • +
    +
  • +
  • Some HTML layout may not be the same between Quartz 3 and Quartz 4. If you depended on a particular HTML hierarchy or class names, you may need to update your custom CSS to reflect these changes.
  • +
  • If you customized the layout of Quartz 3, you may need to translate these changes from Go templates to JSX, as Quartz 4 no longer uses Hugo. For components, check out the guide on creating components for more details on this.
  • +
\ No newline at end of file diff --git a/philosophy.html b/philosophy.html new file mode 100644 index 000000000..65babee96 --- /dev/null +++ b/philosophy.html @@ -0,0 +1,78 @@ + +Philosophy of Quartz

A garden should be a true hypertext

+
+

The garden is the web as topology. Every walk through the garden creates new paths, new meanings, and when we add things to the garden we add them in a way that allows many future, unpredicted relationships.

+

(The Garden and the Stream)

+
+

The problem with the file cabinet is that it focuses on efficiency of access and interoperability rather than generativity and creativity. Thinking is not linear, nor is it hierarchical. In fact, not many things are linear or hierarchical at all. Then why is it that most tools and thinking strategies assume a nice chronological or hierarchical order for my thought processes? The ideal tool for thought for me would embrace the messiness of my mind, and organically help insights emerge from chaos instead of forcing an artificial order. A rhizomatic, not arborescent, form of note taking.

+

My goal with a digital garden is not purely to have an organizing system and information store (though it works nicely for that). I want my digital garden to be a playground for new ways ideas can connect together. As a result, existing formal organizing systems like Zettelkasten or the hierarchical folder structures of Notion don’t work well for me. There is so much upfront friction that by the time I’ve thought about how to organize my thoughts into folders and categories, I’ve lost them.

+

Quartz embraces the inherent rhizomatic and web-like nature of our thinking and tries to encourage note-taking in a similar form.

+
+

A garden should be shared

+

The goal of digital gardening should be to tap into your network’s collective intelligence to create constructive feedback loops. If done well, I have a shareable representation of my thoughts that I can send out into the world and people can respond. Even for my most half-baked thoughts, this helps me create a feedback cycle to strengthen and fully flesh out that idea.

+

Quartz is designed first and foremost as a tool for publishing digital gardens to the web. To me, digital gardening is not just passive knowledge collection. It’s a form of expression and sharing.

+
+

“[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.” +— Richard Hamming

+
+

The goal of Quartz is to make sharing your digital garden free and simple. At its core, Quartz is designed to be easy enough to use that non-technical people can get going, but also powerful enough that senior developers can tweak it to work how they’d like it to work.

\ No newline at end of file diff --git a/postscript.js b/postscript.js new file mode 100644 index 000000000..f8ffaa4cb --- /dev/null +++ b/postscript.js @@ -0,0 +1,6282 @@ +(function () {// quartz/components/scripts/quartz/components/scripts/clipboard.inline.ts +var svgCopy = ''; +var svgCheck = ''; +document.addEventListener("nav", () => { + const els = document.getElementsByTagName("pre"); + for (let i = 0; i < els.length; i++) { + const codeBlock = els[i].getElementsByTagName("code")[0]; + if (codeBlock) { + const source = codeBlock.innerText.replace(/\n\n/g, "\n"); + const button = document.createElement("button"); + button.className = "clipboard-button"; + button.type = "button"; + button.innerHTML = svgCopy; + button.ariaLabel = "Copy source"; + button.addEventListener("click", () => { + navigator.clipboard.writeText(source).then( + () => { + button.blur(); + button.innerHTML = svgCheck; + setTimeout(() => { + button.innerHTML = svgCopy; + button.style.borderColor = ""; + }, 2e3); + }, + (error) => console.error(error) + ); + }); + els[i].prepend(button); + } + } +}); +})(); +(function () {var __create = Object.create; +var __defProp = Object.defineProperty; +var __getOwnPropDesc = Object.getOwnPropertyDescriptor; +var __getOwnPropNames = Object.getOwnPropertyNames; +var __getProtoOf = Object.getPrototypeOf; +var __hasOwnProp = Object.prototype.hasOwnProperty; +var __commonJS = (cb, mod) => function __require() { + return mod || (0, cb[__getOwnPropNames(cb)[0]])((mod = { exports: {} }).exports, mod), mod.exports; +}; +var __copyProps = (to, from, except, desc) => { + if (from && typeof from === "object" || typeof from === "function") { + for (let key of __getOwnPropNames(from)) + if (!__hasOwnProp.call(to, key) && key !== except) + __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable }); + } + return to; +}; +var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps( + // If the importer is in node compatibility mode or this is not an ESM + // file that has been converted to a CommonJS file using a Babel- + // compatible transform (i.e. "__esModule" has not been set), then set + // "default" to the CommonJS "module.exports" for node compatibility. + isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", { value: mod, enumerable: true }) : target, + mod +)); + +// node_modules/flexsearch/dist/flexsearch.bundle.js +var require_flexsearch_bundle = __commonJS({ + "node_modules/flexsearch/dist/flexsearch.bundle.js"(exports, module) { + (function _f(self) { + "use strict"; + try { + if (module) + self = module; + } catch (e) { + } + self._factory = _f; + var t; + function u(a2) { + return "undefined" !== typeof a2 ? 
a2 : true; + } + function aa(a2) { + const b2 = Array(a2); + for (let c2 = 0; c2 < a2; c2++) + b2[c2] = v(); + return b2; + } + function v() { + return /* @__PURE__ */ Object.create(null); + } + function ba(a2, b2) { + return b2.length - a2.length; + } + function x(a2) { + return "string" === typeof a2; + } + function C(a2) { + return "object" === typeof a2; + } + function D(a2) { + return "function" === typeof a2; + } + ; + function ca(a2, b2) { + var c2 = da; + if (a2 && (b2 && (a2 = E(a2, b2)), this.H && (a2 = E(a2, this.H)), this.J && 1 < a2.length && (a2 = E(a2, this.J)), c2 || "" === c2)) { + a2 = a2.split(c2); + if (this.filter) { + b2 = this.filter; + c2 = a2.length; + const d2 = []; + for (let e = 0, f = 0; e < c2; e++) { + const g = a2[e]; + g && !b2[g] && (d2[f++] = g); + } + a2 = d2; + } + return a2; + } + return a2; + } + const da = /[\p{Z}\p{S}\p{P}\p{C}]+/u, ea = /[\u0300-\u036f]/g; + function fa(a2, b2) { + const c2 = Object.keys(a2), d2 = c2.length, e = []; + let f = "", g = 0; + for (let h = 0, k, m; h < d2; h++) + k = c2[h], (m = a2[k]) ? (e[g++] = F(b2 ? "(?!\\b)" + k + "(\\b|_)" : k), e[g++] = m) : f += (f ? "|" : "") + k; + f && (e[g++] = F(b2 ? "(?!\\b)(" + f + ")(\\b|_)" : "(" + f + ")"), e[g] = ""); + return e; + } + function E(a2, b2) { + for (let c2 = 0, d2 = b2.length; c2 < d2 && (a2 = a2.replace(b2[c2], b2[c2 + 1]), a2); c2 += 2) + ; + return a2; + } + function F(a2) { + return new RegExp(a2, "g"); + } + function ha(a2) { + let b2 = "", c2 = ""; + for (let d2 = 0, e = a2.length, f; d2 < e; d2++) + (f = a2[d2]) !== c2 && (b2 += c2 = f); + return b2; + } + ; + var ja = { encode: ia, F: false, G: "" }; + function ia(a2) { + return ca.call(this, ("" + a2).toLowerCase(), false); + } + ; + const ka = {}, G = {}; + function la(a2) { + I(a2, "add"); + I(a2, "append"); + I(a2, "search"); + I(a2, "update"); + I(a2, "remove"); + } + function I(a2, b2) { + a2[b2 + "Async"] = function() { + const c2 = this, d2 = arguments; + var e = d2[d2.length - 1]; + let f; + D(e) && (f = e, delete d2[d2.length - 1]); + e = new Promise(function(g) { + setTimeout(function() { + c2.async = true; + const h = c2[b2].apply(c2, d2); + c2.async = false; + g(h); + }); + }); + return f ? (e.then(f), this) : e; + }; + } + ; + function ma(a2, b2, c2, d2) { + const e = a2.length; + let f = [], g, h, k = 0; + d2 && (d2 = []); + for (let m = e - 1; 0 <= m; m--) { + const n = a2[m], w = n.length, q = v(); + let r = !g; + for (let l = 0; l < w; l++) { + const p = n[l], z = p.length; + if (z) + for (let B = 0, A, y; B < z; B++) + if (y = p[B], g) { + if (g[y]) { + if (!m) { + if (c2) + c2--; + else if (f[k++] = y, k === b2) + return f; + } + if (m || d2) + q[y] = 1; + r = true; + } + if (d2 && (h[y] = (A = h[y]) ? 
++A : A = 1, A < e)) { + const H = d2[A - 2] || (d2[A - 2] = []); + H[H.length] = y; + } + } else + q[y] = 1; + } + if (d2) + g || (h = q); + else if (!r) + return []; + g = q; + } + if (d2) + for (let m = d2.length - 1, n, w; 0 <= m; m--) { + n = d2[m]; + w = n.length; + for (let q = 0, r; q < w; q++) + if (r = n[q], !g[r]) { + if (c2) + c2--; + else if (f[k++] = r, k === b2) + return f; + g[r] = 1; + } + } + return f; + } + function na(a2, b2) { + const c2 = v(), d2 = v(), e = []; + for (let f = 0; f < a2.length; f++) + c2[a2[f]] = 1; + for (let f = 0, g; f < b2.length; f++) { + g = b2[f]; + for (let h = 0, k; h < g.length; h++) + k = g[h], c2[k] && !d2[k] && (d2[k] = 1, e[e.length] = k); + } + return e; + } + ; + function J(a2) { + this.l = true !== a2 && a2; + this.cache = v(); + this.h = []; + } + function oa(a2, b2, c2) { + C(a2) && (a2 = a2.query); + let d2 = this.cache.get(a2); + d2 || (d2 = this.search(a2, b2, c2), this.cache.set(a2, d2)); + return d2; + } + J.prototype.set = function(a2, b2) { + if (!this.cache[a2]) { + var c2 = this.h.length; + c2 === this.l ? delete this.cache[this.h[c2 - 1]] : c2++; + for (--c2; 0 < c2; c2--) + this.h[c2] = this.h[c2 - 1]; + this.h[0] = a2; + } + this.cache[a2] = b2; + }; + J.prototype.get = function(a2) { + const b2 = this.cache[a2]; + if (this.l && b2 && (a2 = this.h.indexOf(a2))) { + const c2 = this.h[a2 - 1]; + this.h[a2 - 1] = this.h[a2]; + this.h[a2] = c2; + } + return b2; + }; + const qa = { memory: { charset: "latin:extra", D: 3, B: 4, m: false }, performance: { D: 3, B: 3, s: false, context: { depth: 2, D: 1 } }, match: { charset: "latin:extra", G: "reverse" }, score: { charset: "latin:advanced", D: 20, B: 3, context: { depth: 3, D: 9 } }, "default": {} }; + function ra(a2, b2, c2, d2, e, f) { + setTimeout(function() { + const g = a2(c2, JSON.stringify(f)); + g && g.then ? g.then(function() { + b2.export(a2, b2, c2, d2, e + 1); + }) : b2.export(a2, b2, c2, d2, e + 1); + }); + } + ; + function K(a2, b2) { + if (!(this instanceof K)) + return new K(a2); + var c2; + if (a2) { + x(a2) ? a2 = qa[a2] : (c2 = a2.preset) && (a2 = Object.assign({}, c2[c2], a2)); + c2 = a2.charset; + var d2 = a2.lang; + x(c2) && (-1 === c2.indexOf(":") && (c2 += ":default"), c2 = G[c2]); + x(d2) && (d2 = ka[d2]); + } else + a2 = {}; + let e, f, g = a2.context || {}; + this.encode = a2.encode || c2 && c2.encode || ia; + this.register = b2 || v(); + this.D = e = a2.resolution || 9; + this.G = b2 = c2 && c2.G || a2.tokenize || "strict"; + this.depth = "strict" === b2 && g.depth; + this.l = u(g.bidirectional); + this.s = f = u(a2.optimize); + this.m = u(a2.fastupdate); + this.B = a2.minlength || 1; + this.C = a2.boost; + this.map = f ? aa(e) : v(); + this.A = e = g.resolution || 1; + this.h = f ? aa(e) : v(); + this.F = c2 && c2.F || a2.rtl; + this.H = (b2 = a2.matcher || d2 && d2.H) && fa(b2, false); + this.J = (b2 = a2.stemmer || d2 && d2.J) && fa(b2, true); + if (c2 = b2 = a2.filter || d2 && d2.filter) { + c2 = b2; + d2 = v(); + for (let h = 0, k = c2.length; h < k; h++) + d2[c2[h]] = 1; + c2 = d2; + } + this.filter = c2; + this.cache = (b2 = a2.cache) && new J(b2); + } + t = K.prototype; + t.append = function(a2, b2) { + return this.add(a2, b2, true); + }; + t.add = function(a2, b2, c2, d2) { + if (b2 && (a2 || 0 === a2)) { + if (!d2 && !c2 && this.register[a2]) + return this.update(a2, b2); + b2 = this.encode(b2); + if (d2 = b2.length) { + const m = v(), n = v(), w = this.depth, q = this.D; + for (let r = 0; r < d2; r++) { + let l = b2[this.F ? 
d2 - 1 - r : r]; + var e = l.length; + if (l && e >= this.B && (w || !n[l])) { + var f = L(q, d2, r), g = ""; + switch (this.G) { + case "full": + if (3 < e) { + for (f = 0; f < e; f++) + for (var h = e; h > f; h--) + if (h - f >= this.B) { + var k = L(q, d2, r, e, f); + g = l.substring(f, h); + M(this, n, g, k, a2, c2); + } + break; + } + case "reverse": + if (2 < e) { + for (h = e - 1; 0 < h; h--) + g = l[h] + g, g.length >= this.B && M( + this, + n, + g, + L(q, d2, r, e, h), + a2, + c2 + ); + g = ""; + } + case "forward": + if (1 < e) { + for (h = 0; h < e; h++) + g += l[h], g.length >= this.B && M(this, n, g, f, a2, c2); + break; + } + default: + if (this.C && (f = Math.min(f / this.C(b2, l, r) | 0, q - 1)), M(this, n, l, f, a2, c2), w && 1 < d2 && r < d2 - 1) { + for (e = v(), g = this.A, f = l, h = Math.min(w + 1, d2 - r), e[f] = 1, k = 1; k < h; k++) + if ((l = b2[this.F ? d2 - 1 - r - k : r + k]) && l.length >= this.B && !e[l]) { + e[l] = 1; + const p = this.l && l > f; + M(this, m, p ? f : l, L(g + (d2 / 2 > g ? 0 : 1), d2, r, h - 1, k - 1), a2, c2, p ? l : f); + } + } + } + } + } + this.m || (this.register[a2] = 1); + } + } + return this; + }; + function L(a2, b2, c2, d2, e) { + return c2 && 1 < a2 ? b2 + (d2 || 0) <= a2 ? c2 + (e || 0) : (a2 - 1) / (b2 + (d2 || 0)) * (c2 + (e || 0)) + 1 | 0 : 0; + } + function M(a2, b2, c2, d2, e, f, g) { + let h = g ? a2.h : a2.map; + if (!b2[c2] || g && !b2[c2][g]) + a2.s && (h = h[d2]), g ? (b2 = b2[c2] || (b2[c2] = v()), b2[g] = 1, h = h[g] || (h[g] = v())) : b2[c2] = 1, h = h[c2] || (h[c2] = []), a2.s || (h = h[d2] || (h[d2] = [])), f && -1 !== h.indexOf(e) || (h[h.length] = e, a2.m && (a2 = a2.register[e] || (a2.register[e] = []), a2[a2.length] = h)); + } + t.search = function(a2, b2, c2) { + c2 || (!b2 && C(a2) ? (c2 = a2, a2 = c2.query) : C(b2) && (c2 = b2)); + let d2 = [], e; + let f, g = 0; + if (c2) { + b2 = c2.limit; + g = c2.offset || 0; + var h = c2.context; + f = c2.suggest; + } + if (a2 && (a2 = this.encode(a2), e = a2.length, 1 < e)) { + c2 = v(); + var k = []; + for (let n = 0, w = 0, q; n < e; n++) + if ((q = a2[n]) && q.length >= this.B && !c2[q]) + if (this.s || f || this.map[q]) + k[w++] = q, c2[q] = 1; + else + return d2; + a2 = k; + e = a2.length; + } + if (!e) + return d2; + b2 || (b2 = 100); + h = this.depth && 1 < e && false !== h; + c2 = 0; + let m; + h ? (m = a2[0], c2 = 1) : 1 < e && a2.sort(ba); + for (let n, w; c2 < e; c2++) { + w = a2[c2]; + h ? (n = sa(this, d2, f, b2, g, 2 === e, w, m), f && false === n && d2.length || (m = w)) : n = sa(this, d2, f, b2, g, 1 === e, w); + if (n) + return n; + if (f && c2 === e - 1) { + k = d2.length; + if (!k) { + if (h) { + h = 0; + c2 = -1; + continue; + } + return d2; + } + if (1 === k) + return ta(d2[0], b2, g); + } + } + return ma(d2, b2, g, f); + }; + function sa(a2, b2, c2, d2, e, f, g, h) { + let k = [], m = h ? a2.h : a2.map; + a2.s || (m = ua(m, g, h, a2.l)); + if (m) { + let n = 0; + const w = Math.min(m.length, h ? a2.A : a2.D); + for (let q = 0, r = 0, l, p; q < w; q++) + if (l = m[q]) { + if (a2.s && (l = ua(l, g, h, a2.l)), e && l && f && (p = l.length, p <= e ? (e -= p, l = null) : (l = l.slice(e), e = 0)), l && (k[n++] = l, f && (r += l.length, r >= d2))) + break; + } + if (n) { + if (f) + return ta(k, d2, 0); + b2[b2.length] = k; + return; + } + } + return !c2 && k; + } + function ta(a2, b2, c2) { + a2 = 1 === a2.length ? a2[0] : [].concat.apply([], a2); + return c2 || a2.length > b2 ? a2.slice(c2, c2 + b2) : a2; + } + function ua(a2, b2, c2, d2) { + c2 ? 
(d2 = d2 && b2 > c2, a2 = (a2 = a2[d2 ? b2 : c2]) && a2[d2 ? c2 : b2]) : a2 = a2[b2]; + return a2; + } + t.contain = function(a2) { + return !!this.register[a2]; + }; + t.update = function(a2, b2) { + return this.remove(a2).add(a2, b2); + }; + t.remove = function(a2, b2) { + const c2 = this.register[a2]; + if (c2) { + if (this.m) + for (let d2 = 0, e; d2 < c2.length; d2++) + e = c2[d2], e.splice(e.indexOf(a2), 1); + else + N(this.map, a2, this.D, this.s), this.depth && N(this.h, a2, this.A, this.s); + b2 || delete this.register[a2]; + if (this.cache) { + b2 = this.cache; + for (let d2 = 0, e, f; d2 < b2.h.length; d2++) + f = b2.h[d2], e = b2.cache[f], -1 !== e.indexOf(a2) && (b2.h.splice(d2--, 1), delete b2.cache[f]); + } + } + return this; + }; + function N(a2, b2, c2, d2, e) { + let f = 0; + if (a2.constructor === Array) + if (e) + b2 = a2.indexOf(b2), -1 !== b2 ? 1 < a2.length && (a2.splice(b2, 1), f++) : f++; + else { + e = Math.min(a2.length, c2); + for (let g = 0, h; g < e; g++) + if (h = a2[g]) + f = N(h, b2, c2, d2, e), d2 || f || delete a2[g]; + } + else + for (let g in a2) + (f = N(a2[g], b2, c2, d2, e)) || delete a2[g]; + return f; + } + t.searchCache = oa; + t.export = function(a2, b2, c2, d2, e) { + let f, g; + switch (e || (e = 0)) { + case 0: + f = "reg"; + if (this.m) { + g = v(); + for (let h in this.register) + g[h] = 1; + } else + g = this.register; + break; + case 1: + f = "cfg"; + g = { doc: 0, opt: this.s ? 1 : 0 }; + break; + case 2: + f = "map"; + g = this.map; + break; + case 3: + f = "ctx"; + g = this.h; + break; + default: + return; + } + ra(a2, b2 || this, c2 ? c2 + "." + f : f, d2, e, g); + return true; + }; + t.import = function(a2, b2) { + if (b2) + switch (x(b2) && (b2 = JSON.parse(b2)), a2) { + case "cfg": + this.s = !!b2.opt; + break; + case "reg": + this.m = false; + this.register = b2; + break; + case "map": + this.map = b2; + break; + case "ctx": + this.h = b2; + } + }; + la(K.prototype); + function va(a2) { + a2 = a2.data; + var b2 = self._index; + const c2 = a2.args; + var d2 = a2.task; + switch (d2) { + case "init": + d2 = a2.options || {}; + a2 = a2.factory; + b2 = d2.encode; + d2.cache = false; + b2 && 0 === b2.indexOf("function") && (d2.encode = Function("return " + b2)()); + a2 ? (Function("return " + a2)()(self), self._index = new self.FlexSearch.Index(d2), delete self.FlexSearch) : self._index = new K(d2); + break; + default: + a2 = a2.id, b2 = b2[d2].apply(b2, c2), postMessage("search" === d2 ? { id: a2, msg: b2 } : { id: a2 }); + } + } + ; + let wa = 0; + function O(a2) { + if (!(this instanceof O)) + return new O(a2); + var b2; + a2 ? 
D(b2 = a2.encode) && (a2.encode = b2.toString()) : a2 = {}; + (b2 = (self || window)._factory) && (b2 = b2.toString()); + const c2 = self.exports, d2 = this; + this.o = xa(b2, c2, a2.worker); + this.h = v(); + if (this.o) { + if (c2) + this.o.on("message", function(e) { + d2.h[e.id](e.msg); + delete d2.h[e.id]; + }); + else + this.o.onmessage = function(e) { + e = e.data; + d2.h[e.id](e.msg); + delete d2.h[e.id]; + }; + this.o.postMessage({ task: "init", factory: b2, options: a2 }); + } + } + P("add"); + P("append"); + P("search"); + P("update"); + P("remove"); + function P(a2) { + O.prototype[a2] = O.prototype[a2 + "Async"] = function() { + const b2 = this, c2 = [].slice.call(arguments); + var d2 = c2[c2.length - 1]; + let e; + D(d2) && (e = d2, c2.splice(c2.length - 1, 1)); + d2 = new Promise(function(f) { + setTimeout(function() { + b2.h[++wa] = f; + b2.o.postMessage({ task: a2, id: wa, args: c2 }); + }); + }); + return e ? (d2.then(e), this) : d2; + }; + } + function xa(a, b, c) { + let d; + try { + d = b ? eval('new (require("worker_threads")["Worker"])("../dist/node/node.js")') : a ? new Worker(URL.createObjectURL(new Blob(["onmessage=" + va.toString()], { type: "text/javascript" }))) : new Worker(x(c) ? c : "worker/worker.js", { type: "module" }); + } catch (e) { + } + return d; + } + ; + function Q(a2) { + if (!(this instanceof Q)) + return new Q(a2); + var b2 = a2.document || a2.doc || a2, c2; + this.K = []; + this.h = []; + this.A = []; + this.register = v(); + this.key = (c2 = b2.key || b2.id) && S(c2, this.A) || "id"; + this.m = u(a2.fastupdate); + this.C = (c2 = b2.store) && true !== c2 && []; + this.store = c2 && v(); + this.I = (c2 = b2.tag) && S(c2, this.A); + this.l = c2 && v(); + this.cache = (c2 = a2.cache) && new J(c2); + a2.cache = false; + this.o = a2.worker; + this.async = false; + c2 = v(); + let d2 = b2.index || b2.field || b2; + x(d2) && (d2 = [d2]); + for (let e = 0, f, g; e < d2.length; e++) + f = d2[e], x(f) || (g = f, f = f.field), g = C(g) ? Object.assign({}, a2, g) : a2, this.o && (c2[f] = new O(g), c2[f].o || (this.o = false)), this.o || (c2[f] = new K(g, this.register)), this.K[e] = S(f, this.A), this.h[e] = f; + if (this.C) + for (a2 = b2.store, x(a2) && (a2 = [a2]), b2 = 0; b2 < a2.length; b2++) + this.C[b2] = S(a2[b2], this.A); + this.index = c2; + } + function S(a2, b2) { + const c2 = a2.split(":"); + let d2 = 0; + for (let e = 0; e < c2.length; e++) + a2 = c2[e], 0 <= a2.indexOf("[]") && (a2 = a2.substring(0, a2.length - 2)) && (b2[d2] = true), a2 && (c2[d2++] = a2); + d2 < c2.length && (c2.length = d2); + return 1 < d2 ? 
c2 : c2[0]; + } + function T(a2, b2) { + if (x(b2)) + a2 = a2[b2]; + else + for (let c2 = 0; a2 && c2 < b2.length; c2++) + a2 = a2[b2[c2]]; + return a2; + } + function U(a2, b2, c2, d2, e) { + a2 = a2[e]; + if (d2 === c2.length - 1) + b2[e] = a2; + else if (a2) + if (a2.constructor === Array) + for (b2 = b2[e] = Array(a2.length), e = 0; e < a2.length; e++) + U(a2, b2, c2, d2, e); + else + b2 = b2[e] || (b2[e] = v()), e = c2[++d2], U(a2, b2, c2, d2, e); + } + function V(a2, b2, c2, d2, e, f, g, h) { + if (a2 = a2[g]) + if (d2 === b2.length - 1) { + if (a2.constructor === Array) { + if (c2[d2]) { + for (b2 = 0; b2 < a2.length; b2++) + e.add(f, a2[b2], true, true); + return; + } + a2 = a2.join(" "); + } + e.add(f, a2, h, true); + } else if (a2.constructor === Array) + for (g = 0; g < a2.length; g++) + V(a2, b2, c2, d2, e, f, g, h); + else + g = b2[++d2], V(a2, b2, c2, d2, e, f, g, h); + } + t = Q.prototype; + t.add = function(a2, b2, c2) { + C(a2) && (b2 = a2, a2 = T(b2, this.key)); + if (b2 && (a2 || 0 === a2)) { + if (!c2 && this.register[a2]) + return this.update(a2, b2); + for (let d2 = 0, e, f; d2 < this.h.length; d2++) + f = this.h[d2], e = this.K[d2], x(e) && (e = [e]), V(b2, e, this.A, 0, this.index[f], a2, e[0], c2); + if (this.I) { + let d2 = T(b2, this.I), e = v(); + x(d2) && (d2 = [d2]); + for (let f = 0, g, h; f < d2.length; f++) + if (g = d2[f], !e[g] && (e[g] = 1, h = this.l[g] || (this.l[g] = []), !c2 || -1 === h.indexOf(a2))) { + if (h[h.length] = a2, this.m) { + const k = this.register[a2] || (this.register[a2] = []); + k[k.length] = h; + } + } + } + if (this.store && (!c2 || !this.store[a2])) { + let d2; + if (this.C) { + d2 = v(); + for (let e = 0, f; e < this.C.length; e++) + f = this.C[e], x(f) ? d2[f] = b2[f] : U(b2, d2, f, 0, f[0]); + } + this.store[a2] = d2 || b2; + } + } + return this; + }; + t.append = function(a2, b2) { + return this.add(a2, b2, true); + }; + t.update = function(a2, b2) { + return this.remove(a2).add(a2, b2); + }; + t.remove = function(a2) { + C(a2) && (a2 = T(a2, this.key)); + if (this.register[a2]) { + for (var b2 = 0; b2 < this.h.length && (this.index[this.h[b2]].remove(a2, !this.o), !this.m); b2++) + ; + if (this.I && !this.m) + for (let c2 in this.l) { + b2 = this.l[c2]; + const d2 = b2.indexOf(a2); + -1 !== d2 && (1 < b2.length ? b2.splice(d2, 1) : delete this.l[c2]); + } + this.store && delete this.store[a2]; + delete this.register[a2]; + } + return this; + }; + t.search = function(a2, b2, c2, d2) { + c2 || (!b2 && C(a2) ? (c2 = a2, a2 = c2.query) : C(b2) && (c2 = b2, b2 = 0)); + let e = [], f = [], g, h, k, m, n, w, q = 0; + if (c2) + if (c2.constructor === Array) + k = c2, c2 = null; + else { + k = (g = c2.pluck) || c2.index || c2.field; + m = c2.tag; + h = this.store && c2.enrich; + n = "and" === c2.bool; + b2 = c2.limit || 100; + w = c2.offset || 0; + if (m && (x(m) && (m = [m]), !a2)) { + for (let l = 0, p; l < m.length; l++) + if (p = ya.call(this, m[l], b2, w, h)) + e[e.length] = p, q++; + return q ? e : []; + } + x(k) && (k = [k]); + } + k || (k = this.h); + n = n && (1 < k.length || m && 1 < m.length); + const r = !d2 && (this.o || this.async) && []; + for (let l = 0, p, z, B; l < k.length; l++) { + let A; + z = k[l]; + x(z) || (A = z, z = z.field); + if (r) + r[l] = this.index[z].searchAsync(a2, b2, A || c2); + else { + d2 ? 
p = d2[l] : p = this.index[z].search(a2, b2, A || c2); + B = p && p.length; + if (m && B) { + const y = []; + let H = 0; + n && (y[0] = [p]); + for (let X = 0, pa, R; X < m.length; X++) + if (pa = m[X], B = (R = this.l[pa]) && R.length) + H++, y[y.length] = n ? [R] : R; + H && (p = n ? ma(y, b2 || 100, w || 0) : na(p, y), B = p.length); + } + if (B) + f[q] = z, e[q++] = p; + else if (n) + return []; + } + } + if (r) { + const l = this; + return new Promise(function(p) { + Promise.all(r).then(function(z) { + p(l.search(a2, b2, c2, z)); + }); + }); + } + if (!q) + return []; + if (g && (!h || !this.store)) + return e[0]; + for (let l = 0, p; l < f.length; l++) { + p = e[l]; + p.length && h && (p = za.call(this, p)); + if (g) + return p; + e[l] = { field: f[l], result: p }; + } + return e; + }; + function ya(a2, b2, c2, d2) { + let e = this.l[a2], f = e && e.length - c2; + if (f && 0 < f) { + if (f > b2 || c2) + e = e.slice(c2, c2 + b2); + d2 && (e = za.call(this, e)); + return { tag: a2, result: e }; + } + } + function za(a2) { + const b2 = Array(a2.length); + for (let c2 = 0, d2; c2 < a2.length; c2++) + d2 = a2[c2], b2[c2] = { id: d2, doc: this.store[d2] }; + return b2; + } + t.contain = function(a2) { + return !!this.register[a2]; + }; + t.get = function(a2) { + return this.store[a2]; + }; + t.set = function(a2, b2) { + this.store[a2] = b2; + return this; + }; + t.searchCache = oa; + t.export = function(a2, b2, c2, d2, e) { + e || (e = 0); + d2 || (d2 = 0); + if (d2 < this.h.length) { + const f = this.h[d2], g = this.index[f]; + b2 = this; + setTimeout(function() { + g.export(a2, b2, e ? f.replace(":", "-") : "", d2, e++) || (d2++, e = 1, b2.export(a2, b2, f, d2, e)); + }); + } else { + let f; + switch (e) { + case 1: + c2 = "tag"; + f = this.l; + break; + case 2: + c2 = "store"; + f = this.store; + break; + default: + return; + } + ra(a2, this, c2, d2, e, f); + } + }; + t.import = function(a2, b2) { + if (b2) + switch (x(b2) && (b2 = JSON.parse(b2)), a2) { + case "tag": + this.l = b2; + break; + case "reg": + this.m = false; + this.register = b2; + for (let d2 = 0, e; d2 < this.h.length; d2++) + e = this.index[this.h[d2]], e.register = b2, e.m = false; + break; + case "store": + this.store = b2; + break; + default: + a2 = a2.split("."); + const c2 = a2[0]; + a2 = a2[1]; + c2 && a2 && this.index[c2].import(a2, b2); + } + }; + la(Q.prototype); + var Ba = { encode: Aa, F: false, G: "" }; + const Ca = [F("[\xE0\xE1\xE2\xE3\xE4\xE5]"), "a", F("[\xE8\xE9\xEA\xEB]"), "e", F("[\xEC\xED\xEE\xEF]"), "i", F("[\xF2\xF3\xF4\xF5\xF6\u0151]"), "o", F("[\xF9\xFA\xFB\xFC\u0171]"), "u", F("[\xFD\u0177\xFF]"), "y", F("\xF1"), "n", F("[\xE7c]"), "k", F("\xDF"), "s", F(" & "), " and "]; + function Aa(a2) { + var b2 = a2; + b2.normalize && (b2 = b2.normalize("NFD").replace(ea, "")); + return ca.call(this, b2.toLowerCase(), !a2.normalize && Ca); + } + ; + var Ea = { encode: Da, F: false, G: "strict" }; + const Fa = /[^a-z0-9]+/, Ga = { b: "p", v: "f", w: "f", z: "s", x: "s", "\xDF": "s", d: "t", n: "m", c: "k", g: "k", j: "k", q: "k", i: "e", y: "e", u: "o" }; + function Da(a2) { + a2 = Aa.call(this, a2).join(" "); + const b2 = []; + if (a2) { + const c2 = a2.split(Fa), d2 = c2.length; + for (let e = 0, f, g = 0; e < d2; e++) + if ((a2 = c2[e]) && (!this.filter || !this.filter[a2])) { + f = a2[0]; + let h = Ga[f] || f, k = h; + for (let m = 1; m < a2.length; m++) { + f = a2[m]; + const n = Ga[f] || f; + n && n !== k && (h += n, k = n); + } + b2[g++] = h; + } + } + return b2; + } + ; + var Ia = { encode: Ha, F: 
false, G: "" }; + const Ja = [F("ae"), "a", F("oe"), "o", F("sh"), "s", F("th"), "t", F("ph"), "f", F("pf"), "f", F("(?![aeo])h(?![aeo])"), "", F("(?!^[aeo])h(?!^[aeo])"), ""]; + function Ha(a2, b2) { + a2 && (a2 = Da.call(this, a2).join(" "), 2 < a2.length && (a2 = E(a2, Ja)), b2 || (1 < a2.length && (a2 = ha(a2)), a2 && (a2 = a2.split(" ")))); + return a2; + } + ; + var La = { encode: Ka, F: false, G: "" }; + const Ma = F("(?!\\b)[aeo]"); + function Ka(a2) { + a2 && (a2 = Ha.call(this, a2, true), 1 < a2.length && (a2 = a2.replace(Ma, "")), 1 < a2.length && (a2 = ha(a2)), a2 && (a2 = a2.split(" "))); + return a2; + } + ; + G["latin:default"] = ja; + G["latin:simple"] = Ba; + G["latin:balance"] = Ea; + G["latin:advanced"] = Ia; + G["latin:extra"] = La; + const W = self; + let Y; + const Z = { Index: K, Document: Q, Worker: O, registerCharset: function(a2, b2) { + G[a2] = b2; + }, registerLanguage: function(a2, b2) { + ka[a2] = b2; + } }; + (Y = W.define) && Y.amd ? Y([], function() { + return Z; + }) : W.exports ? W.exports = Z : W.FlexSearch = Z; + })(exports); + } +}); + +// quartz/components/scripts/quartz/components/scripts/search.inline.ts +var import_flexsearch = __toESM(require_flexsearch_bundle()); + +// quartz/components/scripts/util.ts +function registerEscapeHandler(outsideContainer, cb) { + if (!outsideContainer) + return; + function click(e) { + if (e.target !== this) + return; + e.preventDefault(); + cb(); + } + function esc(e) { + if (!e.key.startsWith("Esc")) + return; + e.preventDefault(); + cb(); + } + outsideContainer?.removeEventListener("click", click); + outsideContainer?.addEventListener("click", click); + document.removeEventListener("keydown", esc); + document.addEventListener("keydown", esc); +} +function removeAllChildren(node) { + while (node.firstChild) { + node.removeChild(node.firstChild); + } +} + +// node_modules/github-slugger/index.js +var own = Object.hasOwnProperty; + +// quartz/util/path.ts +function simplifySlug(fp) { + return _stripSlashes(_trimSuffix(fp, "index"), true); +} +function pathToRoot(slug2) { + let rootPath = slug2.split("/").filter((x2) => x2 !== "").slice(0, -1).map((_) => "..").join("/"); + if (rootPath.length === 0) { + rootPath = "."; + } + return rootPath; +} +function resolveRelative(current, target) { + const res = joinSegments(pathToRoot(current), simplifySlug(target)); + return res; +} +function joinSegments(...args) { + return args.filter((segment) => segment !== "").join("/"); +} +function _endsWith(s, suffix) { + return s === suffix || s.endsWith("/" + suffix); +} +function _trimSuffix(s, suffix) { + if (_endsWith(s, suffix)) { + s = s.slice(0, -suffix.length); + } + return s; +} +function _stripSlashes(s, onlyStripPrefix) { + if (s.startsWith("/")) { + s = s.substring(1); + } + if (!onlyStripPrefix && s.endsWith("/")) { + s = s.slice(0, -1); + } + return s; +} + +// quartz/components/scripts/quartz/components/scripts/search.inline.ts +var index = void 0; +var contextWindowWords = 30; +var numSearchResults = 5; +function highlight(searchTerm, text, trim) { + const tokenizedTerms = searchTerm.split(/\s+/).filter((t2) => t2 !== "").sort((a2, b2) => b2.length - a2.length); + let tokenizedText = text.split(/\s+/).filter((t2) => t2 !== ""); + let startIndex = 0; + let endIndex = tokenizedText.length - 1; + if (trim) { + const includesCheck = (tok) => tokenizedTerms.some((term) => tok.toLowerCase().startsWith(term.toLowerCase())); + const occurencesIndices = tokenizedText.map(includesCheck); + let bestSum = 0; + let bestIndex = 0; 
+ for (let i = 0; i < Math.max(tokenizedText.length - contextWindowWords, 0); i++) { + const window2 = occurencesIndices.slice(i, i + contextWindowWords); + const windowSum = window2.reduce((total, cur) => total + (cur ? 1 : 0), 0); + if (windowSum >= bestSum) { + bestSum = windowSum; + bestIndex = i; + } + } + startIndex = Math.max(bestIndex - contextWindowWords, 0); + endIndex = Math.min(startIndex + 2 * contextWindowWords, tokenizedText.length - 1); + tokenizedText = tokenizedText.slice(startIndex, endIndex); + } + const slice = tokenizedText.map((tok) => { + for (const searchTok of tokenizedTerms) { + if (tok.toLowerCase().includes(searchTok.toLowerCase())) { + const regex2 = new RegExp(searchTok.toLowerCase(), "gi"); + return tok.replace(regex2, `$&`); + } + } + return tok; + }).join(" "); + return `${startIndex === 0 ? "" : "..."}${slice}${endIndex === tokenizedText.length - 1 ? "" : "..."}`; +} +var encoder = (str) => str.toLowerCase().split(/([^a-z]|[^\x00-\x7F])/); +var prevShortcutHandler = void 0; +document.addEventListener("nav", async (e) => { + const currentSlug = e.detail.url; + const data = await fetchData; + const container = document.getElementById("search-container"); + const sidebar = container?.closest(".sidebar"); + const searchIcon = document.getElementById("search-icon"); + const searchBar = document.getElementById("search-bar"); + const results = document.getElementById("results-container"); + const idDataMap = Object.keys(data); + function hideSearch() { + container?.classList.remove("active"); + if (searchBar) { + searchBar.value = ""; + } + if (sidebar) { + sidebar.style.zIndex = "unset"; + } + if (results) { + removeAllChildren(results); + } + } + function showSearch() { + if (sidebar) { + sidebar.style.zIndex = "1"; + } + container?.classList.add("active"); + searchBar?.focus(); + } + function shortcutHandler(e2) { + if (e2.key === "k" && (e2.ctrlKey || e2.metaKey)) { + e2.preventDefault(); + const searchBarOpen = container?.classList.contains("active"); + searchBarOpen ? hideSearch() : showSearch(); + } else if (e2.key === "Enter") { + const anchor = document.getElementsByClassName("result-card")[0]; + if (anchor) { + anchor.click(); + } + } + } + const formatForDisplay = (term, id) => { + const slug2 = idDataMap[id]; + return { + id, + slug: slug2, + title: highlight(term, data[slug2].title ?? ""), + content: highlight(term, data[slug2].content ?? "", true) + }; + }; + const resultToHTML = ({ slug: slug2, title, content }) => { + const button = document.createElement("button"); + button.classList.add("result-card"); + button.id = slug2; + button.innerHTML = `

${title}

${content}

`; + button.addEventListener("click", () => { + const targ = resolveRelative(currentSlug, slug2); + window.spaNavigate(new URL(targ, window.location.toString())); + }); + return button; + }; + function displayResults(finalResults) { + if (!results) + return; + removeAllChildren(results); + if (finalResults.length === 0) { + results.innerHTML = ``; + } else { + results.append(...finalResults.map(resultToHTML)); + } + } + async function onType(e2) { + const term = e2.target.value; + const searchResults = await index?.searchAsync(term, numSearchResults) ?? []; + const getByField = (field) => { + const results2 = searchResults.filter((x2) => x2.field === field); + return results2.length === 0 ? [] : [...results2[0].result]; + }; + const allIds = /* @__PURE__ */ new Set([...getByField("title"), ...getByField("content")]); + const finalResults = [...allIds].map((id) => formatForDisplay(term, id)); + displayResults(finalResults); + } + if (prevShortcutHandler) { + document.removeEventListener("keydown", prevShortcutHandler); + } + document.addEventListener("keydown", shortcutHandler); + prevShortcutHandler = shortcutHandler; + searchIcon?.removeEventListener("click", showSearch); + searchIcon?.addEventListener("click", showSearch); + searchBar?.removeEventListener("input", onType); + searchBar?.addEventListener("input", onType); + if (!index) { + index = new import_flexsearch.Document({ + cache: true, + charset: "latin:extra", + optimize: true, + encode: encoder, + document: { + id: "id", + index: [ + { + field: "title", + tokenize: "reverse" + }, + { + field: "content", + tokenize: "reverse" + } + ] + } + }); + let id = 0; + for (const [slug2, fileData] of Object.entries(data)) { + await index.addAsync(id, { + id, + slug: slug2, + title: fileData.title, + content: fileData.content + }); + id++; + } + } + registerEscapeHandler(container, hideSearch); +}); +})(); +(function () {// quartz/components/scripts/quartz/components/scripts/toc.inline.ts +var observer = new IntersectionObserver((entries) => { + for (const entry of entries) { + const slug = entry.target.id; + const tocEntryElement = document.querySelector(`a[data-for="${slug}"]`); + const windowHeight = entry.rootBounds?.height; + if (windowHeight && tocEntryElement) { + if (entry.boundingClientRect.y < windowHeight) { + tocEntryElement.classList.add("in-view"); + } else { + tocEntryElement.classList.remove("in-view"); + } + } + } +}); +function toggleToc() { + this.classList.toggle("collapsed"); + const content = this.nextElementSibling; + content.classList.toggle("collapsed"); + content.style.maxHeight = content.style.maxHeight === "0px" ? 
content.scrollHeight + "px" : "0px"; +} +function setupToc() { + const toc = document.getElementById("toc"); + if (toc) { + const content = toc.nextElementSibling; + content.style.maxHeight = content.scrollHeight + "px"; + toc.removeEventListener("click", toggleToc); + toc.addEventListener("click", toggleToc); + } +} +window.addEventListener("resize", setupToc); +document.addEventListener("nav", () => { + setupToc(); + observer.disconnect(); + const headers = document.querySelectorAll("h1[id], h2[id], h3[id], h4[id], h5[id], h6[id]"); + headers.forEach((header) => observer.observe(header)); +}); +})(); +(function () {// node_modules/d3-dispatch/src/dispatch.js +var noop = { value: () => { +} }; +function dispatch() { + for (var i = 0, n = arguments.length, _ = {}, t; i < n; ++i) { + if (!(t = arguments[i] + "") || t in _ || /[\s.]/.test(t)) + throw new Error("illegal type: " + t); + _[t] = []; + } + return new Dispatch(_); +} +function Dispatch(_) { + this._ = _; +} +function parseTypenames(typenames, types) { + return typenames.trim().split(/^|\s+/).map(function(t) { + var name = "", i = t.indexOf("."); + if (i >= 0) + name = t.slice(i + 1), t = t.slice(0, i); + if (t && !types.hasOwnProperty(t)) + throw new Error("unknown type: " + t); + return { type: t, name }; + }); +} +Dispatch.prototype = dispatch.prototype = { + constructor: Dispatch, + on: function(typename, callback) { + var _ = this._, T = parseTypenames(typename + "", _), t, i = -1, n = T.length; + if (arguments.length < 2) { + while (++i < n) + if ((t = (typename = T[i]).type) && (t = get(_[t], typename.name))) + return t; + return; + } + if (callback != null && typeof callback !== "function") + throw new Error("invalid callback: " + callback); + while (++i < n) { + if (t = (typename = T[i]).type) + _[t] = set(_[t], typename.name, callback); + else if (callback == null) + for (t in _) + _[t] = set(_[t], typename.name, null); + } + return this; + }, + copy: function() { + var copy = {}, _ = this._; + for (var t in _) + copy[t] = _[t].slice(); + return new Dispatch(copy); + }, + call: function(type2, that) { + if ((n = arguments.length - 2) > 0) + for (var args = new Array(n), i = 0, n, t; i < n; ++i) + args[i] = arguments[i + 2]; + if (!this._.hasOwnProperty(type2)) + throw new Error("unknown type: " + type2); + for (t = this._[type2], i = 0, n = t.length; i < n; ++i) + t[i].value.apply(that, args); + }, + apply: function(type2, that, args) { + if (!this._.hasOwnProperty(type2)) + throw new Error("unknown type: " + type2); + for (var t = this._[type2], i = 0, n = t.length; i < n; ++i) + t[i].value.apply(that, args); + } +}; +function get(type2, name) { + for (var i = 0, n = type2.length, c2; i < n; ++i) { + if ((c2 = type2[i]).name === name) { + return c2.value; + } + } +} +function set(type2, name, callback) { + for (var i = 0, n = type2.length; i < n; ++i) { + if (type2[i].name === name) { + type2[i] = noop, type2 = type2.slice(0, i).concat(type2.slice(i + 1)); + break; + } + } + if (callback != null) + type2.push({ name, value: callback }); + return type2; +} +var dispatch_default = dispatch; + +// node_modules/d3-selection/src/namespaces.js +var xhtml = "http://www.w3.org/1999/xhtml"; +var namespaces_default = { + svg: "http://www.w3.org/2000/svg", + xhtml, + xlink: "http://www.w3.org/1999/xlink", + xml: "http://www.w3.org/XML/1998/namespace", + xmlns: "http://www.w3.org/2000/xmlns/" +}; + +// node_modules/d3-selection/src/namespace.js +function namespace_default(name) { + var prefix = name += "", i = prefix.indexOf(":"); + 
if (i >= 0 && (prefix = name.slice(0, i)) !== "xmlns") + name = name.slice(i + 1); + return namespaces_default.hasOwnProperty(prefix) ? { space: namespaces_default[prefix], local: name } : name; +} + +// node_modules/d3-selection/src/creator.js +function creatorInherit(name) { + return function() { + var document2 = this.ownerDocument, uri = this.namespaceURI; + return uri === xhtml && document2.documentElement.namespaceURI === xhtml ? document2.createElement(name) : document2.createElementNS(uri, name); + }; +} +function creatorFixed(fullname) { + return function() { + return this.ownerDocument.createElementNS(fullname.space, fullname.local); + }; +} +function creator_default(name) { + var fullname = namespace_default(name); + return (fullname.local ? creatorFixed : creatorInherit)(fullname); +} + +// node_modules/d3-selection/src/selector.js +function none() { +} +function selector_default(selector) { + return selector == null ? none : function() { + return this.querySelector(selector); + }; +} + +// node_modules/d3-selection/src/selection/select.js +function select_default(select) { + if (typeof select !== "function") + select = selector_default(select); + for (var groups = this._groups, m2 = groups.length, subgroups = new Array(m2), j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, subgroup = subgroups[j] = new Array(n), node, subnode, i = 0; i < n; ++i) { + if ((node = group[i]) && (subnode = select.call(node, node.__data__, i, group))) { + if ("__data__" in node) + subnode.__data__ = node.__data__; + subgroup[i] = subnode; + } + } + } + return new Selection(subgroups, this._parents); +} + +// node_modules/d3-selection/src/array.js +function array(x2) { + return x2 == null ? [] : Array.isArray(x2) ? x2 : Array.from(x2); +} + +// node_modules/d3-selection/src/selectorAll.js +function empty() { + return []; +} +function selectorAll_default(selector) { + return selector == null ? empty : function() { + return this.querySelectorAll(selector); + }; +} + +// node_modules/d3-selection/src/selection/selectAll.js +function arrayAll(select) { + return function() { + return array(select.apply(this, arguments)); + }; +} +function selectAll_default(select) { + if (typeof select === "function") + select = arrayAll(select); + else + select = selectorAll_default(select); + for (var groups = this._groups, m2 = groups.length, subgroups = [], parents = [], j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, node, i = 0; i < n; ++i) { + if (node = group[i]) { + subgroups.push(select.call(node, node.__data__, i, group)); + parents.push(node); + } + } + } + return new Selection(subgroups, parents); +} + +// node_modules/d3-selection/src/matcher.js +function matcher_default(selector) { + return function() { + return this.matches(selector); + }; +} +function childMatcher(selector) { + return function(node) { + return node.matches(selector); + }; +} + +// node_modules/d3-selection/src/selection/selectChild.js +var find = Array.prototype.find; +function childFind(match) { + return function() { + return find.call(this.children, match); + }; +} +function childFirst() { + return this.firstElementChild; +} +function selectChild_default(match) { + return this.select(match == null ? childFirst : childFind(typeof match === "function" ? 
match : childMatcher(match))); +} + +// node_modules/d3-selection/src/selection/selectChildren.js +var filter = Array.prototype.filter; +function children() { + return Array.from(this.children); +} +function childrenFilter(match) { + return function() { + return filter.call(this.children, match); + }; +} +function selectChildren_default(match) { + return this.selectAll(match == null ? children : childrenFilter(typeof match === "function" ? match : childMatcher(match))); +} + +// node_modules/d3-selection/src/selection/filter.js +function filter_default(match) { + if (typeof match !== "function") + match = matcher_default(match); + for (var groups = this._groups, m2 = groups.length, subgroups = new Array(m2), j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, subgroup = subgroups[j] = [], node, i = 0; i < n; ++i) { + if ((node = group[i]) && match.call(node, node.__data__, i, group)) { + subgroup.push(node); + } + } + } + return new Selection(subgroups, this._parents); +} + +// node_modules/d3-selection/src/selection/sparse.js +function sparse_default(update) { + return new Array(update.length); +} + +// node_modules/d3-selection/src/selection/enter.js +function enter_default() { + return new Selection(this._enter || this._groups.map(sparse_default), this._parents); +} +function EnterNode(parent, datum2) { + this.ownerDocument = parent.ownerDocument; + this.namespaceURI = parent.namespaceURI; + this._next = null; + this._parent = parent; + this.__data__ = datum2; +} +EnterNode.prototype = { + constructor: EnterNode, + appendChild: function(child) { + return this._parent.insertBefore(child, this._next); + }, + insertBefore: function(child, next) { + return this._parent.insertBefore(child, next); + }, + querySelector: function(selector) { + return this._parent.querySelector(selector); + }, + querySelectorAll: function(selector) { + return this._parent.querySelectorAll(selector); + } +}; + +// node_modules/d3-selection/src/constant.js +function constant_default(x2) { + return function() { + return x2; + }; +} + +// node_modules/d3-selection/src/selection/data.js +function bindIndex(parent, group, enter, update, exit, data) { + var i = 0, node, groupLength = group.length, dataLength = data.length; + for (; i < dataLength; ++i) { + if (node = group[i]) { + node.__data__ = data[i]; + update[i] = node; + } else { + enter[i] = new EnterNode(parent, data[i]); + } + } + for (; i < groupLength; ++i) { + if (node = group[i]) { + exit[i] = node; + } + } +} +function bindKey(parent, group, enter, update, exit, data, key) { + var i, node, nodeByKeyValue = /* @__PURE__ */ new Map(), groupLength = group.length, dataLength = data.length, keyValues = new Array(groupLength), keyValue; + for (i = 0; i < groupLength; ++i) { + if (node = group[i]) { + keyValues[i] = keyValue = key.call(node, node.__data__, i, group) + ""; + if (nodeByKeyValue.has(keyValue)) { + exit[i] = node; + } else { + nodeByKeyValue.set(keyValue, node); + } + } + } + for (i = 0; i < dataLength; ++i) { + keyValue = key.call(parent, data[i], i, data) + ""; + if (node = nodeByKeyValue.get(keyValue)) { + update[i] = node; + node.__data__ = data[i]; + nodeByKeyValue.delete(keyValue); + } else { + enter[i] = new EnterNode(parent, data[i]); + } + } + for (i = 0; i < groupLength; ++i) { + if ((node = group[i]) && nodeByKeyValue.get(keyValues[i]) === node) { + exit[i] = node; + } + } +} +function datum(node) { + return node.__data__; +} +function data_default(value, key) { + if (!arguments.length) + return Array.from(this, datum); 
+ var bind = key ? bindKey : bindIndex, parents = this._parents, groups = this._groups; + if (typeof value !== "function") + value = constant_default(value); + for (var m2 = groups.length, update = new Array(m2), enter = new Array(m2), exit = new Array(m2), j = 0; j < m2; ++j) { + var parent = parents[j], group = groups[j], groupLength = group.length, data = arraylike(value.call(parent, parent && parent.__data__, j, parents)), dataLength = data.length, enterGroup = enter[j] = new Array(dataLength), updateGroup = update[j] = new Array(dataLength), exitGroup = exit[j] = new Array(groupLength); + bind(parent, group, enterGroup, updateGroup, exitGroup, data, key); + for (var i0 = 0, i1 = 0, previous, next; i0 < dataLength; ++i0) { + if (previous = enterGroup[i0]) { + if (i0 >= i1) + i1 = i0 + 1; + while (!(next = updateGroup[i1]) && ++i1 < dataLength) + ; + previous._next = next || null; + } + } + } + update = new Selection(update, parents); + update._enter = enter; + update._exit = exit; + return update; +} +function arraylike(data) { + return typeof data === "object" && "length" in data ? data : Array.from(data); +} + +// node_modules/d3-selection/src/selection/exit.js +function exit_default() { + return new Selection(this._exit || this._groups.map(sparse_default), this._parents); +} + +// node_modules/d3-selection/src/selection/join.js +function join_default(onenter, onupdate, onexit) { + var enter = this.enter(), update = this, exit = this.exit(); + if (typeof onenter === "function") { + enter = onenter(enter); + if (enter) + enter = enter.selection(); + } else { + enter = enter.append(onenter + ""); + } + if (onupdate != null) { + update = onupdate(update); + if (update) + update = update.selection(); + } + if (onexit == null) + exit.remove(); + else + onexit(exit); + return enter && update ? enter.merge(update).order() : update; +} + +// node_modules/d3-selection/src/selection/merge.js +function merge_default(context) { + var selection2 = context.selection ? context.selection() : context; + for (var groups0 = this._groups, groups1 = selection2._groups, m0 = groups0.length, m1 = groups1.length, m2 = Math.min(m0, m1), merges = new Array(m0), j = 0; j < m2; ++j) { + for (var group0 = groups0[j], group1 = groups1[j], n = group0.length, merge = merges[j] = new Array(n), node, i = 0; i < n; ++i) { + if (node = group0[i] || group1[i]) { + merge[i] = node; + } + } + } + for (; j < m0; ++j) { + merges[j] = groups0[j]; + } + return new Selection(merges, this._parents); +} + +// node_modules/d3-selection/src/selection/order.js +function order_default() { + for (var groups = this._groups, j = -1, m2 = groups.length; ++j < m2; ) { + for (var group = groups[j], i = group.length - 1, next = group[i], node; --i >= 0; ) { + if (node = group[i]) { + if (next && node.compareDocumentPosition(next) ^ 4) + next.parentNode.insertBefore(node, next); + next = node; + } + } + } + return this; +} + +// node_modules/d3-selection/src/selection/sort.js +function sort_default(compare) { + if (!compare) + compare = ascending; + function compareNode(a2, b) { + return a2 && b ? 
compare(a2.__data__, b.__data__) : !a2 - !b; + } + for (var groups = this._groups, m2 = groups.length, sortgroups = new Array(m2), j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, sortgroup = sortgroups[j] = new Array(n), node, i = 0; i < n; ++i) { + if (node = group[i]) { + sortgroup[i] = node; + } + } + sortgroup.sort(compareNode); + } + return new Selection(sortgroups, this._parents).order(); +} +function ascending(a2, b) { + return a2 < b ? -1 : a2 > b ? 1 : a2 >= b ? 0 : NaN; +} + +// node_modules/d3-selection/src/selection/call.js +function call_default() { + var callback = arguments[0]; + arguments[0] = this; + callback.apply(null, arguments); + return this; +} + +// node_modules/d3-selection/src/selection/nodes.js +function nodes_default() { + return Array.from(this); +} + +// node_modules/d3-selection/src/selection/node.js +function node_default() { + for (var groups = this._groups, j = 0, m2 = groups.length; j < m2; ++j) { + for (var group = groups[j], i = 0, n = group.length; i < n; ++i) { + var node = group[i]; + if (node) + return node; + } + } + return null; +} + +// node_modules/d3-selection/src/selection/size.js +function size_default() { + let size = 0; + for (const node of this) + ++size; + return size; +} + +// node_modules/d3-selection/src/selection/empty.js +function empty_default() { + return !this.node(); +} + +// node_modules/d3-selection/src/selection/each.js +function each_default(callback) { + for (var groups = this._groups, j = 0, m2 = groups.length; j < m2; ++j) { + for (var group = groups[j], i = 0, n = group.length, node; i < n; ++i) { + if (node = group[i]) + callback.call(node, node.__data__, i, group); + } + } + return this; +} + +// node_modules/d3-selection/src/selection/attr.js +function attrRemove(name) { + return function() { + this.removeAttribute(name); + }; +} +function attrRemoveNS(fullname) { + return function() { + this.removeAttributeNS(fullname.space, fullname.local); + }; +} +function attrConstant(name, value) { + return function() { + this.setAttribute(name, value); + }; +} +function attrConstantNS(fullname, value) { + return function() { + this.setAttributeNS(fullname.space, fullname.local, value); + }; +} +function attrFunction(name, value) { + return function() { + var v = value.apply(this, arguments); + if (v == null) + this.removeAttribute(name); + else + this.setAttribute(name, v); + }; +} +function attrFunctionNS(fullname, value) { + return function() { + var v = value.apply(this, arguments); + if (v == null) + this.removeAttributeNS(fullname.space, fullname.local); + else + this.setAttributeNS(fullname.space, fullname.local, v); + }; +} +function attr_default(name, value) { + var fullname = namespace_default(name); + if (arguments.length < 2) { + var node = this.node(); + return fullname.local ? node.getAttributeNS(fullname.space, fullname.local) : node.getAttribute(fullname); + } + return this.each((value == null ? fullname.local ? attrRemoveNS : attrRemove : typeof value === "function" ? fullname.local ? attrFunctionNS : attrFunction : fullname.local ? 
attrConstantNS : attrConstant)(fullname, value)); +} + +// node_modules/d3-selection/src/window.js +function window_default(node) { + return node.ownerDocument && node.ownerDocument.defaultView || node.document && node || node.defaultView; +} + +// node_modules/d3-selection/src/selection/style.js +function styleRemove(name) { + return function() { + this.style.removeProperty(name); + }; +} +function styleConstant(name, value, priority) { + return function() { + this.style.setProperty(name, value, priority); + }; +} +function styleFunction(name, value, priority) { + return function() { + var v = value.apply(this, arguments); + if (v == null) + this.style.removeProperty(name); + else + this.style.setProperty(name, v, priority); + }; +} +function style_default(name, value, priority) { + return arguments.length > 1 ? this.each((value == null ? styleRemove : typeof value === "function" ? styleFunction : styleConstant)(name, value, priority == null ? "" : priority)) : styleValue(this.node(), name); +} +function styleValue(node, name) { + return node.style.getPropertyValue(name) || window_default(node).getComputedStyle(node, null).getPropertyValue(name); +} + +// node_modules/d3-selection/src/selection/property.js +function propertyRemove(name) { + return function() { + delete this[name]; + }; +} +function propertyConstant(name, value) { + return function() { + this[name] = value; + }; +} +function propertyFunction(name, value) { + return function() { + var v = value.apply(this, arguments); + if (v == null) + delete this[name]; + else + this[name] = v; + }; +} +function property_default(name, value) { + return arguments.length > 1 ? this.each((value == null ? propertyRemove : typeof value === "function" ? propertyFunction : propertyConstant)(name, value)) : this.node()[name]; +} + +// node_modules/d3-selection/src/selection/classed.js +function classArray(string) { + return string.trim().split(/^|\s+/); +} +function classList(node) { + return node.classList || new ClassList(node); +} +function ClassList(node) { + this._node = node; + this._names = classArray(node.getAttribute("class") || ""); +} +ClassList.prototype = { + add: function(name) { + var i = this._names.indexOf(name); + if (i < 0) { + this._names.push(name); + this._node.setAttribute("class", this._names.join(" ")); + } + }, + remove: function(name) { + var i = this._names.indexOf(name); + if (i >= 0) { + this._names.splice(i, 1); + this._node.setAttribute("class", this._names.join(" ")); + } + }, + contains: function(name) { + return this._names.indexOf(name) >= 0; + } +}; +function classedAdd(node, names) { + var list = classList(node), i = -1, n = names.length; + while (++i < n) + list.add(names[i]); +} +function classedRemove(node, names) { + var list = classList(node), i = -1, n = names.length; + while (++i < n) + list.remove(names[i]); +} +function classedTrue(names) { + return function() { + classedAdd(this, names); + }; +} +function classedFalse(names) { + return function() { + classedRemove(this, names); + }; +} +function classedFunction(names, value) { + return function() { + (value.apply(this, arguments) ? classedAdd : classedRemove)(this, names); + }; +} +function classed_default(name, value) { + var names = classArray(name + ""); + if (arguments.length < 2) { + var list = classList(this.node()), i = -1, n = names.length; + while (++i < n) + if (!list.contains(names[i])) + return false; + return true; + } + return this.each((typeof value === "function" ? classedFunction : value ? 
classedTrue : classedFalse)(names, value)); +} + +// node_modules/d3-selection/src/selection/text.js +function textRemove() { + this.textContent = ""; +} +function textConstant(value) { + return function() { + this.textContent = value; + }; +} +function textFunction(value) { + return function() { + var v = value.apply(this, arguments); + this.textContent = v == null ? "" : v; + }; +} +function text_default(value) { + return arguments.length ? this.each(value == null ? textRemove : (typeof value === "function" ? textFunction : textConstant)(value)) : this.node().textContent; +} + +// node_modules/d3-selection/src/selection/html.js +function htmlRemove() { + this.innerHTML = ""; +} +function htmlConstant(value) { + return function() { + this.innerHTML = value; + }; +} +function htmlFunction(value) { + return function() { + var v = value.apply(this, arguments); + this.innerHTML = v == null ? "" : v; + }; +} +function html_default(value) { + return arguments.length ? this.each(value == null ? htmlRemove : (typeof value === "function" ? htmlFunction : htmlConstant)(value)) : this.node().innerHTML; +} + +// node_modules/d3-selection/src/selection/raise.js +function raise() { + if (this.nextSibling) + this.parentNode.appendChild(this); +} +function raise_default() { + return this.each(raise); +} + +// node_modules/d3-selection/src/selection/lower.js +function lower() { + if (this.previousSibling) + this.parentNode.insertBefore(this, this.parentNode.firstChild); +} +function lower_default() { + return this.each(lower); +} + +// node_modules/d3-selection/src/selection/append.js +function append_default(name) { + var create2 = typeof name === "function" ? name : creator_default(name); + return this.select(function() { + return this.appendChild(create2.apply(this, arguments)); + }); +} + +// node_modules/d3-selection/src/selection/insert.js +function constantNull() { + return null; +} +function insert_default(name, before) { + var create2 = typeof name === "function" ? name : creator_default(name), select = before == null ? constantNull : typeof before === "function" ? before : selector_default(before); + return this.select(function() { + return this.insertBefore(create2.apply(this, arguments), select.apply(this, arguments) || null); + }); +} + +// node_modules/d3-selection/src/selection/remove.js +function remove() { + var parent = this.parentNode; + if (parent) + parent.removeChild(this); +} +function remove_default() { + return this.each(remove); +} + +// node_modules/d3-selection/src/selection/clone.js +function selection_cloneShallow() { + var clone = this.cloneNode(false), parent = this.parentNode; + return parent ? parent.insertBefore(clone, this.nextSibling) : clone; +} +function selection_cloneDeep() { + var clone = this.cloneNode(true), parent = this.parentNode; + return parent ? parent.insertBefore(clone, this.nextSibling) : clone; +} +function clone_default(deep) { + return this.select(deep ? selection_cloneDeep : selection_cloneShallow); +} + +// node_modules/d3-selection/src/selection/datum.js +function datum_default(value) { + return arguments.length ? 
this.property("__data__", value) : this.node().__data__; +} + +// node_modules/d3-selection/src/selection/on.js +function contextListener(listener) { + return function(event) { + listener.call(this, event, this.__data__); + }; +} +function parseTypenames2(typenames) { + return typenames.trim().split(/^|\s+/).map(function(t) { + var name = "", i = t.indexOf("."); + if (i >= 0) + name = t.slice(i + 1), t = t.slice(0, i); + return { type: t, name }; + }); +} +function onRemove(typename) { + return function() { + var on = this.__on; + if (!on) + return; + for (var j = 0, i = -1, m2 = on.length, o; j < m2; ++j) { + if (o = on[j], (!typename.type || o.type === typename.type) && o.name === typename.name) { + this.removeEventListener(o.type, o.listener, o.options); + } else { + on[++i] = o; + } + } + if (++i) + on.length = i; + else + delete this.__on; + }; +} +function onAdd(typename, value, options) { + return function() { + var on = this.__on, o, listener = contextListener(value); + if (on) + for (var j = 0, m2 = on.length; j < m2; ++j) { + if ((o = on[j]).type === typename.type && o.name === typename.name) { + this.removeEventListener(o.type, o.listener, o.options); + this.addEventListener(o.type, o.listener = listener, o.options = options); + o.value = value; + return; + } + } + this.addEventListener(typename.type, listener, options); + o = { type: typename.type, name: typename.name, value, listener, options }; + if (!on) + this.__on = [o]; + else + on.push(o); + }; +} +function on_default(typename, value, options) { + var typenames = parseTypenames2(typename + ""), i, n = typenames.length, t; + if (arguments.length < 2) { + var on = this.node().__on; + if (on) + for (var j = 0, m2 = on.length, o; j < m2; ++j) { + for (i = 0, o = on[j]; i < n; ++i) { + if ((t = typenames[i]).type === o.type && t.name === o.name) { + return o.value; + } + } + } + return; + } + on = value ? onAdd : onRemove; + for (i = 0; i < n; ++i) + this.each(on(typenames[i], value, options)); + return this; +} + +// node_modules/d3-selection/src/selection/dispatch.js +function dispatchEvent(node, type2, params) { + var window2 = window_default(node), event = window2.CustomEvent; + if (typeof event === "function") { + event = new event(type2, params); + } else { + event = window2.document.createEvent("Event"); + if (params) + event.initEvent(type2, params.bubbles, params.cancelable), event.detail = params.detail; + else + event.initEvent(type2, false, false); + } + node.dispatchEvent(event); +} +function dispatchConstant(type2, params) { + return function() { + return dispatchEvent(this, type2, params); + }; +} +function dispatchFunction(type2, params) { + return function() { + return dispatchEvent(this, type2, params.apply(this, arguments)); + }; +} +function dispatch_default2(type2, params) { + return this.each((typeof params === "function" ? 
dispatchFunction : dispatchConstant)(type2, params)); +} + +// node_modules/d3-selection/src/selection/iterator.js +function* iterator_default() { + for (var groups = this._groups, j = 0, m2 = groups.length; j < m2; ++j) { + for (var group = groups[j], i = 0, n = group.length, node; i < n; ++i) { + if (node = group[i]) + yield node; + } + } +} + +// node_modules/d3-selection/src/selection/index.js +var root = [null]; +function Selection(groups, parents) { + this._groups = groups; + this._parents = parents; +} +function selection() { + return new Selection([[document.documentElement]], root); +} +function selection_selection() { + return this; +} +Selection.prototype = selection.prototype = { + constructor: Selection, + select: select_default, + selectAll: selectAll_default, + selectChild: selectChild_default, + selectChildren: selectChildren_default, + filter: filter_default, + data: data_default, + enter: enter_default, + exit: exit_default, + join: join_default, + merge: merge_default, + selection: selection_selection, + order: order_default, + sort: sort_default, + call: call_default, + nodes: nodes_default, + node: node_default, + size: size_default, + empty: empty_default, + each: each_default, + attr: attr_default, + style: style_default, + property: property_default, + classed: classed_default, + text: text_default, + html: html_default, + raise: raise_default, + lower: lower_default, + append: append_default, + insert: insert_default, + remove: remove_default, + clone: clone_default, + datum: datum_default, + on: on_default, + dispatch: dispatch_default2, + [Symbol.iterator]: iterator_default +}; +var selection_default = selection; + +// node_modules/d3-selection/src/select.js +function select_default2(selector) { + return typeof selector === "string" ? new Selection([[document.querySelector(selector)]], [document.documentElement]) : new Selection([[selector]], root); +} + +// node_modules/d3-selection/src/sourceEvent.js +function sourceEvent_default(event) { + let sourceEvent; + while (sourceEvent = event.sourceEvent) + event = sourceEvent; + return event; +} + +// node_modules/d3-selection/src/pointer.js +function pointer_default(event, node) { + event = sourceEvent_default(event); + if (node === void 0) + node = event.currentTarget; + if (node) { + var svg = node.ownerSVGElement || node; + if (svg.createSVGPoint) { + var point = svg.createSVGPoint(); + point.x = event.clientX, point.y = event.clientY; + point = point.matrixTransform(node.getScreenCTM().inverse()); + return [point.x, point.y]; + } + if (node.getBoundingClientRect) { + var rect = node.getBoundingClientRect(); + return [event.clientX - rect.left - node.clientLeft, event.clientY - rect.top - node.clientTop]; + } + } + return [event.pageX, event.pageY]; +} + +// node_modules/d3-selection/src/selectAll.js +function selectAll_default2(selector) { + return typeof selector === "string" ? 
new Selection([document.querySelectorAll(selector)], [document.documentElement]) : new Selection([array(selector)], root); +} + +// node_modules/d3-drag/src/noevent.js +var nonpassive = { passive: false }; +var nonpassivecapture = { capture: true, passive: false }; +function nopropagation(event) { + event.stopImmediatePropagation(); +} +function noevent_default(event) { + event.preventDefault(); + event.stopImmediatePropagation(); +} + +// node_modules/d3-drag/src/nodrag.js +function nodrag_default(view) { + var root2 = view.document.documentElement, selection2 = select_default2(view).on("dragstart.drag", noevent_default, nonpassivecapture); + if ("onselectstart" in root2) { + selection2.on("selectstart.drag", noevent_default, nonpassivecapture); + } else { + root2.__noselect = root2.style.MozUserSelect; + root2.style.MozUserSelect = "none"; + } +} +function yesdrag(view, noclick) { + var root2 = view.document.documentElement, selection2 = select_default2(view).on("dragstart.drag", null); + if (noclick) { + selection2.on("click.drag", noevent_default, nonpassivecapture); + setTimeout(function() { + selection2.on("click.drag", null); + }, 0); + } + if ("onselectstart" in root2) { + selection2.on("selectstart.drag", null); + } else { + root2.style.MozUserSelect = root2.__noselect; + delete root2.__noselect; + } +} + +// node_modules/d3-drag/src/constant.js +var constant_default2 = (x2) => () => x2; + +// node_modules/d3-drag/src/event.js +function DragEvent(type2, { + sourceEvent, + subject, + target, + identifier, + active, + x: x2, + y: y2, + dx, + dy, + dispatch: dispatch2 +}) { + Object.defineProperties(this, { + type: { value: type2, enumerable: true, configurable: true }, + sourceEvent: { value: sourceEvent, enumerable: true, configurable: true }, + subject: { value: subject, enumerable: true, configurable: true }, + target: { value: target, enumerable: true, configurable: true }, + identifier: { value: identifier, enumerable: true, configurable: true }, + active: { value: active, enumerable: true, configurable: true }, + x: { value: x2, enumerable: true, configurable: true }, + y: { value: y2, enumerable: true, configurable: true }, + dx: { value: dx, enumerable: true, configurable: true }, + dy: { value: dy, enumerable: true, configurable: true }, + _: { value: dispatch2 } + }); +} +DragEvent.prototype.on = function() { + var value = this._.on.apply(this._, arguments); + return value === this._ ? this : value; +}; + +// node_modules/d3-drag/src/drag.js +function defaultFilter(event) { + return !event.ctrlKey && !event.button; +} +function defaultContainer() { + return this.parentNode; +} +function defaultSubject(event, d) { + return d == null ? 
{ x: event.x, y: event.y } : d; +} +function defaultTouchable() { + return navigator.maxTouchPoints || "ontouchstart" in this; +} +function drag_default() { + var filter2 = defaultFilter, container = defaultContainer, subject = defaultSubject, touchable = defaultTouchable, gestures = {}, listeners = dispatch_default("start", "drag", "end"), active = 0, mousedownx, mousedowny, mousemoving, touchending, clickDistance2 = 0; + function drag(selection2) { + selection2.on("mousedown.drag", mousedowned).filter(touchable).on("touchstart.drag", touchstarted).on("touchmove.drag", touchmoved, nonpassive).on("touchend.drag touchcancel.drag", touchended).style("touch-action", "none").style("-webkit-tap-highlight-color", "rgba(0,0,0,0)"); + } + function mousedowned(event, d) { + if (touchending || !filter2.call(this, event, d)) + return; + var gesture = beforestart(this, container.call(this, event, d), event, d, "mouse"); + if (!gesture) + return; + select_default2(event.view).on("mousemove.drag", mousemoved, nonpassivecapture).on("mouseup.drag", mouseupped, nonpassivecapture); + nodrag_default(event.view); + nopropagation(event); + mousemoving = false; + mousedownx = event.clientX; + mousedowny = event.clientY; + gesture("start", event); + } + function mousemoved(event) { + noevent_default(event); + if (!mousemoving) { + var dx = event.clientX - mousedownx, dy = event.clientY - mousedowny; + mousemoving = dx * dx + dy * dy > clickDistance2; + } + gestures.mouse("drag", event); + } + function mouseupped(event) { + select_default2(event.view).on("mousemove.drag mouseup.drag", null); + yesdrag(event.view, mousemoving); + noevent_default(event); + gestures.mouse("end", event); + } + function touchstarted(event, d) { + if (!filter2.call(this, event, d)) + return; + var touches = event.changedTouches, c2 = container.call(this, event, d), n = touches.length, i, gesture; + for (i = 0; i < n; ++i) { + if (gesture = beforestart(this, c2, event, d, touches[i].identifier, touches[i])) { + nopropagation(event); + gesture("start", event, touches[i]); + } + } + } + function touchmoved(event) { + var touches = event.changedTouches, n = touches.length, i, gesture; + for (i = 0; i < n; ++i) { + if (gesture = gestures[touches[i].identifier]) { + noevent_default(event); + gesture("drag", event, touches[i]); + } + } + } + function touchended(event) { + var touches = event.changedTouches, n = touches.length, i, gesture; + if (touchending) + clearTimeout(touchending); + touchending = setTimeout(function() { + touchending = null; + }, 500); + for (i = 0; i < n; ++i) { + if (gesture = gestures[touches[i].identifier]) { + nopropagation(event); + gesture("end", event, touches[i]); + } + } + } + function beforestart(that, container2, event, d, identifier, touch) { + var dispatch2 = listeners.copy(), p = pointer_default(touch || event, container2), dx, dy, s; + if ((s = subject.call(that, new DragEvent("beforestart", { + sourceEvent: event, + target: drag, + identifier, + active, + x: p[0], + y: p[1], + dx: 0, + dy: 0, + dispatch: dispatch2 + }), d)) == null) + return; + dx = s.x - p[0] || 0; + dy = s.y - p[1] || 0; + return function gesture(type2, event2, touch2) { + var p0 = p, n; + switch (type2) { + case "start": + gestures[identifier] = gesture, n = active++; + break; + case "end": + delete gestures[identifier], --active; + case "drag": + p = pointer_default(touch2 || event2, container2), n = active; + break; + } + dispatch2.call( + type2, + that, + new DragEvent(type2, { + sourceEvent: event2, + subject: s, + target: drag, + 
identifier, + active: n, + x: p[0] + dx, + y: p[1] + dy, + dx: p[0] - p0[0], + dy: p[1] - p0[1], + dispatch: dispatch2 + }), + d + ); + }; + } + drag.filter = function(_) { + return arguments.length ? (filter2 = typeof _ === "function" ? _ : constant_default2(!!_), drag) : filter2; + }; + drag.container = function(_) { + return arguments.length ? (container = typeof _ === "function" ? _ : constant_default2(_), drag) : container; + }; + drag.subject = function(_) { + return arguments.length ? (subject = typeof _ === "function" ? _ : constant_default2(_), drag) : subject; + }; + drag.touchable = function(_) { + return arguments.length ? (touchable = typeof _ === "function" ? _ : constant_default2(!!_), drag) : touchable; + }; + drag.on = function() { + var value = listeners.on.apply(listeners, arguments); + return value === listeners ? drag : value; + }; + drag.clickDistance = function(_) { + return arguments.length ? (clickDistance2 = (_ = +_) * _, drag) : Math.sqrt(clickDistance2); + }; + return drag; +} + +// node_modules/d3-color/src/define.js +function define_default(constructor, factory, prototype) { + constructor.prototype = factory.prototype = prototype; + prototype.constructor = constructor; +} +function extend(parent, definition) { + var prototype = Object.create(parent.prototype); + for (var key in definition) + prototype[key] = definition[key]; + return prototype; +} + +// node_modules/d3-color/src/color.js +function Color() { +} +var darker = 0.7; +var brighter = 1 / darker; +var reI = "\\s*([+-]?\\d+)\\s*"; +var reN = "\\s*([+-]?(?:\\d*\\.)?\\d+(?:[eE][+-]?\\d+)?)\\s*"; +var reP = "\\s*([+-]?(?:\\d*\\.)?\\d+(?:[eE][+-]?\\d+)?)%\\s*"; +var reHex = /^#([0-9a-f]{3,8})$/; +var reRgbInteger = new RegExp(`^rgb\\(${reI},${reI},${reI}\\)$`); +var reRgbPercent = new RegExp(`^rgb\\(${reP},${reP},${reP}\\)$`); +var reRgbaInteger = new RegExp(`^rgba\\(${reI},${reI},${reI},${reN}\\)$`); +var reRgbaPercent = new RegExp(`^rgba\\(${reP},${reP},${reP},${reN}\\)$`); +var reHslPercent = new RegExp(`^hsl\\(${reN},${reP},${reP}\\)$`); +var reHslaPercent = new RegExp(`^hsla\\(${reN},${reP},${reP},${reN}\\)$`); +var named = { + aliceblue: 15792383, + antiquewhite: 16444375, + aqua: 65535, + aquamarine: 8388564, + azure: 15794175, + beige: 16119260, + bisque: 16770244, + black: 0, + blanchedalmond: 16772045, + blue: 255, + blueviolet: 9055202, + brown: 10824234, + burlywood: 14596231, + cadetblue: 6266528, + chartreuse: 8388352, + chocolate: 13789470, + coral: 16744272, + cornflowerblue: 6591981, + cornsilk: 16775388, + crimson: 14423100, + cyan: 65535, + darkblue: 139, + darkcyan: 35723, + darkgoldenrod: 12092939, + darkgray: 11119017, + darkgreen: 25600, + darkgrey: 11119017, + darkkhaki: 12433259, + darkmagenta: 9109643, + darkolivegreen: 5597999, + darkorange: 16747520, + darkorchid: 10040012, + darkred: 9109504, + darksalmon: 15308410, + darkseagreen: 9419919, + darkslateblue: 4734347, + darkslategray: 3100495, + darkslategrey: 3100495, + darkturquoise: 52945, + darkviolet: 9699539, + deeppink: 16716947, + deepskyblue: 49151, + dimgray: 6908265, + dimgrey: 6908265, + dodgerblue: 2003199, + firebrick: 11674146, + floralwhite: 16775920, + forestgreen: 2263842, + fuchsia: 16711935, + gainsboro: 14474460, + ghostwhite: 16316671, + gold: 16766720, + goldenrod: 14329120, + gray: 8421504, + green: 32768, + greenyellow: 11403055, + grey: 8421504, + honeydew: 15794160, + hotpink: 16738740, + indianred: 13458524, + indigo: 4915330, + ivory: 16777200, + khaki: 15787660, + lavender: 15132410, + 
lavenderblush: 16773365, + lawngreen: 8190976, + lemonchiffon: 16775885, + lightblue: 11393254, + lightcoral: 15761536, + lightcyan: 14745599, + lightgoldenrodyellow: 16448210, + lightgray: 13882323, + lightgreen: 9498256, + lightgrey: 13882323, + lightpink: 16758465, + lightsalmon: 16752762, + lightseagreen: 2142890, + lightskyblue: 8900346, + lightslategray: 7833753, + lightslategrey: 7833753, + lightsteelblue: 11584734, + lightyellow: 16777184, + lime: 65280, + limegreen: 3329330, + linen: 16445670, + magenta: 16711935, + maroon: 8388608, + mediumaquamarine: 6737322, + mediumblue: 205, + mediumorchid: 12211667, + mediumpurple: 9662683, + mediumseagreen: 3978097, + mediumslateblue: 8087790, + mediumspringgreen: 64154, + mediumturquoise: 4772300, + mediumvioletred: 13047173, + midnightblue: 1644912, + mintcream: 16121850, + mistyrose: 16770273, + moccasin: 16770229, + navajowhite: 16768685, + navy: 128, + oldlace: 16643558, + olive: 8421376, + olivedrab: 7048739, + orange: 16753920, + orangered: 16729344, + orchid: 14315734, + palegoldenrod: 15657130, + palegreen: 10025880, + paleturquoise: 11529966, + palevioletred: 14381203, + papayawhip: 16773077, + peachpuff: 16767673, + peru: 13468991, + pink: 16761035, + plum: 14524637, + powderblue: 11591910, + purple: 8388736, + rebeccapurple: 6697881, + red: 16711680, + rosybrown: 12357519, + royalblue: 4286945, + saddlebrown: 9127187, + salmon: 16416882, + sandybrown: 16032864, + seagreen: 3050327, + seashell: 16774638, + sienna: 10506797, + silver: 12632256, + skyblue: 8900331, + slateblue: 6970061, + slategray: 7372944, + slategrey: 7372944, + snow: 16775930, + springgreen: 65407, + steelblue: 4620980, + tan: 13808780, + teal: 32896, + thistle: 14204888, + tomato: 16737095, + turquoise: 4251856, + violet: 15631086, + wheat: 16113331, + white: 16777215, + whitesmoke: 16119285, + yellow: 16776960, + yellowgreen: 10145074 +}; +define_default(Color, color, { + copy(channels) { + return Object.assign(new this.constructor(), this, channels); + }, + displayable() { + return this.rgb().displayable(); + }, + hex: color_formatHex, + // Deprecated! Use color.formatHex. + formatHex: color_formatHex, + formatHex8: color_formatHex8, + formatHsl: color_formatHsl, + formatRgb: color_formatRgb, + toString: color_formatRgb +}); +function color_formatHex() { + return this.rgb().formatHex(); +} +function color_formatHex8() { + return this.rgb().formatHex8(); +} +function color_formatHsl() { + return hslConvert(this).formatHsl(); +} +function color_formatRgb() { + return this.rgb().formatRgb(); +} +function color(format) { + var m2, l; + format = (format + "").trim().toLowerCase(); + return (m2 = reHex.exec(format)) ? (l = m2[1].length, m2 = parseInt(m2[1], 16), l === 6 ? rgbn(m2) : l === 3 ? new Rgb(m2 >> 8 & 15 | m2 >> 4 & 240, m2 >> 4 & 15 | m2 & 240, (m2 & 15) << 4 | m2 & 15, 1) : l === 8 ? rgba(m2 >> 24 & 255, m2 >> 16 & 255, m2 >> 8 & 255, (m2 & 255) / 255) : l === 4 ? rgba(m2 >> 12 & 15 | m2 >> 8 & 240, m2 >> 8 & 15 | m2 >> 4 & 240, m2 >> 4 & 15 | m2 & 240, ((m2 & 15) << 4 | m2 & 15) / 255) : null) : (m2 = reRgbInteger.exec(format)) ? new Rgb(m2[1], m2[2], m2[3], 1) : (m2 = reRgbPercent.exec(format)) ? new Rgb(m2[1] * 255 / 100, m2[2] * 255 / 100, m2[3] * 255 / 100, 1) : (m2 = reRgbaInteger.exec(format)) ? rgba(m2[1], m2[2], m2[3], m2[4]) : (m2 = reRgbaPercent.exec(format)) ? rgba(m2[1] * 255 / 100, m2[2] * 255 / 100, m2[3] * 255 / 100, m2[4]) : (m2 = reHslPercent.exec(format)) ? hsla(m2[1], m2[2] / 100, m2[3] / 100, 1) : (m2 = reHslaPercent.exec(format)) ? 
hsla(m2[1], m2[2] / 100, m2[3] / 100, m2[4]) : named.hasOwnProperty(format) ? rgbn(named[format]) : format === "transparent" ? new Rgb(NaN, NaN, NaN, 0) : null; +} +function rgbn(n) { + return new Rgb(n >> 16 & 255, n >> 8 & 255, n & 255, 1); +} +function rgba(r, g, b, a2) { + if (a2 <= 0) + r = g = b = NaN; + return new Rgb(r, g, b, a2); +} +function rgbConvert(o) { + if (!(o instanceof Color)) + o = color(o); + if (!o) + return new Rgb(); + o = o.rgb(); + return new Rgb(o.r, o.g, o.b, o.opacity); +} +function rgb(r, g, b, opacity) { + return arguments.length === 1 ? rgbConvert(r) : new Rgb(r, g, b, opacity == null ? 1 : opacity); +} +function Rgb(r, g, b, opacity) { + this.r = +r; + this.g = +g; + this.b = +b; + this.opacity = +opacity; +} +define_default(Rgb, rgb, extend(Color, { + brighter(k) { + k = k == null ? brighter : Math.pow(brighter, k); + return new Rgb(this.r * k, this.g * k, this.b * k, this.opacity); + }, + darker(k) { + k = k == null ? darker : Math.pow(darker, k); + return new Rgb(this.r * k, this.g * k, this.b * k, this.opacity); + }, + rgb() { + return this; + }, + clamp() { + return new Rgb(clampi(this.r), clampi(this.g), clampi(this.b), clampa(this.opacity)); + }, + displayable() { + return -0.5 <= this.r && this.r < 255.5 && (-0.5 <= this.g && this.g < 255.5) && (-0.5 <= this.b && this.b < 255.5) && (0 <= this.opacity && this.opacity <= 1); + }, + hex: rgb_formatHex, + // Deprecated! Use color.formatHex. + formatHex: rgb_formatHex, + formatHex8: rgb_formatHex8, + formatRgb: rgb_formatRgb, + toString: rgb_formatRgb +})); +function rgb_formatHex() { + return `#${hex(this.r)}${hex(this.g)}${hex(this.b)}`; +} +function rgb_formatHex8() { + return `#${hex(this.r)}${hex(this.g)}${hex(this.b)}${hex((isNaN(this.opacity) ? 1 : this.opacity) * 255)}`; +} +function rgb_formatRgb() { + const a2 = clampa(this.opacity); + return `${a2 === 1 ? "rgb(" : "rgba("}${clampi(this.r)}, ${clampi(this.g)}, ${clampi(this.b)}${a2 === 1 ? ")" : `, ${a2})`}`; +} +function clampa(opacity) { + return isNaN(opacity) ? 1 : Math.max(0, Math.min(1, opacity)); +} +function clampi(value) { + return Math.max(0, Math.min(255, Math.round(value) || 0)); +} +function hex(value) { + value = clampi(value); + return (value < 16 ? "0" : "") + value.toString(16); +} +function hsla(h, s, l, a2) { + if (a2 <= 0) + h = s = l = NaN; + else if (l <= 0 || l >= 1) + h = s = NaN; + else if (s <= 0) + h = NaN; + return new Hsl(h, s, l, a2); +} +function hslConvert(o) { + if (o instanceof Hsl) + return new Hsl(o.h, o.s, o.l, o.opacity); + if (!(o instanceof Color)) + o = color(o); + if (!o) + return new Hsl(); + if (o instanceof Hsl) + return o; + o = o.rgb(); + var r = o.r / 255, g = o.g / 255, b = o.b / 255, min2 = Math.min(r, g, b), max2 = Math.max(r, g, b), h = NaN, s = max2 - min2, l = (max2 + min2) / 2; + if (s) { + if (r === max2) + h = (g - b) / s + (g < b) * 6; + else if (g === max2) + h = (b - r) / s + 2; + else + h = (r - g) / s + 4; + s /= l < 0.5 ? max2 + min2 : 2 - max2 - min2; + h *= 60; + } else { + s = l > 0 && l < 1 ? 0 : h; + } + return new Hsl(h, s, l, o.opacity); +} +function hsl(h, s, l, opacity) { + return arguments.length === 1 ? hslConvert(h) : new Hsl(h, s, l, opacity == null ? 1 : opacity); +} +function Hsl(h, s, l, opacity) { + this.h = +h; + this.s = +s; + this.l = +l; + this.opacity = +opacity; +} +define_default(Hsl, hsl, extend(Color, { + brighter(k) { + k = k == null ? 
brighter : Math.pow(brighter, k); + return new Hsl(this.h, this.s, this.l * k, this.opacity); + }, + darker(k) { + k = k == null ? darker : Math.pow(darker, k); + return new Hsl(this.h, this.s, this.l * k, this.opacity); + }, + rgb() { + var h = this.h % 360 + (this.h < 0) * 360, s = isNaN(h) || isNaN(this.s) ? 0 : this.s, l = this.l, m2 = l + (l < 0.5 ? l : 1 - l) * s, m1 = 2 * l - m2; + return new Rgb( + hsl2rgb(h >= 240 ? h - 240 : h + 120, m1, m2), + hsl2rgb(h, m1, m2), + hsl2rgb(h < 120 ? h + 240 : h - 120, m1, m2), + this.opacity + ); + }, + clamp() { + return new Hsl(clamph(this.h), clampt(this.s), clampt(this.l), clampa(this.opacity)); + }, + displayable() { + return (0 <= this.s && this.s <= 1 || isNaN(this.s)) && (0 <= this.l && this.l <= 1) && (0 <= this.opacity && this.opacity <= 1); + }, + formatHsl() { + const a2 = clampa(this.opacity); + return `${a2 === 1 ? "hsl(" : "hsla("}${clamph(this.h)}, ${clampt(this.s) * 100}%, ${clampt(this.l) * 100}%${a2 === 1 ? ")" : `, ${a2})`}`; + } +})); +function clamph(value) { + value = (value || 0) % 360; + return value < 0 ? value + 360 : value; +} +function clampt(value) { + return Math.max(0, Math.min(1, value || 0)); +} +function hsl2rgb(h, m1, m2) { + return (h < 60 ? m1 + (m2 - m1) * h / 60 : h < 180 ? m2 : h < 240 ? m1 + (m2 - m1) * (240 - h) / 60 : m1) * 255; +} + +// node_modules/d3-interpolate/src/basis.js +function basis(t1, v0, v1, v2, v3) { + var t2 = t1 * t1, t3 = t2 * t1; + return ((1 - 3 * t1 + 3 * t2 - t3) * v0 + (4 - 6 * t2 + 3 * t3) * v1 + (1 + 3 * t1 + 3 * t2 - 3 * t3) * v2 + t3 * v3) / 6; +} +function basis_default(values) { + var n = values.length - 1; + return function(t) { + var i = t <= 0 ? t = 0 : t >= 1 ? (t = 1, n - 1) : Math.floor(t * n), v1 = values[i], v2 = values[i + 1], v0 = i > 0 ? values[i - 1] : 2 * v1 - v2, v3 = i < n - 1 ? values[i + 2] : 2 * v2 - v1; + return basis((t - i / n) * n, v0, v1, v2, v3); + }; +} + +// node_modules/d3-interpolate/src/basisClosed.js +function basisClosed_default(values) { + var n = values.length; + return function(t) { + var i = Math.floor(((t %= 1) < 0 ? ++t : t) * n), v0 = values[(i + n - 1) % n], v1 = values[i % n], v2 = values[(i + 1) % n], v3 = values[(i + 2) % n]; + return basis((t - i / n) * n, v0, v1, v2, v3); + }; +} + +// node_modules/d3-interpolate/src/constant.js +var constant_default3 = (x2) => () => x2; + +// node_modules/d3-interpolate/src/color.js +function linear(a2, d) { + return function(t) { + return a2 + t * d; + }; +} +function exponential(a2, b, y2) { + return a2 = Math.pow(a2, y2), b = Math.pow(b, y2) - a2, y2 = 1 / y2, function(t) { + return Math.pow(a2 + t * b, y2); + }; +} +function gamma(y2) { + return (y2 = +y2) === 1 ? nogamma : function(a2, b) { + return b - a2 ? exponential(a2, b, y2) : constant_default3(isNaN(a2) ? b : a2); + }; +} +function nogamma(a2, b) { + var d = b - a2; + return d ? linear(a2, d) : constant_default3(isNaN(a2) ? 
b : a2); +} + +// node_modules/d3-interpolate/src/rgb.js +var rgb_default = function rgbGamma(y2) { + var color2 = gamma(y2); + function rgb2(start2, end) { + var r = color2((start2 = rgb(start2)).r, (end = rgb(end)).r), g = color2(start2.g, end.g), b = color2(start2.b, end.b), opacity = nogamma(start2.opacity, end.opacity); + return function(t) { + start2.r = r(t); + start2.g = g(t); + start2.b = b(t); + start2.opacity = opacity(t); + return start2 + ""; + }; + } + rgb2.gamma = rgbGamma; + return rgb2; +}(1); +function rgbSpline(spline) { + return function(colors) { + var n = colors.length, r = new Array(n), g = new Array(n), b = new Array(n), i, color2; + for (i = 0; i < n; ++i) { + color2 = rgb(colors[i]); + r[i] = color2.r || 0; + g[i] = color2.g || 0; + b[i] = color2.b || 0; + } + r = spline(r); + g = spline(g); + b = spline(b); + color2.opacity = 1; + return function(t) { + color2.r = r(t); + color2.g = g(t); + color2.b = b(t); + return color2 + ""; + }; + }; +} +var rgbBasis = rgbSpline(basis_default); +var rgbBasisClosed = rgbSpline(basisClosed_default); + +// node_modules/d3-interpolate/src/number.js +function number_default(a2, b) { + return a2 = +a2, b = +b, function(t) { + return a2 * (1 - t) + b * t; + }; +} + +// node_modules/d3-interpolate/src/string.js +var reA = /[-+]?(?:\d+\.?\d*|\.?\d+)(?:[eE][-+]?\d+)?/g; +var reB = new RegExp(reA.source, "g"); +function zero(b) { + return function() { + return b; + }; +} +function one(b) { + return function(t) { + return b(t) + ""; + }; +} +function string_default(a2, b) { + var bi = reA.lastIndex = reB.lastIndex = 0, am, bm, bs, i = -1, s = [], q = []; + a2 = a2 + "", b = b + ""; + while ((am = reA.exec(a2)) && (bm = reB.exec(b))) { + if ((bs = bm.index) > bi) { + bs = b.slice(bi, bs); + if (s[i]) + s[i] += bs; + else + s[++i] = bs; + } + if ((am = am[0]) === (bm = bm[0])) { + if (s[i]) + s[i] += bm; + else + s[++i] = bm; + } else { + s[++i] = null; + q.push({ i, x: number_default(am, bm) }); + } + bi = reB.lastIndex; + } + if (bi < b.length) { + bs = b.slice(bi); + if (s[i]) + s[i] += bs; + else + s[++i] = bs; + } + return s.length < 2 ? q[0] ? one(q[0].x) : zero(b) : (b = q.length, function(t) { + for (var i2 = 0, o; i2 < b; ++i2) + s[(o = q[i2]).i] = o.x(t); + return s.join(""); + }); +} + +// node_modules/d3-interpolate/src/transform/decompose.js +var degrees = 180 / Math.PI; +var identity = { + translateX: 0, + translateY: 0, + rotate: 0, + skewX: 0, + scaleX: 1, + scaleY: 1 +}; +function decompose_default(a2, b, c2, d, e, f) { + var scaleX, scaleY, skewX; + if (scaleX = Math.sqrt(a2 * a2 + b * b)) + a2 /= scaleX, b /= scaleX; + if (skewX = a2 * c2 + b * d) + c2 -= a2 * skewX, d -= b * skewX; + if (scaleY = Math.sqrt(c2 * c2 + d * d)) + c2 /= scaleY, d /= scaleY, skewX /= scaleY; + if (a2 * d < b * c2) + a2 = -a2, b = -b, skewX = -skewX, scaleX = -scaleX; + return { + translateX: e, + translateY: f, + rotate: Math.atan2(b, a2) * degrees, + skewX: Math.atan(skewX) * degrees, + scaleX, + scaleY + }; +} + +// node_modules/d3-interpolate/src/transform/parse.js +var svgNode; +function parseCss(value) { + const m2 = new (typeof DOMMatrix === "function" ? DOMMatrix : WebKitCSSMatrix)(value + ""); + return m2.isIdentity ? 
identity : decompose_default(m2.a, m2.b, m2.c, m2.d, m2.e, m2.f); +} +function parseSvg(value) { + if (value == null) + return identity; + if (!svgNode) + svgNode = document.createElementNS("http://www.w3.org/2000/svg", "g"); + svgNode.setAttribute("transform", value); + if (!(value = svgNode.transform.baseVal.consolidate())) + return identity; + value = value.matrix; + return decompose_default(value.a, value.b, value.c, value.d, value.e, value.f); +} + +// node_modules/d3-interpolate/src/transform/index.js +function interpolateTransform(parse, pxComma, pxParen, degParen) { + function pop(s) { + return s.length ? s.pop() + " " : ""; + } + function translate(xa, ya, xb, yb, s, q) { + if (xa !== xb || ya !== yb) { + var i = s.push("translate(", null, pxComma, null, pxParen); + q.push({ i: i - 4, x: number_default(xa, xb) }, { i: i - 2, x: number_default(ya, yb) }); + } else if (xb || yb) { + s.push("translate(" + xb + pxComma + yb + pxParen); + } + } + function rotate(a2, b, s, q) { + if (a2 !== b) { + if (a2 - b > 180) + b += 360; + else if (b - a2 > 180) + a2 += 360; + q.push({ i: s.push(pop(s) + "rotate(", null, degParen) - 2, x: number_default(a2, b) }); + } else if (b) { + s.push(pop(s) + "rotate(" + b + degParen); + } + } + function skewX(a2, b, s, q) { + if (a2 !== b) { + q.push({ i: s.push(pop(s) + "skewX(", null, degParen) - 2, x: number_default(a2, b) }); + } else if (b) { + s.push(pop(s) + "skewX(" + b + degParen); + } + } + function scale(xa, ya, xb, yb, s, q) { + if (xa !== xb || ya !== yb) { + var i = s.push(pop(s) + "scale(", null, ",", null, ")"); + q.push({ i: i - 4, x: number_default(xa, xb) }, { i: i - 2, x: number_default(ya, yb) }); + } else if (xb !== 1 || yb !== 1) { + s.push(pop(s) + "scale(" + xb + "," + yb + ")"); + } + } + return function(a2, b) { + var s = [], q = []; + a2 = parse(a2), b = parse(b); + translate(a2.translateX, a2.translateY, b.translateX, b.translateY, s, q); + rotate(a2.rotate, b.rotate, s, q); + skewX(a2.skewX, b.skewX, s, q); + scale(a2.scaleX, a2.scaleY, b.scaleX, b.scaleY, s, q); + a2 = b = null; + return function(t) { + var i = -1, n = q.length, o; + while (++i < n) + s[(o = q[i]).i] = o.x(t); + return s.join(""); + }; + }; +} +var interpolateTransformCss = interpolateTransform(parseCss, "px, ", "px)", "deg)"); +var interpolateTransformSvg = interpolateTransform(parseSvg, ", ", ")", ")"); + +// node_modules/d3-interpolate/src/zoom.js +var epsilon2 = 1e-12; +function cosh(x2) { + return ((x2 = Math.exp(x2)) + 1 / x2) / 2; +} +function sinh(x2) { + return ((x2 = Math.exp(x2)) - 1 / x2) / 2; +} +function tanh(x2) { + return ((x2 = Math.exp(2 * x2)) - 1) / (x2 + 1); +} +var zoom_default = function zoomRho(rho, rho2, rho4) { + function zoom(p0, p1) { + var ux0 = p0[0], uy0 = p0[1], w0 = p0[2], ux1 = p1[0], uy1 = p1[1], w1 = p1[2], dx = ux1 - ux0, dy = uy1 - uy0, d2 = dx * dx + dy * dy, i, S; + if (d2 < epsilon2) { + S = Math.log(w1 / w0) / rho; + i = function(t) { + return [ + ux0 + t * dx, + uy0 + t * dy, + w0 * Math.exp(rho * t * S) + ]; + }; + } else { + var d1 = Math.sqrt(d2), b0 = (w1 * w1 - w0 * w0 + rho4 * d2) / (2 * w0 * rho2 * d1), b1 = (w1 * w1 - w0 * w0 - rho4 * d2) / (2 * w1 * rho2 * d1), r0 = Math.log(Math.sqrt(b0 * b0 + 1) - b0), r1 = Math.log(Math.sqrt(b1 * b1 + 1) - b1); + S = (r1 - r0) / rho; + i = function(t) { + var s = t * S, coshr0 = cosh(r0), u = w0 / (rho2 * d1) * (coshr0 * tanh(rho * s + r0) - sinh(r0)); + return [ + ux0 + u * dx, + uy0 + u * dy, + w0 * coshr0 / cosh(rho * s + r0) + ]; + }; + } + i.duration = S * 1e3 * rho 
/ Math.SQRT2; + return i; + } + zoom.rho = function(_) { + var _1 = Math.max(1e-3, +_), _2 = _1 * _1, _4 = _2 * _2; + return zoomRho(_1, _2, _4); + }; + return zoom; +}(Math.SQRT2, 2, 4); + +// node_modules/d3-timer/src/timer.js +var frame = 0; +var timeout = 0; +var interval = 0; +var pokeDelay = 1e3; +var taskHead; +var taskTail; +var clockLast = 0; +var clockNow = 0; +var clockSkew = 0; +var clock = typeof performance === "object" && performance.now ? performance : Date; +var setFrame = typeof window === "object" && window.requestAnimationFrame ? window.requestAnimationFrame.bind(window) : function(f) { + setTimeout(f, 17); +}; +function now() { + return clockNow || (setFrame(clearNow), clockNow = clock.now() + clockSkew); +} +function clearNow() { + clockNow = 0; +} +function Timer() { + this._call = this._time = this._next = null; +} +Timer.prototype = timer.prototype = { + constructor: Timer, + restart: function(callback, delay, time) { + if (typeof callback !== "function") + throw new TypeError("callback is not a function"); + time = (time == null ? now() : +time) + (delay == null ? 0 : +delay); + if (!this._next && taskTail !== this) { + if (taskTail) + taskTail._next = this; + else + taskHead = this; + taskTail = this; + } + this._call = callback; + this._time = time; + sleep(); + }, + stop: function() { + if (this._call) { + this._call = null; + this._time = Infinity; + sleep(); + } + } +}; +function timer(callback, delay, time) { + var t = new Timer(); + t.restart(callback, delay, time); + return t; +} +function timerFlush() { + now(); + ++frame; + var t = taskHead, e; + while (t) { + if ((e = clockNow - t._time) >= 0) + t._call.call(void 0, e); + t = t._next; + } + --frame; +} +function wake() { + clockNow = (clockLast = clock.now()) + clockSkew; + frame = timeout = 0; + try { + timerFlush(); + } finally { + frame = 0; + nap(); + clockNow = 0; + } +} +function poke() { + var now2 = clock.now(), delay = now2 - clockLast; + if (delay > pokeDelay) + clockSkew -= delay, clockLast = now2; +} +function nap() { + var t0, t1 = taskHead, t2, time = Infinity; + while (t1) { + if (t1._call) { + if (time > t1._time) + time = t1._time; + t0 = t1, t1 = t1._next; + } else { + t2 = t1._next, t1._next = null; + t1 = t0 ? t0._next = t2 : taskHead = t2; + } + } + taskTail = t0; + sleep(time); +} +function sleep(time) { + if (frame) + return; + if (timeout) + timeout = clearTimeout(timeout); + var delay = time - clockNow; + if (delay > 24) { + if (time < Infinity) + timeout = setTimeout(wake, time - clock.now() - clockSkew); + if (interval) + interval = clearInterval(interval); + } else { + if (!interval) + clockLast = clock.now(), interval = setInterval(poke, pokeDelay); + frame = 1, setFrame(wake); + } +} + +// node_modules/d3-timer/src/timeout.js +function timeout_default(callback, delay, time) { + var t = new Timer(); + delay = delay == null ? 
0 : +delay; + t.restart((elapsed) => { + t.stop(); + callback(elapsed + delay); + }, delay, time); + return t; +} + +// node_modules/d3-transition/src/transition/schedule.js +var emptyOn = dispatch_default("start", "end", "cancel", "interrupt"); +var emptyTween = []; +var CREATED = 0; +var SCHEDULED = 1; +var STARTING = 2; +var STARTED = 3; +var RUNNING = 4; +var ENDING = 5; +var ENDED = 6; +function schedule_default(node, name, id2, index2, group, timing) { + var schedules = node.__transition; + if (!schedules) + node.__transition = {}; + else if (id2 in schedules) + return; + create(node, id2, { + name, + index: index2, + // For context during callback. + group, + // For context during callback. + on: emptyOn, + tween: emptyTween, + time: timing.time, + delay: timing.delay, + duration: timing.duration, + ease: timing.ease, + timer: null, + state: CREATED + }); +} +function init(node, id2) { + var schedule = get2(node, id2); + if (schedule.state > CREATED) + throw new Error("too late; already scheduled"); + return schedule; +} +function set2(node, id2) { + var schedule = get2(node, id2); + if (schedule.state > STARTED) + throw new Error("too late; already running"); + return schedule; +} +function get2(node, id2) { + var schedule = node.__transition; + if (!schedule || !(schedule = schedule[id2])) + throw new Error("transition not found"); + return schedule; +} +function create(node, id2, self) { + var schedules = node.__transition, tween; + schedules[id2] = self; + self.timer = timer(schedule, 0, self.time); + function schedule(elapsed) { + self.state = SCHEDULED; + self.timer.restart(start2, self.delay, self.time); + if (self.delay <= elapsed) + start2(elapsed - self.delay); + } + function start2(elapsed) { + var i, j, n, o; + if (self.state !== SCHEDULED) + return stop(); + for (i in schedules) { + o = schedules[i]; + if (o.name !== self.name) + continue; + if (o.state === STARTED) + return timeout_default(start2); + if (o.state === RUNNING) { + o.state = ENDED; + o.timer.stop(); + o.on.call("interrupt", node, node.__data__, o.index, o.group); + delete schedules[i]; + } else if (+i < id2) { + o.state = ENDED; + o.timer.stop(); + o.on.call("cancel", node, node.__data__, o.index, o.group); + delete schedules[i]; + } + } + timeout_default(function() { + if (self.state === STARTED) { + self.state = RUNNING; + self.timer.restart(tick, self.delay, self.time); + tick(elapsed); + } + }); + self.state = STARTING; + self.on.call("start", node, node.__data__, self.index, self.group); + if (self.state !== STARTING) + return; + self.state = STARTED; + tween = new Array(n = self.tween.length); + for (i = 0, j = -1; i < n; ++i) { + if (o = self.tween[i].value.call(node, node.__data__, self.index, self.group)) { + tween[++j] = o; + } + } + tween.length = j + 1; + } + function tick(elapsed) { + var t = elapsed < self.duration ? self.ease.call(null, elapsed / self.duration) : (self.timer.restart(stop), self.state = ENDING, 1), i = -1, n = tween.length; + while (++i < n) { + tween[i].call(node, t); + } + if (self.state === ENDING) { + self.on.call("end", node, node.__data__, self.index, self.group); + stop(); + } + } + function stop() { + self.state = ENDED; + self.timer.stop(); + delete schedules[id2]; + for (var i in schedules) + return; + delete node.__transition; + } +} + +// node_modules/d3-transition/src/interrupt.js +function interrupt_default(node, name) { + var schedules = node.__transition, schedule, active, empty2 = true, i; + if (!schedules) + return; + name = name == null ? 
null : name + ""; + for (i in schedules) { + if ((schedule = schedules[i]).name !== name) { + empty2 = false; + continue; + } + active = schedule.state > STARTING && schedule.state < ENDING; + schedule.state = ENDED; + schedule.timer.stop(); + schedule.on.call(active ? "interrupt" : "cancel", node, node.__data__, schedule.index, schedule.group); + delete schedules[i]; + } + if (empty2) + delete node.__transition; +} + +// node_modules/d3-transition/src/selection/interrupt.js +function interrupt_default2(name) { + return this.each(function() { + interrupt_default(this, name); + }); +} + +// node_modules/d3-transition/src/transition/tween.js +function tweenRemove(id2, name) { + var tween0, tween1; + return function() { + var schedule = set2(this, id2), tween = schedule.tween; + if (tween !== tween0) { + tween1 = tween0 = tween; + for (var i = 0, n = tween1.length; i < n; ++i) { + if (tween1[i].name === name) { + tween1 = tween1.slice(); + tween1.splice(i, 1); + break; + } + } + } + schedule.tween = tween1; + }; +} +function tweenFunction(id2, name, value) { + var tween0, tween1; + if (typeof value !== "function") + throw new Error(); + return function() { + var schedule = set2(this, id2), tween = schedule.tween; + if (tween !== tween0) { + tween1 = (tween0 = tween).slice(); + for (var t = { name, value }, i = 0, n = tween1.length; i < n; ++i) { + if (tween1[i].name === name) { + tween1[i] = t; + break; + } + } + if (i === n) + tween1.push(t); + } + schedule.tween = tween1; + }; +} +function tween_default(name, value) { + var id2 = this._id; + name += ""; + if (arguments.length < 2) { + var tween = get2(this.node(), id2).tween; + for (var i = 0, n = tween.length, t; i < n; ++i) { + if ((t = tween[i]).name === name) { + return t.value; + } + } + return null; + } + return this.each((value == null ? tweenRemove : tweenFunction)(id2, name, value)); +} +function tweenValue(transition2, name, value) { + var id2 = transition2._id; + transition2.each(function() { + var schedule = set2(this, id2); + (schedule.value || (schedule.value = {}))[name] = value.apply(this, arguments); + }); + return function(node) { + return get2(node, id2).value[name]; + }; +} + +// node_modules/d3-transition/src/transition/interpolate.js +function interpolate_default(a2, b) { + var c2; + return (typeof b === "number" ? number_default : b instanceof color ? rgb_default : (c2 = color(b)) ? (b = c2, rgb_default) : string_default)(a2, b); +} + +// node_modules/d3-transition/src/transition/attr.js +function attrRemove2(name) { + return function() { + this.removeAttribute(name); + }; +} +function attrRemoveNS2(fullname) { + return function() { + this.removeAttributeNS(fullname.space, fullname.local); + }; +} +function attrConstant2(name, interpolate, value1) { + var string00, string1 = value1 + "", interpolate0; + return function() { + var string0 = this.getAttribute(name); + return string0 === string1 ? null : string0 === string00 ? interpolate0 : interpolate0 = interpolate(string00 = string0, value1); + }; +} +function attrConstantNS2(fullname, interpolate, value1) { + var string00, string1 = value1 + "", interpolate0; + return function() { + var string0 = this.getAttributeNS(fullname.space, fullname.local); + return string0 === string1 ? null : string0 === string00 ? 
interpolate0 : interpolate0 = interpolate(string00 = string0, value1); + }; +} +function attrFunction2(name, interpolate, value) { + var string00, string10, interpolate0; + return function() { + var string0, value1 = value(this), string1; + if (value1 == null) + return void this.removeAttribute(name); + string0 = this.getAttribute(name); + string1 = value1 + ""; + return string0 === string1 ? null : string0 === string00 && string1 === string10 ? interpolate0 : (string10 = string1, interpolate0 = interpolate(string00 = string0, value1)); + }; +} +function attrFunctionNS2(fullname, interpolate, value) { + var string00, string10, interpolate0; + return function() { + var string0, value1 = value(this), string1; + if (value1 == null) + return void this.removeAttributeNS(fullname.space, fullname.local); + string0 = this.getAttributeNS(fullname.space, fullname.local); + string1 = value1 + ""; + return string0 === string1 ? null : string0 === string00 && string1 === string10 ? interpolate0 : (string10 = string1, interpolate0 = interpolate(string00 = string0, value1)); + }; +} +function attr_default2(name, value) { + var fullname = namespace_default(name), i = fullname === "transform" ? interpolateTransformSvg : interpolate_default; + return this.attrTween(name, typeof value === "function" ? (fullname.local ? attrFunctionNS2 : attrFunction2)(fullname, i, tweenValue(this, "attr." + name, value)) : value == null ? (fullname.local ? attrRemoveNS2 : attrRemove2)(fullname) : (fullname.local ? attrConstantNS2 : attrConstant2)(fullname, i, value)); +} + +// node_modules/d3-transition/src/transition/attrTween.js +function attrInterpolate(name, i) { + return function(t) { + this.setAttribute(name, i.call(this, t)); + }; +} +function attrInterpolateNS(fullname, i) { + return function(t) { + this.setAttributeNS(fullname.space, fullname.local, i.call(this, t)); + }; +} +function attrTweenNS(fullname, value) { + var t0, i0; + function tween() { + var i = value.apply(this, arguments); + if (i !== i0) + t0 = (i0 = i) && attrInterpolateNS(fullname, i); + return t0; + } + tween._value = value; + return tween; +} +function attrTween(name, value) { + var t0, i0; + function tween() { + var i = value.apply(this, arguments); + if (i !== i0) + t0 = (i0 = i) && attrInterpolate(name, i); + return t0; + } + tween._value = value; + return tween; +} +function attrTween_default(name, value) { + var key = "attr." + name; + if (arguments.length < 2) + return (key = this.tween(key)) && key._value; + if (value == null) + return this.tween(key, null); + if (typeof value !== "function") + throw new Error(); + var fullname = namespace_default(name); + return this.tween(key, (fullname.local ? attrTweenNS : attrTween)(fullname, value)); +} + +// node_modules/d3-transition/src/transition/delay.js +function delayFunction(id2, value) { + return function() { + init(this, id2).delay = +value.apply(this, arguments); + }; +} +function delayConstant(id2, value) { + return value = +value, function() { + init(this, id2).delay = value; + }; +} +function delay_default(value) { + var id2 = this._id; + return arguments.length ? this.each((typeof value === "function" ? 
delayFunction : delayConstant)(id2, value)) : get2(this.node(), id2).delay; +} + +// node_modules/d3-transition/src/transition/duration.js +function durationFunction(id2, value) { + return function() { + set2(this, id2).duration = +value.apply(this, arguments); + }; +} +function durationConstant(id2, value) { + return value = +value, function() { + set2(this, id2).duration = value; + }; +} +function duration_default(value) { + var id2 = this._id; + return arguments.length ? this.each((typeof value === "function" ? durationFunction : durationConstant)(id2, value)) : get2(this.node(), id2).duration; +} + +// node_modules/d3-transition/src/transition/ease.js +function easeConstant(id2, value) { + if (typeof value !== "function") + throw new Error(); + return function() { + set2(this, id2).ease = value; + }; +} +function ease_default(value) { + var id2 = this._id; + return arguments.length ? this.each(easeConstant(id2, value)) : get2(this.node(), id2).ease; +} + +// node_modules/d3-transition/src/transition/easeVarying.js +function easeVarying(id2, value) { + return function() { + var v = value.apply(this, arguments); + if (typeof v !== "function") + throw new Error(); + set2(this, id2).ease = v; + }; +} +function easeVarying_default(value) { + if (typeof value !== "function") + throw new Error(); + return this.each(easeVarying(this._id, value)); +} + +// node_modules/d3-transition/src/transition/filter.js +function filter_default2(match) { + if (typeof match !== "function") + match = matcher_default(match); + for (var groups = this._groups, m2 = groups.length, subgroups = new Array(m2), j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, subgroup = subgroups[j] = [], node, i = 0; i < n; ++i) { + if ((node = group[i]) && match.call(node, node.__data__, i, group)) { + subgroup.push(node); + } + } + } + return new Transition(subgroups, this._parents, this._name, this._id); +} + +// node_modules/d3-transition/src/transition/merge.js +function merge_default2(transition2) { + if (transition2._id !== this._id) + throw new Error(); + for (var groups0 = this._groups, groups1 = transition2._groups, m0 = groups0.length, m1 = groups1.length, m2 = Math.min(m0, m1), merges = new Array(m0), j = 0; j < m2; ++j) { + for (var group0 = groups0[j], group1 = groups1[j], n = group0.length, merge = merges[j] = new Array(n), node, i = 0; i < n; ++i) { + if (node = group0[i] || group1[i]) { + merge[i] = node; + } + } + } + for (; j < m0; ++j) { + merges[j] = groups0[j]; + } + return new Transition(merges, this._parents, this._name, this._id); +} + +// node_modules/d3-transition/src/transition/on.js +function start(name) { + return (name + "").trim().split(/^|\s+/).every(function(t) { + var i = t.indexOf("."); + if (i >= 0) + t = t.slice(0, i); + return !t || t === "start"; + }); +} +function onFunction(id2, name, listener) { + var on0, on1, sit = start(name) ? init : set2; + return function() { + var schedule = sit(this, id2), on = schedule.on; + if (on !== on0) + (on1 = (on0 = on).copy()).on(name, listener); + schedule.on = on1; + }; +} +function on_default2(name, listener) { + var id2 = this._id; + return arguments.length < 2 ? 
get2(this.node(), id2).on.on(name) : this.each(onFunction(id2, name, listener)); +} + +// node_modules/d3-transition/src/transition/remove.js +function removeFunction(id2) { + return function() { + var parent = this.parentNode; + for (var i in this.__transition) + if (+i !== id2) + return; + if (parent) + parent.removeChild(this); + }; +} +function remove_default2() { + return this.on("end.remove", removeFunction(this._id)); +} + +// node_modules/d3-transition/src/transition/select.js +function select_default3(select) { + var name = this._name, id2 = this._id; + if (typeof select !== "function") + select = selector_default(select); + for (var groups = this._groups, m2 = groups.length, subgroups = new Array(m2), j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, subgroup = subgroups[j] = new Array(n), node, subnode, i = 0; i < n; ++i) { + if ((node = group[i]) && (subnode = select.call(node, node.__data__, i, group))) { + if ("__data__" in node) + subnode.__data__ = node.__data__; + subgroup[i] = subnode; + schedule_default(subgroup[i], name, id2, i, subgroup, get2(node, id2)); + } + } + } + return new Transition(subgroups, this._parents, name, id2); +} + +// node_modules/d3-transition/src/transition/selectAll.js +function selectAll_default3(select) { + var name = this._name, id2 = this._id; + if (typeof select !== "function") + select = selectorAll_default(select); + for (var groups = this._groups, m2 = groups.length, subgroups = [], parents = [], j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, node, i = 0; i < n; ++i) { + if (node = group[i]) { + for (var children2 = select.call(node, node.__data__, i, group), child, inherit2 = get2(node, id2), k = 0, l = children2.length; k < l; ++k) { + if (child = children2[k]) { + schedule_default(child, name, id2, k, children2, inherit2); + } + } + subgroups.push(children2); + parents.push(node); + } + } + } + return new Transition(subgroups, parents, name, id2); +} + +// node_modules/d3-transition/src/transition/selection.js +var Selection2 = selection_default.prototype.constructor; +function selection_default2() { + return new Selection2(this._groups, this._parents); +} + +// node_modules/d3-transition/src/transition/style.js +function styleNull(name, interpolate) { + var string00, string10, interpolate0; + return function() { + var string0 = styleValue(this, name), string1 = (this.style.removeProperty(name), styleValue(this, name)); + return string0 === string1 ? null : string0 === string00 && string1 === string10 ? interpolate0 : interpolate0 = interpolate(string00 = string0, string10 = string1); + }; +} +function styleRemove2(name) { + return function() { + this.style.removeProperty(name); + }; +} +function styleConstant2(name, interpolate, value1) { + var string00, string1 = value1 + "", interpolate0; + return function() { + var string0 = styleValue(this, name); + return string0 === string1 ? null : string0 === string00 ? interpolate0 : interpolate0 = interpolate(string00 = string0, value1); + }; +} +function styleFunction2(name, interpolate, value) { + var string00, string10, interpolate0; + return function() { + var string0 = styleValue(this, name), value1 = value(this), string1 = value1 + ""; + if (value1 == null) + string1 = value1 = (this.style.removeProperty(name), styleValue(this, name)); + return string0 === string1 ? null : string0 === string00 && string1 === string10 ? 
interpolate0 : (string10 = string1, interpolate0 = interpolate(string00 = string0, value1)); + }; +} +function styleMaybeRemove(id2, name) { + var on0, on1, listener0, key = "style." + name, event = "end." + key, remove2; + return function() { + var schedule = set2(this, id2), on = schedule.on, listener = schedule.value[key] == null ? remove2 || (remove2 = styleRemove2(name)) : void 0; + if (on !== on0 || listener0 !== listener) + (on1 = (on0 = on).copy()).on(event, listener0 = listener); + schedule.on = on1; + }; +} +function style_default2(name, value, priority) { + var i = (name += "") === "transform" ? interpolateTransformCss : interpolate_default; + return value == null ? this.styleTween(name, styleNull(name, i)).on("end.style." + name, styleRemove2(name)) : typeof value === "function" ? this.styleTween(name, styleFunction2(name, i, tweenValue(this, "style." + name, value))).each(styleMaybeRemove(this._id, name)) : this.styleTween(name, styleConstant2(name, i, value), priority).on("end.style." + name, null); +} + +// node_modules/d3-transition/src/transition/styleTween.js +function styleInterpolate(name, i, priority) { + return function(t) { + this.style.setProperty(name, i.call(this, t), priority); + }; +} +function styleTween(name, value, priority) { + var t, i0; + function tween() { + var i = value.apply(this, arguments); + if (i !== i0) + t = (i0 = i) && styleInterpolate(name, i, priority); + return t; + } + tween._value = value; + return tween; +} +function styleTween_default(name, value, priority) { + var key = "style." + (name += ""); + if (arguments.length < 2) + return (key = this.tween(key)) && key._value; + if (value == null) + return this.tween(key, null); + if (typeof value !== "function") + throw new Error(); + return this.tween(key, styleTween(name, value, priority == null ? "" : priority)); +} + +// node_modules/d3-transition/src/transition/text.js +function textConstant2(value) { + return function() { + this.textContent = value; + }; +} +function textFunction2(value) { + return function() { + var value1 = value(this); + this.textContent = value1 == null ? "" : value1; + }; +} +function text_default2(value) { + return this.tween("text", typeof value === "function" ? textFunction2(tweenValue(this, "text", value)) : textConstant2(value == null ? 
"" : value + "")); +} + +// node_modules/d3-transition/src/transition/textTween.js +function textInterpolate(i) { + return function(t) { + this.textContent = i.call(this, t); + }; +} +function textTween(value) { + var t0, i0; + function tween() { + var i = value.apply(this, arguments); + if (i !== i0) + t0 = (i0 = i) && textInterpolate(i); + return t0; + } + tween._value = value; + return tween; +} +function textTween_default(value) { + var key = "text"; + if (arguments.length < 1) + return (key = this.tween(key)) && key._value; + if (value == null) + return this.tween(key, null); + if (typeof value !== "function") + throw new Error(); + return this.tween(key, textTween(value)); +} + +// node_modules/d3-transition/src/transition/transition.js +function transition_default() { + var name = this._name, id0 = this._id, id1 = newId(); + for (var groups = this._groups, m2 = groups.length, j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, node, i = 0; i < n; ++i) { + if (node = group[i]) { + var inherit2 = get2(node, id0); + schedule_default(node, name, id1, i, group, { + time: inherit2.time + inherit2.delay + inherit2.duration, + delay: 0, + duration: inherit2.duration, + ease: inherit2.ease + }); + } + } + } + return new Transition(groups, this._parents, name, id1); +} + +// node_modules/d3-transition/src/transition/end.js +function end_default() { + var on0, on1, that = this, id2 = that._id, size = that.size(); + return new Promise(function(resolve, reject) { + var cancel = { value: reject }, end = { value: function() { + if (--size === 0) + resolve(); + } }; + that.each(function() { + var schedule = set2(this, id2), on = schedule.on; + if (on !== on0) { + on1 = (on0 = on).copy(); + on1._.cancel.push(cancel); + on1._.interrupt.push(cancel); + on1._.end.push(end); + } + schedule.on = on1; + }); + if (size === 0) + resolve(); + }); +} + +// node_modules/d3-transition/src/transition/index.js +var id = 0; +function Transition(groups, parents, name, id2) { + this._groups = groups; + this._parents = parents; + this._name = name; + this._id = id2; +} +function transition(name) { + return selection_default().transition(name); +} +function newId() { + return ++id; +} +var selection_prototype = selection_default.prototype; +Transition.prototype = transition.prototype = { + constructor: Transition, + select: select_default3, + selectAll: selectAll_default3, + selectChild: selection_prototype.selectChild, + selectChildren: selection_prototype.selectChildren, + filter: filter_default2, + merge: merge_default2, + selection: selection_default2, + transition: transition_default, + call: selection_prototype.call, + nodes: selection_prototype.nodes, + node: selection_prototype.node, + size: selection_prototype.size, + empty: selection_prototype.empty, + each: selection_prototype.each, + on: on_default2, + attr: attr_default2, + attrTween: attrTween_default, + style: style_default2, + styleTween: styleTween_default, + text: text_default2, + textTween: textTween_default, + remove: remove_default2, + tween: tween_default, + delay: delay_default, + duration: duration_default, + ease: ease_default, + easeVarying: easeVarying_default, + end: end_default, + [Symbol.iterator]: selection_prototype[Symbol.iterator] +}; + +// node_modules/d3-ease/src/cubic.js +function cubicInOut(t) { + return ((t *= 2) <= 1 ? t * t * t : (t -= 2) * t * t + 2) / 2; +} + +// node_modules/d3-transition/src/selection/transition.js +var defaultTiming = { + time: null, + // Set on use. 
+ delay: 0, + duration: 250, + ease: cubicInOut +}; +function inherit(node, id2) { + var timing; + while (!(timing = node.__transition) || !(timing = timing[id2])) { + if (!(node = node.parentNode)) { + throw new Error(`transition ${id2} not found`); + } + } + return timing; +} +function transition_default2(name) { + var id2, timing; + if (name instanceof Transition) { + id2 = name._id, name = name._name; + } else { + id2 = newId(), (timing = defaultTiming).time = now(), name = name == null ? null : name + ""; + } + for (var groups = this._groups, m2 = groups.length, j = 0; j < m2; ++j) { + for (var group = groups[j], n = group.length, node, i = 0; i < n; ++i) { + if (node = group[i]) { + schedule_default(node, name, id2, i, group, timing || inherit(node, id2)); + } + } + } + return new Transition(groups, this._parents, name, id2); +} + +// node_modules/d3-transition/src/selection/index.js +selection_default.prototype.interrupt = interrupt_default2; +selection_default.prototype.transition = transition_default2; + +// node_modules/d3-brush/src/brush.js +var { abs, max, min } = Math; +function number1(e) { + return [+e[0], +e[1]]; +} +function number2(e) { + return [number1(e[0]), number1(e[1])]; +} +var X = { + name: "x", + handles: ["w", "e"].map(type), + input: function(x2, e) { + return x2 == null ? null : [[+x2[0], e[0][1]], [+x2[1], e[1][1]]]; + }, + output: function(xy) { + return xy && [xy[0][0], xy[1][0]]; + } +}; +var Y = { + name: "y", + handles: ["n", "s"].map(type), + input: function(y2, e) { + return y2 == null ? null : [[e[0][0], +y2[0]], [e[1][0], +y2[1]]]; + }, + output: function(xy) { + return xy && [xy[0][1], xy[1][1]]; + } +}; +var XY = { + name: "xy", + handles: ["n", "w", "e", "s", "nw", "ne", "sw", "se"].map(type), + input: function(xy) { + return xy == null ? null : number2(xy); + }, + output: function(xy) { + return xy; + } +}; +function type(t) { + return { type: t }; +} + +// node_modules/d3-force/src/center.js +function center_default(x2, y2) { + var nodes, strength = 1; + if (x2 == null) + x2 = 0; + if (y2 == null) + y2 = 0; + function force() { + var i, n = nodes.length, node, sx = 0, sy = 0; + for (i = 0; i < n; ++i) { + node = nodes[i], sx += node.x, sy += node.y; + } + for (sx = (sx / n - x2) * strength, sy = (sy / n - y2) * strength, i = 0; i < n; ++i) { + node = nodes[i], node.x -= sx, node.y -= sy; + } + } + force.initialize = function(_) { + nodes = _; + }; + force.x = function(_) { + return arguments.length ? (x2 = +_, force) : x2; + }; + force.y = function(_) { + return arguments.length ? (y2 = +_, force) : y2; + }; + force.strength = function(_) { + return arguments.length ? 
(strength = +_, force) : strength; + }; + return force; +} + +// node_modules/d3-quadtree/src/add.js +function add_default(d) { + const x2 = +this._x.call(null, d), y2 = +this._y.call(null, d); + return add(this.cover(x2, y2), x2, y2, d); +} +function add(tree, x2, y2, d) { + if (isNaN(x2) || isNaN(y2)) + return tree; + var parent, node = tree._root, leaf = { data: d }, x0 = tree._x0, y0 = tree._y0, x1 = tree._x1, y1 = tree._y1, xm, ym, xp, yp, right, bottom, i, j; + if (!node) + return tree._root = leaf, tree; + while (node.length) { + if (right = x2 >= (xm = (x0 + x1) / 2)) + x0 = xm; + else + x1 = xm; + if (bottom = y2 >= (ym = (y0 + y1) / 2)) + y0 = ym; + else + y1 = ym; + if (parent = node, !(node = node[i = bottom << 1 | right])) + return parent[i] = leaf, tree; + } + xp = +tree._x.call(null, node.data); + yp = +tree._y.call(null, node.data); + if (x2 === xp && y2 === yp) + return leaf.next = node, parent ? parent[i] = leaf : tree._root = leaf, tree; + do { + parent = parent ? parent[i] = new Array(4) : tree._root = new Array(4); + if (right = x2 >= (xm = (x0 + x1) / 2)) + x0 = xm; + else + x1 = xm; + if (bottom = y2 >= (ym = (y0 + y1) / 2)) + y0 = ym; + else + y1 = ym; + } while ((i = bottom << 1 | right) === (j = (yp >= ym) << 1 | xp >= xm)); + return parent[j] = node, parent[i] = leaf, tree; +} +function addAll(data) { + var d, i, n = data.length, x2, y2, xz = new Array(n), yz = new Array(n), x0 = Infinity, y0 = Infinity, x1 = -Infinity, y1 = -Infinity; + for (i = 0; i < n; ++i) { + if (isNaN(x2 = +this._x.call(null, d = data[i])) || isNaN(y2 = +this._y.call(null, d))) + continue; + xz[i] = x2; + yz[i] = y2; + if (x2 < x0) + x0 = x2; + if (x2 > x1) + x1 = x2; + if (y2 < y0) + y0 = y2; + if (y2 > y1) + y1 = y2; + } + if (x0 > x1 || y0 > y1) + return this; + this.cover(x0, y0).cover(x1, y1); + for (i = 0; i < n; ++i) { + add(this, xz[i], yz[i], data[i]); + } + return this; +} + +// node_modules/d3-quadtree/src/cover.js +function cover_default(x2, y2) { + if (isNaN(x2 = +x2) || isNaN(y2 = +y2)) + return this; + var x0 = this._x0, y0 = this._y0, x1 = this._x1, y1 = this._y1; + if (isNaN(x0)) { + x1 = (x0 = Math.floor(x2)) + 1; + y1 = (y0 = Math.floor(y2)) + 1; + } else { + var z = x1 - x0 || 1, node = this._root, parent, i; + while (x0 > x2 || x2 >= x1 || y0 > y2 || y2 >= y1) { + i = (y2 < y0) << 1 | x2 < x0; + parent = new Array(4), parent[i] = node, node = parent, z *= 2; + switch (i) { + case 0: + x1 = x0 + z, y1 = y0 + z; + break; + case 1: + x0 = x1 - z, y1 = y0 + z; + break; + case 2: + x1 = x0 + z, y0 = y1 - z; + break; + case 3: + x0 = x1 - z, y0 = y1 - z; + break; + } + } + if (this._root && this._root.length) + this._root = node; + } + this._x0 = x0; + this._y0 = y0; + this._x1 = x1; + this._y1 = y1; + return this; +} + +// node_modules/d3-quadtree/src/data.js +function data_default2() { + var data = []; + this.visit(function(node) { + if (!node.length) + do + data.push(node.data); + while (node = node.next); + }); + return data; +} + +// node_modules/d3-quadtree/src/extent.js +function extent_default(_) { + return arguments.length ? this.cover(+_[0][0], +_[0][1]).cover(+_[1][0], +_[1][1]) : isNaN(this._x0) ? 
void 0 : [[this._x0, this._y0], [this._x1, this._y1]]; +} + +// node_modules/d3-quadtree/src/quad.js +function quad_default(node, x0, y0, x1, y1) { + this.node = node; + this.x0 = x0; + this.y0 = y0; + this.x1 = x1; + this.y1 = y1; +} + +// node_modules/d3-quadtree/src/find.js +function find_default(x2, y2, radius) { + var data, x0 = this._x0, y0 = this._y0, x1, y1, x22, y22, x3 = this._x1, y3 = this._y1, quads = [], node = this._root, q, i; + if (node) + quads.push(new quad_default(node, x0, y0, x3, y3)); + if (radius == null) + radius = Infinity; + else { + x0 = x2 - radius, y0 = y2 - radius; + x3 = x2 + radius, y3 = y2 + radius; + radius *= radius; + } + while (q = quads.pop()) { + if (!(node = q.node) || (x1 = q.x0) > x3 || (y1 = q.y0) > y3 || (x22 = q.x1) < x0 || (y22 = q.y1) < y0) + continue; + if (node.length) { + var xm = (x1 + x22) / 2, ym = (y1 + y22) / 2; + quads.push( + new quad_default(node[3], xm, ym, x22, y22), + new quad_default(node[2], x1, ym, xm, y22), + new quad_default(node[1], xm, y1, x22, ym), + new quad_default(node[0], x1, y1, xm, ym) + ); + if (i = (y2 >= ym) << 1 | x2 >= xm) { + q = quads[quads.length - 1]; + quads[quads.length - 1] = quads[quads.length - 1 - i]; + quads[quads.length - 1 - i] = q; + } + } else { + var dx = x2 - +this._x.call(null, node.data), dy = y2 - +this._y.call(null, node.data), d2 = dx * dx + dy * dy; + if (d2 < radius) { + var d = Math.sqrt(radius = d2); + x0 = x2 - d, y0 = y2 - d; + x3 = x2 + d, y3 = y2 + d; + data = node.data; + } + } + } + return data; +} + +// node_modules/d3-quadtree/src/remove.js +function remove_default3(d) { + if (isNaN(x2 = +this._x.call(null, d)) || isNaN(y2 = +this._y.call(null, d))) + return this; + var parent, node = this._root, retainer, previous, next, x0 = this._x0, y0 = this._y0, x1 = this._x1, y1 = this._y1, x2, y2, xm, ym, right, bottom, i, j; + if (!node) + return this; + if (node.length) + while (true) { + if (right = x2 >= (xm = (x0 + x1) / 2)) + x0 = xm; + else + x1 = xm; + if (bottom = y2 >= (ym = (y0 + y1) / 2)) + y0 = ym; + else + y1 = ym; + if (!(parent = node, node = node[i = bottom << 1 | right])) + return this; + if (!node.length) + break; + if (parent[i + 1 & 3] || parent[i + 2 & 3] || parent[i + 3 & 3]) + retainer = parent, j = i; + } + while (node.data !== d) + if (!(previous = node, node = node.next)) + return this; + if (next = node.next) + delete node.next; + if (previous) + return next ? previous.next = next : delete previous.next, this; + if (!parent) + return this._root = next, this; + next ? 
parent[i] = next : delete parent[i]; + if ((node = parent[0] || parent[1] || parent[2] || parent[3]) && node === (parent[3] || parent[2] || parent[1] || parent[0]) && !node.length) { + if (retainer) + retainer[j] = node; + else + this._root = node; + } + return this; +} +function removeAll(data) { + for (var i = 0, n = data.length; i < n; ++i) + this.remove(data[i]); + return this; +} + +// node_modules/d3-quadtree/src/root.js +function root_default() { + return this._root; +} + +// node_modules/d3-quadtree/src/size.js +function size_default2() { + var size = 0; + this.visit(function(node) { + if (!node.length) + do + ++size; + while (node = node.next); + }); + return size; +} + +// node_modules/d3-quadtree/src/visit.js +function visit_default(callback) { + var quads = [], q, node = this._root, child, x0, y0, x1, y1; + if (node) + quads.push(new quad_default(node, this._x0, this._y0, this._x1, this._y1)); + while (q = quads.pop()) { + if (!callback(node = q.node, x0 = q.x0, y0 = q.y0, x1 = q.x1, y1 = q.y1) && node.length) { + var xm = (x0 + x1) / 2, ym = (y0 + y1) / 2; + if (child = node[3]) + quads.push(new quad_default(child, xm, ym, x1, y1)); + if (child = node[2]) + quads.push(new quad_default(child, x0, ym, xm, y1)); + if (child = node[1]) + quads.push(new quad_default(child, xm, y0, x1, ym)); + if (child = node[0]) + quads.push(new quad_default(child, x0, y0, xm, ym)); + } + } + return this; +} + +// node_modules/d3-quadtree/src/visitAfter.js +function visitAfter_default(callback) { + var quads = [], next = [], q; + if (this._root) + quads.push(new quad_default(this._root, this._x0, this._y0, this._x1, this._y1)); + while (q = quads.pop()) { + var node = q.node; + if (node.length) { + var child, x0 = q.x0, y0 = q.y0, x1 = q.x1, y1 = q.y1, xm = (x0 + x1) / 2, ym = (y0 + y1) / 2; + if (child = node[0]) + quads.push(new quad_default(child, x0, y0, xm, ym)); + if (child = node[1]) + quads.push(new quad_default(child, xm, y0, x1, ym)); + if (child = node[2]) + quads.push(new quad_default(child, x0, ym, xm, y1)); + if (child = node[3]) + quads.push(new quad_default(child, xm, ym, x1, y1)); + } + next.push(q); + } + while (q = next.pop()) { + callback(q.node, q.x0, q.y0, q.x1, q.y1); + } + return this; +} + +// node_modules/d3-quadtree/src/x.js +function defaultX(d) { + return d[0]; +} +function x_default(_) { + return arguments.length ? (this._x = _, this) : this._x; +} + +// node_modules/d3-quadtree/src/y.js +function defaultY(d) { + return d[1]; +} +function y_default(_) { + return arguments.length ? (this._y = _, this) : this._y; +} + +// node_modules/d3-quadtree/src/quadtree.js +function quadtree(nodes, x2, y2) { + var tree = new Quadtree(x2 == null ? defaultX : x2, y2 == null ? defaultY : y2, NaN, NaN, NaN, NaN); + return nodes == null ? 
tree : tree.addAll(nodes); +} +function Quadtree(x2, y2, x0, y0, x1, y1) { + this._x = x2; + this._y = y2; + this._x0 = x0; + this._y0 = y0; + this._x1 = x1; + this._y1 = y1; + this._root = void 0; +} +function leaf_copy(leaf) { + var copy = { data: leaf.data }, next = copy; + while (leaf = leaf.next) + next = next.next = { data: leaf.data }; + return copy; +} +var treeProto = quadtree.prototype = Quadtree.prototype; +treeProto.copy = function() { + var copy = new Quadtree(this._x, this._y, this._x0, this._y0, this._x1, this._y1), node = this._root, nodes, child; + if (!node) + return copy; + if (!node.length) + return copy._root = leaf_copy(node), copy; + nodes = [{ source: node, target: copy._root = new Array(4) }]; + while (node = nodes.pop()) { + for (var i = 0; i < 4; ++i) { + if (child = node.source[i]) { + if (child.length) + nodes.push({ source: child, target: node.target[i] = new Array(4) }); + else + node.target[i] = leaf_copy(child); + } + } + } + return copy; +}; +treeProto.add = add_default; +treeProto.addAll = addAll; +treeProto.cover = cover_default; +treeProto.data = data_default2; +treeProto.extent = extent_default; +treeProto.find = find_default; +treeProto.remove = remove_default3; +treeProto.removeAll = removeAll; +treeProto.root = root_default; +treeProto.size = size_default2; +treeProto.visit = visit_default; +treeProto.visitAfter = visitAfter_default; +treeProto.x = x_default; +treeProto.y = y_default; + +// node_modules/d3-force/src/constant.js +function constant_default5(x2) { + return function() { + return x2; + }; +} + +// node_modules/d3-force/src/jiggle.js +function jiggle_default(random) { + return (random() - 0.5) * 1e-6; +} + +// node_modules/d3-force/src/link.js +function index(d) { + return d.index; +} +function find2(nodeById, nodeId) { + var node = nodeById.get(nodeId); + if (!node) + throw new Error("node not found: " + nodeId); + return node; +} +function link_default(links) { + var id2 = index, strength = defaultStrength, strengths, distance = constant_default5(30), distances, nodes, count, bias, random, iterations = 1; + if (links == null) + links = []; + function defaultStrength(link) { + return 1 / Math.min(count[link.source.index], count[link.target.index]); + } + function force(alpha) { + for (var k = 0, n = links.length; k < iterations; ++k) { + for (var i = 0, link, source, target, x2, y2, l, b; i < n; ++i) { + link = links[i], source = link.source, target = link.target; + x2 = target.x + target.vx - source.x - source.vx || jiggle_default(random); + y2 = target.y + target.vy - source.y - source.vy || jiggle_default(random); + l = Math.sqrt(x2 * x2 + y2 * y2); + l = (l - distances[i]) / l * alpha * strengths[i]; + x2 *= l, y2 *= l; + target.vx -= x2 * (b = bias[i]); + target.vy -= y2 * b; + source.vx += x2 * (b = 1 - b); + source.vy += y2 * b; + } + } + } + function initialize() { + if (!nodes) + return; + var i, n = nodes.length, m2 = links.length, nodeById = new Map(nodes.map((d, i2) => [id2(d, i2, nodes), d])), link; + for (i = 0, count = new Array(n); i < m2; ++i) { + link = links[i], link.index = i; + if (typeof link.source !== "object") + link.source = find2(nodeById, link.source); + if (typeof link.target !== "object") + link.target = find2(nodeById, link.target); + count[link.source.index] = (count[link.source.index] || 0) + 1; + count[link.target.index] = (count[link.target.index] || 0) + 1; + } + for (i = 0, bias = new Array(m2); i < m2; ++i) { + link = links[i], bias[i] = count[link.source.index] / (count[link.source.index] + 
count[link.target.index]); + } + strengths = new Array(m2), initializeStrength(); + distances = new Array(m2), initializeDistance(); + } + function initializeStrength() { + if (!nodes) + return; + for (var i = 0, n = links.length; i < n; ++i) { + strengths[i] = +strength(links[i], i, links); + } + } + function initializeDistance() { + if (!nodes) + return; + for (var i = 0, n = links.length; i < n; ++i) { + distances[i] = +distance(links[i], i, links); + } + } + force.initialize = function(_nodes, _random) { + nodes = _nodes; + random = _random; + initialize(); + }; + force.links = function(_) { + return arguments.length ? (links = _, initialize(), force) : links; + }; + force.id = function(_) { + return arguments.length ? (id2 = _, force) : id2; + }; + force.iterations = function(_) { + return arguments.length ? (iterations = +_, force) : iterations; + }; + force.strength = function(_) { + return arguments.length ? (strength = typeof _ === "function" ? _ : constant_default5(+_), initializeStrength(), force) : strength; + }; + force.distance = function(_) { + return arguments.length ? (distance = typeof _ === "function" ? _ : constant_default5(+_), initializeDistance(), force) : distance; + }; + return force; +} + +// node_modules/d3-force/src/lcg.js +var a = 1664525; +var c = 1013904223; +var m = 4294967296; +function lcg_default() { + let s = 1; + return () => (s = (a * s + c) % m) / m; +} + +// node_modules/d3-force/src/simulation.js +function x(d) { + return d.x; +} +function y(d) { + return d.y; +} +var initialRadius = 10; +var initialAngle = Math.PI * (3 - Math.sqrt(5)); +function simulation_default(nodes) { + var simulation, alpha = 1, alphaMin = 1e-3, alphaDecay = 1 - Math.pow(alphaMin, 1 / 300), alphaTarget = 0, velocityDecay = 0.6, forces = /* @__PURE__ */ new Map(), stepper = timer(step), event = dispatch_default("tick", "end"), random = lcg_default(); + if (nodes == null) + nodes = []; + function step() { + tick(); + event.call("tick", simulation); + if (alpha < alphaMin) { + stepper.stop(); + event.call("end", simulation); + } + } + function tick(iterations) { + var i, n = nodes.length, node; + if (iterations === void 0) + iterations = 1; + for (var k = 0; k < iterations; ++k) { + alpha += (alphaTarget - alpha) * alphaDecay; + forces.forEach(function(force) { + force(alpha); + }); + for (i = 0; i < n; ++i) { + node = nodes[i]; + if (node.fx == null) + node.x += node.vx *= velocityDecay; + else + node.x = node.fx, node.vx = 0; + if (node.fy == null) + node.y += node.vy *= velocityDecay; + else + node.y = node.fy, node.vy = 0; + } + } + return simulation; + } + function initializeNodes() { + for (var i = 0, n = nodes.length, node; i < n; ++i) { + node = nodes[i], node.index = i; + if (node.fx != null) + node.x = node.fx; + if (node.fy != null) + node.y = node.fy; + if (isNaN(node.x) || isNaN(node.y)) { + var radius = initialRadius * Math.sqrt(0.5 + i), angle = i * initialAngle; + node.x = radius * Math.cos(angle); + node.y = radius * Math.sin(angle); + } + if (isNaN(node.vx) || isNaN(node.vy)) { + node.vx = node.vy = 0; + } + } + } + function initializeForce(force) { + if (force.initialize) + force.initialize(nodes, random); + return force; + } + initializeNodes(); + return simulation = { + tick, + restart: function() { + return stepper.restart(step), simulation; + }, + stop: function() { + return stepper.stop(), simulation; + }, + nodes: function(_) { + return arguments.length ? 
(nodes = _, initializeNodes(), forces.forEach(initializeForce), simulation) : nodes; + }, + alpha: function(_) { + return arguments.length ? (alpha = +_, simulation) : alpha; + }, + alphaMin: function(_) { + return arguments.length ? (alphaMin = +_, simulation) : alphaMin; + }, + alphaDecay: function(_) { + return arguments.length ? (alphaDecay = +_, simulation) : +alphaDecay; + }, + alphaTarget: function(_) { + return arguments.length ? (alphaTarget = +_, simulation) : alphaTarget; + }, + velocityDecay: function(_) { + return arguments.length ? (velocityDecay = 1 - _, simulation) : 1 - velocityDecay; + }, + randomSource: function(_) { + return arguments.length ? (random = _, forces.forEach(initializeForce), simulation) : random; + }, + force: function(name, _) { + return arguments.length > 1 ? (_ == null ? forces.delete(name) : forces.set(name, initializeForce(_)), simulation) : forces.get(name); + }, + find: function(x2, y2, radius) { + var i = 0, n = nodes.length, dx, dy, d2, node, closest; + if (radius == null) + radius = Infinity; + else + radius *= radius; + for (i = 0; i < n; ++i) { + node = nodes[i]; + dx = x2 - node.x; + dy = y2 - node.y; + d2 = dx * dx + dy * dy; + if (d2 < radius) + closest = node, radius = d2; + } + return closest; + }, + on: function(name, _) { + return arguments.length > 1 ? (event.on(name, _), simulation) : event.on(name); + } + }; +} + +// node_modules/d3-force/src/manyBody.js +function manyBody_default() { + var nodes, node, random, alpha, strength = constant_default5(-30), strengths, distanceMin2 = 1, distanceMax2 = Infinity, theta2 = 0.81; + function force(_) { + var i, n = nodes.length, tree = quadtree(nodes, x, y).visitAfter(accumulate); + for (alpha = _, i = 0; i < n; ++i) + node = nodes[i], tree.visit(apply); + } + function initialize() { + if (!nodes) + return; + var i, n = nodes.length, node2; + strengths = new Array(n); + for (i = 0; i < n; ++i) + node2 = nodes[i], strengths[node2.index] = +strength(node2, i, nodes); + } + function accumulate(quad) { + var strength2 = 0, q, c2, weight = 0, x2, y2, i; + if (quad.length) { + for (x2 = y2 = i = 0; i < 4; ++i) { + if ((q = quad[i]) && (c2 = Math.abs(q.value))) { + strength2 += q.value, weight += c2, x2 += c2 * q.x, y2 += c2 * q.y; + } + } + quad.x = x2 / weight; + quad.y = y2 / weight; + } else { + q = quad; + q.x = q.data.x; + q.y = q.data.y; + do + strength2 += strengths[q.data.index]; + while (q = q.next); + } + quad.value = strength2; + } + function apply(quad, x1, _, x2) { + if (!quad.value) + return true; + var x3 = quad.x - node.x, y2 = quad.y - node.y, w = x2 - x1, l = x3 * x3 + y2 * y2; + if (w * w / theta2 < l) { + if (l < distanceMax2) { + if (x3 === 0) + x3 = jiggle_default(random), l += x3 * x3; + if (y2 === 0) + y2 = jiggle_default(random), l += y2 * y2; + if (l < distanceMin2) + l = Math.sqrt(distanceMin2 * l); + node.vx += x3 * quad.value * alpha / l; + node.vy += y2 * quad.value * alpha / l; + } + return true; + } else if (quad.length || l >= distanceMax2) + return; + if (quad.data !== node || quad.next) { + if (x3 === 0) + x3 = jiggle_default(random), l += x3 * x3; + if (y2 === 0) + y2 = jiggle_default(random), l += y2 * y2; + if (l < distanceMin2) + l = Math.sqrt(distanceMin2 * l); + } + do + if (quad.data !== node) { + w = strengths[quad.data.index] * alpha / l; + node.vx += x3 * w; + node.vy += y2 * w; + } + while (quad = quad.next); + } + force.initialize = function(_nodes, _random) { + nodes = _nodes; + random = _random; + initialize(); + }; + force.strength = function(_) { + 
return arguments.length ? (strength = typeof _ === "function" ? _ : constant_default5(+_), initialize(), force) : strength; + }; + force.distanceMin = function(_) { + return arguments.length ? (distanceMin2 = _ * _, force) : Math.sqrt(distanceMin2); + }; + force.distanceMax = function(_) { + return arguments.length ? (distanceMax2 = _ * _, force) : Math.sqrt(distanceMax2); + }; + force.theta = function(_) { + return arguments.length ? (theta2 = _ * _, force) : Math.sqrt(theta2); + }; + return force; +} + +// node_modules/d3-zoom/src/constant.js +var constant_default6 = (x2) => () => x2; + +// node_modules/d3-zoom/src/event.js +function ZoomEvent(type2, { + sourceEvent, + target, + transform: transform2, + dispatch: dispatch2 +}) { + Object.defineProperties(this, { + type: { value: type2, enumerable: true, configurable: true }, + sourceEvent: { value: sourceEvent, enumerable: true, configurable: true }, + target: { value: target, enumerable: true, configurable: true }, + transform: { value: transform2, enumerable: true, configurable: true }, + _: { value: dispatch2 } + }); +} + +// node_modules/d3-zoom/src/transform.js +function Transform(k, x2, y2) { + this.k = k; + this.x = x2; + this.y = y2; +} +Transform.prototype = { + constructor: Transform, + scale: function(k) { + return k === 1 ? this : new Transform(this.k * k, this.x, this.y); + }, + translate: function(x2, y2) { + return x2 === 0 & y2 === 0 ? this : new Transform(this.k, this.x + this.k * x2, this.y + this.k * y2); + }, + apply: function(point) { + return [point[0] * this.k + this.x, point[1] * this.k + this.y]; + }, + applyX: function(x2) { + return x2 * this.k + this.x; + }, + applyY: function(y2) { + return y2 * this.k + this.y; + }, + invert: function(location) { + return [(location[0] - this.x) / this.k, (location[1] - this.y) / this.k]; + }, + invertX: function(x2) { + return (x2 - this.x) / this.k; + }, + invertY: function(y2) { + return (y2 - this.y) / this.k; + }, + rescaleX: function(x2) { + return x2.copy().domain(x2.range().map(this.invertX, this).map(x2.invert, x2)); + }, + rescaleY: function(y2) { + return y2.copy().domain(y2.range().map(this.invertY, this).map(y2.invert, y2)); + }, + toString: function() { + return "translate(" + this.x + "," + this.y + ") scale(" + this.k + ")"; + } +}; +var identity2 = new Transform(1, 0, 0); +transform.prototype = Transform.prototype; +function transform(node) { + while (!node.__zoom) + if (!(node = node.parentNode)) + return identity2; + return node.__zoom; +} + +// node_modules/d3-zoom/src/noevent.js +function nopropagation3(event) { + event.stopImmediatePropagation(); +} +function noevent_default3(event) { + event.preventDefault(); + event.stopImmediatePropagation(); +} + +// node_modules/d3-zoom/src/zoom.js +function defaultFilter2(event) { + return (!event.ctrlKey || event.type === "wheel") && !event.button; +} +function defaultExtent() { + var e = this; + if (e instanceof SVGElement) { + e = e.ownerSVGElement || e; + if (e.hasAttribute("viewBox")) { + e = e.viewBox.baseVal; + return [[e.x, e.y], [e.x + e.width, e.y + e.height]]; + } + return [[0, 0], [e.width.baseVal.value, e.height.baseVal.value]]; + } + return [[0, 0], [e.clientWidth, e.clientHeight]]; +} +function defaultTransform() { + return this.__zoom || identity2; +} +function defaultWheelDelta(event) { + return -event.deltaY * (event.deltaMode === 1 ? 0.05 : event.deltaMode ? 1 : 2e-3) * (event.ctrlKey ? 
10 : 1); +} +function defaultTouchable2() { + return navigator.maxTouchPoints || "ontouchstart" in this; +} +function defaultConstrain(transform2, extent, translateExtent) { + var dx0 = transform2.invertX(extent[0][0]) - translateExtent[0][0], dx1 = transform2.invertX(extent[1][0]) - translateExtent[1][0], dy0 = transform2.invertY(extent[0][1]) - translateExtent[0][1], dy1 = transform2.invertY(extent[1][1]) - translateExtent[1][1]; + return transform2.translate( + dx1 > dx0 ? (dx0 + dx1) / 2 : Math.min(0, dx0) || Math.max(0, dx1), + dy1 > dy0 ? (dy0 + dy1) / 2 : Math.min(0, dy0) || Math.max(0, dy1) + ); +} +function zoom_default2() { + var filter2 = defaultFilter2, extent = defaultExtent, constrain = defaultConstrain, wheelDelta = defaultWheelDelta, touchable = defaultTouchable2, scaleExtent = [0, Infinity], translateExtent = [[-Infinity, -Infinity], [Infinity, Infinity]], duration = 250, interpolate = zoom_default, listeners = dispatch_default("start", "zoom", "end"), touchstarting, touchfirst, touchending, touchDelay = 500, wheelDelay = 150, clickDistance2 = 0, tapDistance = 10; + function zoom(selection2) { + selection2.property("__zoom", defaultTransform).on("wheel.zoom", wheeled, { passive: false }).on("mousedown.zoom", mousedowned).on("dblclick.zoom", dblclicked).filter(touchable).on("touchstart.zoom", touchstarted).on("touchmove.zoom", touchmoved).on("touchend.zoom touchcancel.zoom", touchended).style("-webkit-tap-highlight-color", "rgba(0,0,0,0)"); + } + zoom.transform = function(collection, transform2, point, event) { + var selection2 = collection.selection ? collection.selection() : collection; + selection2.property("__zoom", defaultTransform); + if (collection !== selection2) { + schedule(collection, transform2, point, event); + } else { + selection2.interrupt().each(function() { + gesture(this, arguments).event(event).start().zoom(null, typeof transform2 === "function" ? transform2.apply(this, arguments) : transform2).end(); + }); + } + }; + zoom.scaleBy = function(selection2, k, p, event) { + zoom.scaleTo(selection2, function() { + var k0 = this.__zoom.k, k1 = typeof k === "function" ? k.apply(this, arguments) : k; + return k0 * k1; + }, p, event); + }; + zoom.scaleTo = function(selection2, k, p, event) { + zoom.transform(selection2, function() { + var e = extent.apply(this, arguments), t0 = this.__zoom, p0 = p == null ? centroid(e) : typeof p === "function" ? p.apply(this, arguments) : p, p1 = t0.invert(p0), k1 = typeof k === "function" ? k.apply(this, arguments) : k; + return constrain(translate(scale(t0, k1), p0, p1), e, translateExtent); + }, p, event); + }; + zoom.translateBy = function(selection2, x2, y2, event) { + zoom.transform(selection2, function() { + return constrain(this.__zoom.translate( + typeof x2 === "function" ? x2.apply(this, arguments) : x2, + typeof y2 === "function" ? y2.apply(this, arguments) : y2 + ), extent.apply(this, arguments), translateExtent); + }, null, event); + }; + zoom.translateTo = function(selection2, x2, y2, p, event) { + zoom.transform(selection2, function() { + var e = extent.apply(this, arguments), t = this.__zoom, p0 = p == null ? centroid(e) : typeof p === "function" ? p.apply(this, arguments) : p; + return constrain(identity2.translate(p0[0], p0[1]).scale(t.k).translate( + typeof x2 === "function" ? -x2.apply(this, arguments) : -x2, + typeof y2 === "function" ? 
-y2.apply(this, arguments) : -y2 + ), e, translateExtent); + }, p, event); + }; + function scale(transform2, k) { + k = Math.max(scaleExtent[0], Math.min(scaleExtent[1], k)); + return k === transform2.k ? transform2 : new Transform(k, transform2.x, transform2.y); + } + function translate(transform2, p0, p1) { + var x2 = p0[0] - p1[0] * transform2.k, y2 = p0[1] - p1[1] * transform2.k; + return x2 === transform2.x && y2 === transform2.y ? transform2 : new Transform(transform2.k, x2, y2); + } + function centroid(extent2) { + return [(+extent2[0][0] + +extent2[1][0]) / 2, (+extent2[0][1] + +extent2[1][1]) / 2]; + } + function schedule(transition2, transform2, point, event) { + transition2.on("start.zoom", function() { + gesture(this, arguments).event(event).start(); + }).on("interrupt.zoom end.zoom", function() { + gesture(this, arguments).event(event).end(); + }).tween("zoom", function() { + var that = this, args = arguments, g = gesture(that, args).event(event), e = extent.apply(that, args), p = point == null ? centroid(e) : typeof point === "function" ? point.apply(that, args) : point, w = Math.max(e[1][0] - e[0][0], e[1][1] - e[0][1]), a2 = that.__zoom, b = typeof transform2 === "function" ? transform2.apply(that, args) : transform2, i = interpolate(a2.invert(p).concat(w / a2.k), b.invert(p).concat(w / b.k)); + return function(t) { + if (t === 1) + t = b; + else { + var l = i(t), k = w / l[2]; + t = new Transform(k, p[0] - l[0] * k, p[1] - l[1] * k); + } + g.zoom(null, t); + }; + }); + } + function gesture(that, args, clean) { + return !clean && that.__zooming || new Gesture(that, args); + } + function Gesture(that, args) { + this.that = that; + this.args = args; + this.active = 0; + this.sourceEvent = null; + this.extent = extent.apply(that, args); + this.taps = 0; + } + Gesture.prototype = { + event: function(event) { + if (event) + this.sourceEvent = event; + return this; + }, + start: function() { + if (++this.active === 1) { + this.that.__zooming = this; + this.emit("start"); + } + return this; + }, + zoom: function(key, transform2) { + if (this.mouse && key !== "mouse") + this.mouse[1] = transform2.invert(this.mouse[0]); + if (this.touch0 && key !== "touch") + this.touch0[1] = transform2.invert(this.touch0[0]); + if (this.touch1 && key !== "touch") + this.touch1[1] = transform2.invert(this.touch1[0]); + this.that.__zoom = transform2; + this.emit("zoom"); + return this; + }, + end: function() { + if (--this.active === 0) { + delete this.that.__zooming; + this.emit("end"); + } + return this; + }, + emit: function(type2) { + var d = select_default2(this.that).datum(); + listeners.call( + type2, + this.that, + new ZoomEvent(type2, { + sourceEvent: this.sourceEvent, + target: zoom, + type: type2, + transform: this.that.__zoom, + dispatch: listeners + }), + d + ); + } + }; + function wheeled(event, ...args) { + if (!filter2.apply(this, arguments)) + return; + var g = gesture(this, args).event(event), t = this.__zoom, k = Math.max(scaleExtent[0], Math.min(scaleExtent[1], t.k * Math.pow(2, wheelDelta.apply(this, arguments)))), p = pointer_default(event); + if (g.wheel) { + if (g.mouse[0][0] !== p[0] || g.mouse[0][1] !== p[1]) { + g.mouse[1] = t.invert(g.mouse[0] = p); + } + clearTimeout(g.wheel); + } else if (t.k === k) + return; + else { + g.mouse = [p, t.invert(p)]; + interrupt_default(this); + g.start(); + } + noevent_default3(event); + g.wheel = setTimeout(wheelidled, wheelDelay); + g.zoom("mouse", constrain(translate(scale(t, k), g.mouse[0], g.mouse[1]), g.extent, translateExtent)); + 
function wheelidled() { + g.wheel = null; + g.end(); + } + } + function mousedowned(event, ...args) { + if (touchending || !filter2.apply(this, arguments)) + return; + var currentTarget = event.currentTarget, g = gesture(this, args, true).event(event), v = select_default2(event.view).on("mousemove.zoom", mousemoved, true).on("mouseup.zoom", mouseupped, true), p = pointer_default(event, currentTarget), x0 = event.clientX, y0 = event.clientY; + nodrag_default(event.view); + nopropagation3(event); + g.mouse = [p, this.__zoom.invert(p)]; + interrupt_default(this); + g.start(); + function mousemoved(event2) { + noevent_default3(event2); + if (!g.moved) { + var dx = event2.clientX - x0, dy = event2.clientY - y0; + g.moved = dx * dx + dy * dy > clickDistance2; + } + g.event(event2).zoom("mouse", constrain(translate(g.that.__zoom, g.mouse[0] = pointer_default(event2, currentTarget), g.mouse[1]), g.extent, translateExtent)); + } + function mouseupped(event2) { + v.on("mousemove.zoom mouseup.zoom", null); + yesdrag(event2.view, g.moved); + noevent_default3(event2); + g.event(event2).end(); + } + } + function dblclicked(event, ...args) { + if (!filter2.apply(this, arguments)) + return; + var t0 = this.__zoom, p0 = pointer_default(event.changedTouches ? event.changedTouches[0] : event, this), p1 = t0.invert(p0), k1 = t0.k * (event.shiftKey ? 0.5 : 2), t1 = constrain(translate(scale(t0, k1), p0, p1), extent.apply(this, args), translateExtent); + noevent_default3(event); + if (duration > 0) + select_default2(this).transition().duration(duration).call(schedule, t1, p0, event); + else + select_default2(this).call(zoom.transform, t1, p0, event); + } + function touchstarted(event, ...args) { + if (!filter2.apply(this, arguments)) + return; + var touches = event.touches, n = touches.length, g = gesture(this, args, event.changedTouches.length === n).event(event), started, i, t, p; + nopropagation3(event); + for (i = 0; i < n; ++i) { + t = touches[i], p = pointer_default(t, this); + p = [p, this.__zoom.invert(p), t.identifier]; + if (!g.touch0) + g.touch0 = p, started = true, g.taps = 1 + !!touchstarting; + else if (!g.touch1 && g.touch0[2] !== p[2]) + g.touch1 = p, g.taps = 0; + } + if (touchstarting) + touchstarting = clearTimeout(touchstarting); + if (started) { + if (g.taps < 2) + touchfirst = p[0], touchstarting = setTimeout(function() { + touchstarting = null; + }, touchDelay); + interrupt_default(this); + g.start(); + } + } + function touchmoved(event, ...args) { + if (!this.__zooming) + return; + var g = gesture(this, args).event(event), touches = event.changedTouches, n = touches.length, i, t, p, l; + noevent_default3(event); + for (i = 0; i < n; ++i) { + t = touches[i], p = pointer_default(t, this); + if (g.touch0 && g.touch0[2] === t.identifier) + g.touch0[0] = p; + else if (g.touch1 && g.touch1[2] === t.identifier) + g.touch1[0] = p; + } + t = g.that.__zoom; + if (g.touch1) { + var p0 = g.touch0[0], l0 = g.touch0[1], p1 = g.touch1[0], l1 = g.touch1[1], dp = (dp = p1[0] - p0[0]) * dp + (dp = p1[1] - p0[1]) * dp, dl = (dl = l1[0] - l0[0]) * dl + (dl = l1[1] - l0[1]) * dl; + t = scale(t, Math.sqrt(dp / dl)); + p = [(p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2]; + l = [(l0[0] + l1[0]) / 2, (l0[1] + l1[1]) / 2]; + } else if (g.touch0) + p = g.touch0[0], l = g.touch0[1]; + else + return; + g.zoom("touch", constrain(translate(t, p, l), g.extent, translateExtent)); + } + function touchended(event, ...args) { + if (!this.__zooming) + return; + var g = gesture(this, args).event(event), touches = 
event.changedTouches, n = touches.length, i, t; + nopropagation3(event); + if (touchending) + clearTimeout(touchending); + touchending = setTimeout(function() { + touchending = null; + }, touchDelay); + for (i = 0; i < n; ++i) { + t = touches[i]; + if (g.touch0 && g.touch0[2] === t.identifier) + delete g.touch0; + else if (g.touch1 && g.touch1[2] === t.identifier) + delete g.touch1; + } + if (g.touch1 && !g.touch0) + g.touch0 = g.touch1, delete g.touch1; + if (g.touch0) + g.touch0[1] = this.__zoom.invert(g.touch0[0]); + else { + g.end(); + if (g.taps === 2) { + t = pointer_default(t, this); + if (Math.hypot(touchfirst[0] - t[0], touchfirst[1] - t[1]) < tapDistance) { + var p = select_default2(this).on("dblclick.zoom"); + if (p) + p.apply(this, arguments); + } + } + } + } + zoom.wheelDelta = function(_) { + return arguments.length ? (wheelDelta = typeof _ === "function" ? _ : constant_default6(+_), zoom) : wheelDelta; + }; + zoom.filter = function(_) { + return arguments.length ? (filter2 = typeof _ === "function" ? _ : constant_default6(!!_), zoom) : filter2; + }; + zoom.touchable = function(_) { + return arguments.length ? (touchable = typeof _ === "function" ? _ : constant_default6(!!_), zoom) : touchable; + }; + zoom.extent = function(_) { + return arguments.length ? (extent = typeof _ === "function" ? _ : constant_default6([[+_[0][0], +_[0][1]], [+_[1][0], +_[1][1]]]), zoom) : extent; + }; + zoom.scaleExtent = function(_) { + return arguments.length ? (scaleExtent[0] = +_[0], scaleExtent[1] = +_[1], zoom) : [scaleExtent[0], scaleExtent[1]]; + }; + zoom.translateExtent = function(_) { + return arguments.length ? (translateExtent[0][0] = +_[0][0], translateExtent[1][0] = +_[1][0], translateExtent[0][1] = +_[0][1], translateExtent[1][1] = +_[1][1], zoom) : [[translateExtent[0][0], translateExtent[0][1]], [translateExtent[1][0], translateExtent[1][1]]]; + }; + zoom.constrain = function(_) { + return arguments.length ? (constrain = _, zoom) : constrain; + }; + zoom.duration = function(_) { + return arguments.length ? (duration = +_, zoom) : duration; + }; + zoom.interpolate = function(_) { + return arguments.length ? (interpolate = _, zoom) : interpolate; + }; + zoom.on = function() { + var value = listeners.on.apply(listeners, arguments); + return value === listeners ? zoom : value; + }; + zoom.clickDistance = function(_) { + return arguments.length ? (clickDistance2 = (_ = +_) * _, zoom) : Math.sqrt(clickDistance2); + }; + zoom.tapDistance = function(_) { + return arguments.length ? 
(tapDistance = +_, zoom) : tapDistance; + }; + return zoom; +} + +// quartz/components/scripts/util.ts +function registerEscapeHandler(outsideContainer, cb) { + if (!outsideContainer) + return; + function click(e) { + if (e.target !== this) + return; + e.preventDefault(); + cb(); + } + function esc(e) { + if (!e.key.startsWith("Esc")) + return; + e.preventDefault(); + cb(); + } + outsideContainer?.removeEventListener("click", click); + outsideContainer?.addEventListener("click", click); + document.removeEventListener("keydown", esc); + document.addEventListener("keydown", esc); +} +function removeAllChildren(node) { + while (node.firstChild) { + node.removeChild(node.firstChild); + } +} + +// node_modules/github-slugger/index.js +var own = Object.hasOwnProperty; + +// quartz/util/path.ts +function getFullSlug(window2) { + const res = window2.document.body.dataset.slug; + return res; +} +function simplifySlug(fp) { + return _stripSlashes(_trimSuffix(fp, "index"), true); +} +function pathToRoot(slug2) { + let rootPath = slug2.split("/").filter((x2) => x2 !== "").slice(0, -1).map((_) => "..").join("/"); + if (rootPath.length === 0) { + rootPath = "."; + } + return rootPath; +} +function resolveRelative(current, target) { + const res = joinSegments(pathToRoot(current), simplifySlug(target)); + return res; +} +function joinSegments(...args) { + return args.filter((segment) => segment !== "").join("/"); +} +function _endsWith(s, suffix) { + return s === suffix || s.endsWith("/" + suffix); +} +function _trimSuffix(s, suffix) { + if (_endsWith(s, suffix)) { + s = s.slice(0, -suffix.length); + } + return s; +} +function _stripSlashes(s, onlyStripPrefix) { + if (s.startsWith("/")) { + s = s.substring(1); + } + if (!onlyStripPrefix && s.endsWith("/")) { + s = s.slice(0, -1); + } + return s; +} + +// quartz/components/scripts/quartz/components/scripts/graph.inline.ts +var localStorageKey = "graph-visited"; +function getVisited() { + return new Set(JSON.parse(localStorage.getItem(localStorageKey) ?? "[]")); +} +function addToVisited(slug2) { + const visited = getVisited(); + visited.add(slug2); + localStorage.setItem(localStorageKey, JSON.stringify([...visited])); +} +async function renderGraph(container, fullSlug) { + const slug2 = simplifySlug(fullSlug); + const visited = getVisited(); + const graph = document.getElementById(container); + if (!graph) + return; + removeAllChildren(graph); + let { + drag: enableDrag, + zoom: enableZoom, + depth, + scale, + repelForce, + centerForce, + linkDistance, + fontSize, + opacityScale + } = JSON.parse(graph.dataset["cfg"]); + const data = await fetchData; + const links = []; + for (const [src, details] of Object.entries(data)) { + const source = simplifySlug(src); + const outgoing = details.links ?? 
[]; + for (const dest of outgoing) { + if (dest in data) { + links.push({ source, target: dest }); + } + } + } + const neighbourhood = /* @__PURE__ */ new Set(); + const wl = [slug2, "__SENTINEL"]; + if (depth >= 0) { + while (depth >= 0 && wl.length > 0) { + const cur = wl.shift(); + if (cur === "__SENTINEL") { + depth--; + wl.push("__SENTINEL"); + } else { + neighbourhood.add(cur); + const outgoing = links.filter((l) => l.source === cur); + const incoming = links.filter((l) => l.target === cur); + wl.push(...outgoing.map((l) => l.target), ...incoming.map((l) => l.source)); + } + } + } else { + Object.keys(data).forEach((id2) => neighbourhood.add(simplifySlug(id2))); + } + const graphData = { + nodes: [...neighbourhood].map((url) => ({ + id: url, + text: data[url]?.title ?? url, + tags: data[url]?.tags ?? [] + })), + links: links.filter((l) => neighbourhood.has(l.source) && neighbourhood.has(l.target)) + }; + const simulation = simulation_default(graphData.nodes).force("charge", manyBody_default().strength(-100 * repelForce)).force( + "link", + link_default(graphData.links).id((d) => d.id).distance(linkDistance) + ).force("center", center_default().strength(centerForce)); + const height = Math.max(graph.offsetHeight, 250); + const width = graph.offsetWidth; + const svg = select_default2("#" + container).append("svg").attr("width", width).attr("height", height).attr("viewBox", [-width / 2 / scale, -height / 2 / scale, width / scale, height / scale]); + const link = svg.append("g").selectAll("line").data(graphData.links).join("line").attr("class", "link").attr("stroke", "var(--lightgray)").attr("stroke-width", 1); + const graphNode = svg.append("g").selectAll("g").data(graphData.nodes).enter().append("g"); + const color2 = (d) => { + const isCurrent = d.id === slug2; + if (isCurrent) { + return "var(--secondary)"; + } else if (visited.has(d.id)) { + return "var(--tertiary)"; + } else { + return "var(--gray)"; + } + }; + const drag = (simulation2) => { + function dragstarted(event, d) { + if (!event.active) + simulation2.alphaTarget(1).restart(); + d.fx = d.x; + d.fy = d.y; + } + function dragged(event, d) { + d.fx = event.x; + d.fy = event.y; + } + function dragended(event, d) { + if (!event.active) + simulation2.alphaTarget(0); + d.fx = null; + d.fy = null; + } + const noop2 = () => { + }; + return drag_default().on("start", enableDrag ? dragstarted : noop2).on("drag", enableDrag ? dragged : noop2).on("end", enableDrag ? dragended : noop2); + }; + function nodeRadius(d) { + const numLinks = links.filter((l) => l.source.id === d.id || l.target.id === d.id).length; + return 2 + Math.sqrt(numLinks); + } + const node = graphNode.append("circle").attr("class", "node").attr("id", (d) => d.id).attr("r", nodeRadius).attr("fill", color2).style("cursor", "pointer").on("click", (_, d) => { + const targ = resolveRelative(fullSlug, d.id); + window.spaNavigate(new URL(targ, window.location.toString())); + }).on("mouseover", function(_, d) { + const neighbours = data[fullSlug].links ?? 
[]; + const neighbourNodes = selectAll_default2(".node").filter((d2) => neighbours.includes(d2.id)); + const currentId = d.id; + const linkNodes = selectAll_default2(".link").filter((d2) => d2.source.id === currentId || d2.target.id === currentId); + neighbourNodes.transition().duration(200).attr("fill", color2); + linkNodes.transition().duration(200).attr("stroke", "var(--gray)").attr("stroke-width", 1); + const bigFont = fontSize * 1.5; + const parent = this.parentNode; + select_default2(parent).raise().select("text").transition().duration(200).attr("opacityOld", select_default2(parent).select("text").style("opacity")).style("opacity", 1).style("font-size", bigFont + "em"); + }).on("mouseleave", function(_, d) { + const currentId = d.id; + const linkNodes = selectAll_default2(".link").filter((d2) => d2.source.id === currentId || d2.target.id === currentId); + linkNodes.transition().duration(200).attr("stroke", "var(--lightgray)"); + const parent = this.parentNode; + select_default2(parent).select("text").transition().duration(200).style("opacity", select_default2(parent).select("text").attr("opacityOld")).style("font-size", fontSize + "em"); + }).call(drag(simulation)); + const labels = graphNode.append("text").attr("dx", 0).attr("dy", (d) => -nodeRadius(d) + "px").attr("text-anchor", "middle").text( + (d) => data[d.id]?.title || (d.id.charAt(1).toUpperCase() + d.id.slice(2)).replace("-", " ") + ).style("opacity", (opacityScale - 1) / 3.75).style("pointer-events", "none").style("font-size", fontSize + "em").raise().call(drag(simulation)); + if (enableZoom) { + svg.call( + zoom_default2().extent([ + [0, 0], + [width, height] + ]).scaleExtent([0.25, 4]).on("zoom", ({ transform: transform2 }) => { + link.attr("transform", transform2); + node.attr("transform", transform2); + const scale2 = transform2.k * opacityScale; + const scaledOpacity = Math.max((scale2 - 1) / 3.75, 0); + labels.attr("transform", transform2).style("opacity", scaledOpacity); + }) + ); + } + simulation.on("tick", () => { + link.attr("x1", (d) => d.source.x).attr("y1", (d) => d.source.y).attr("x2", (d) => d.target.x).attr("y2", (d) => d.target.y); + node.attr("cx", (d) => d.x).attr("cy", (d) => d.y); + labels.attr("x", (d) => d.x).attr("y", (d) => d.y); + }); +} +function renderGlobalGraph() { + const slug2 = getFullSlug(window); + const container = document.getElementById("global-graph-outer"); + const sidebar = container?.closest(".sidebar"); + container?.classList.add("active"); + if (sidebar) { + sidebar.style.zIndex = "1"; + } + renderGraph("global-graph-container", slug2); + function hideGlobalGraph() { + container?.classList.remove("active"); + const graph = document.getElementById("global-graph-container"); + if (sidebar) { + sidebar.style.zIndex = "unset"; + } + if (!graph) + return; + removeAllChildren(graph); + } + registerEscapeHandler(container, hideGlobalGraph); +} +document.addEventListener("nav", async (e) => { + const slug2 = e.detail.url; + addToVisited(slug2); + await renderGraph("graph-container", slug2); + const containerIcon = document.getElementById("global-graph-icon"); + containerIcon?.removeEventListener("click", renderGlobalGraph); + containerIcon?.addEventListener("click", renderGlobalGraph); +}); +})(); +(function () {// node_modules/@floating-ui/core/dist/floating-ui.core.browser.min.mjs +function t(t2) { + return t2.split("-")[1]; +} +function e(t2) { + return "y" === t2 ? 
"height" : "width"; +} +function n(t2) { + return t2.split("-")[0]; +} +function o(t2) { + return ["top", "bottom"].includes(n(t2)) ? "x" : "y"; +} +function i(i3, r3, a3) { + let { reference: l3, floating: s3 } = i3; + const c3 = l3.x + l3.width / 2 - s3.width / 2, f3 = l3.y + l3.height / 2 - s3.height / 2, m3 = o(r3), u3 = e(m3), g3 = l3[u3] / 2 - s3[u3] / 2, d3 = "x" === m3; + let p4; + switch (n(r3)) { + case "top": + p4 = { x: c3, y: l3.y - s3.height }; + break; + case "bottom": + p4 = { x: c3, y: l3.y + l3.height }; + break; + case "right": + p4 = { x: l3.x + l3.width, y: f3 }; + break; + case "left": + p4 = { x: l3.x - s3.width, y: f3 }; + break; + default: + p4 = { x: l3.x, y: l3.y }; + } + switch (t(r3)) { + case "start": + p4[m3] -= g3 * (a3 && d3 ? -1 : 1); + break; + case "end": + p4[m3] += g3 * (a3 && d3 ? -1 : 1); + } + return p4; +} +var r = async (t2, e2, n3) => { + const { placement: o3 = "bottom", strategy: r3 = "absolute", middleware: a3 = [], platform: l3 } = n3, s3 = a3.filter(Boolean), c3 = await (null == l3.isRTL ? void 0 : l3.isRTL(e2)); + let f3 = await l3.getElementRects({ reference: t2, floating: e2, strategy: r3 }), { x: m3, y: u3 } = i(f3, o3, c3), g3 = o3, d3 = {}, p4 = 0; + for (let n4 = 0; n4 < s3.length; n4++) { + const { name: a4, fn: h3 } = s3[n4], { x: y2, y: x3, data: w3, reset: v3 } = await h3({ x: m3, y: u3, initialPlacement: o3, placement: g3, strategy: r3, middlewareData: d3, rects: f3, platform: l3, elements: { reference: t2, floating: e2 } }); + m3 = null != y2 ? y2 : m3, u3 = null != x3 ? x3 : u3, d3 = { ...d3, [a4]: { ...d3[a4], ...w3 } }, v3 && p4 <= 50 && (p4++, "object" == typeof v3 && (v3.placement && (g3 = v3.placement), v3.rects && (f3 = true === v3.rects ? await l3.getElementRects({ reference: t2, floating: e2, strategy: r3 }) : v3.rects), { x: m3, y: u3 } = i(f3, g3, c3)), n4 = -1); + } + return { x: m3, y: u3, placement: g3, strategy: r3, middlewareData: d3 }; +}; +function a(t2, e2) { + return "function" == typeof t2 ? t2(e2) : t2; +} +function l(t2) { + return "number" != typeof t2 ? function(t3) { + return { top: 0, right: 0, bottom: 0, left: 0, ...t3 }; + }(t2) : { top: t2, right: t2, bottom: t2, left: t2 }; +} +function s(t2) { + return { ...t2, top: t2.y, left: t2.x, right: t2.x + t2.width, bottom: t2.y + t2.height }; +} +async function c(t2, e2) { + var n3; + void 0 === e2 && (e2 = {}); + const { x: o3, y: i3, platform: r3, rects: c3, elements: f3, strategy: m3 } = t2, { boundary: u3 = "clippingAncestors", rootBoundary: g3 = "viewport", elementContext: d3 = "floating", altBoundary: p4 = false, padding: h3 = 0 } = a(e2, t2), y2 = l(h3), x3 = f3[p4 ? "floating" === d3 ? "reference" : "floating" : d3], w3 = s(await r3.getClippingRect({ element: null == (n3 = await (null == r3.isElement ? void 0 : r3.isElement(x3))) || n3 ? x3 : x3.contextElement || await (null == r3.getDocumentElement ? void 0 : r3.getDocumentElement(f3.floating)), boundary: u3, rootBoundary: g3, strategy: m3 })), v3 = "floating" === d3 ? { ...c3.floating, x: o3, y: i3 } : c3.reference, b3 = await (null == r3.getOffsetParent ? void 0 : r3.getOffsetParent(f3.floating)), A3 = await (null == r3.isElement ? void 0 : r3.isElement(b3)) && await (null == r3.getScale ? void 0 : r3.getScale(b3)) || { x: 1, y: 1 }, R2 = s(r3.convertOffsetParentRelativeRectToViewportRelativeRect ? 
await r3.convertOffsetParentRelativeRectToViewportRelativeRect({ rect: v3, offsetParent: b3, strategy: m3 }) : v3); + return { top: (w3.top - R2.top + y2.top) / A3.y, bottom: (R2.bottom - w3.bottom + y2.bottom) / A3.y, left: (w3.left - R2.left + y2.left) / A3.x, right: (R2.right - w3.right + y2.right) / A3.x }; +} +var f = Math.min; +var m = Math.max; +function u(t2, e2, n3) { + return m(t2, f(e2, n3)); +} +var d = ["top", "right", "bottom", "left"]; +var p = d.reduce((t2, e2) => t2.concat(e2, e2 + "-start", e2 + "-end"), []); +var h = { left: "right", right: "left", bottom: "top", top: "bottom" }; +function y(t2) { + return t2.replace(/left|right|bottom|top/g, (t3) => h[t3]); +} +function x(n3, i3, r3) { + void 0 === r3 && (r3 = false); + const a3 = t(n3), l3 = o(n3), s3 = e(l3); + let c3 = "x" === l3 ? a3 === (r3 ? "end" : "start") ? "right" : "left" : "start" === a3 ? "bottom" : "top"; + return i3.reference[s3] > i3.floating[s3] && (c3 = y(c3)), { main: c3, cross: y(c3) }; +} +var w = { start: "end", end: "start" }; +function v(t2) { + return t2.replace(/start|end/g, (t3) => w[t3]); +} +var A = function(e2) { + return void 0 === e2 && (e2 = {}), { name: "flip", options: e2, async fn(o3) { + var i3; + const { placement: r3, middlewareData: l3, rects: s3, initialPlacement: f3, platform: m3, elements: u3 } = o3, { mainAxis: g3 = true, crossAxis: d3 = true, fallbackPlacements: p4, fallbackStrategy: h3 = "bestFit", fallbackAxisSideDirection: w3 = "none", flipAlignment: b3 = true, ...A3 } = a(e2, o3), R2 = n(r3), P2 = n(f3) === f3, E3 = await (null == m3.isRTL ? void 0 : m3.isRTL(u3.floating)), T3 = p4 || (P2 || !b3 ? [y(f3)] : function(t2) { + const e3 = y(t2); + return [v(t2), e3, v(e3)]; + }(f3)); + p4 || "none" === w3 || T3.push(...function(e3, o4, i4, r4) { + const a3 = t(e3); + let l4 = function(t2, e4, n3) { + const o5 = ["left", "right"], i5 = ["right", "left"], r5 = ["top", "bottom"], a4 = ["bottom", "top"]; + switch (t2) { + case "top": + case "bottom": + return n3 ? e4 ? i5 : o5 : e4 ? o5 : i5; + case "left": + case "right": + return e4 ? r5 : a4; + default: + return []; + } + }(n(e3), "start" === i4, r4); + return a3 && (l4 = l4.map((t2) => t2 + "-" + a3), o4 && (l4 = l4.concat(l4.map(v)))), l4; + }(f3, b3, w3, E3)); + const D3 = [f3, ...T3], L3 = await c(o3, A3), k2 = []; + let O3 = (null == (i3 = l3.flip) ? void 0 : i3.overflows) || []; + if (g3 && k2.push(L3[R2]), d3) { + const { main: t2, cross: e3 } = x(r3, s3, E3); + k2.push(L3[t2], L3[e3]); + } + if (O3 = [...O3, { placement: r3, overflows: k2 }], !k2.every((t2) => t2 <= 0)) { + var B3, C3; + const t2 = ((null == (B3 = l3.flip) ? void 0 : B3.index) || 0) + 1, e3 = D3[t2]; + if (e3) + return { data: { index: t2, overflows: O3 }, reset: { placement: e3 } }; + let n3 = null == (C3 = O3.filter((t3) => t3.overflows[0] <= 0).sort((t3, e4) => t3.overflows[1] - e4.overflows[1])[0]) ? void 0 : C3.placement; + if (!n3) + switch (h3) { + case "bestFit": { + var H2; + const t3 = null == (H2 = O3.map((t4) => [t4.placement, t4.overflows.filter((t5) => t5 > 0).reduce((t5, e4) => t5 + e4, 0)]).sort((t4, e4) => t4[1] - e4[1])[0]) ? 
void 0 : H2[0]; + t3 && (n3 = t3); + break; + } + case "initialPlacement": + n3 = f3; + } + if (r3 !== n3) + return { reset: { placement: n3 } }; + } + return {}; + } }; +}; +function T(t2) { + const e2 = f(...t2.map((t3) => t3.left)), n3 = f(...t2.map((t3) => t3.top)); + return { x: e2, y: n3, width: m(...t2.map((t3) => t3.right)) - e2, height: m(...t2.map((t3) => t3.bottom)) - n3 }; +} +var D = function(t2) { + return void 0 === t2 && (t2 = {}), { name: "inline", options: t2, async fn(e2) { + const { placement: i3, elements: r3, rects: c3, platform: u3, strategy: g3 } = e2, { padding: d3 = 2, x: p4, y: h3 } = a(t2, e2), y2 = Array.from(await (null == u3.getClientRects ? void 0 : u3.getClientRects(r3.reference)) || []), x3 = function(t3) { + const e3 = t3.slice().sort((t4, e4) => t4.y - e4.y), n3 = []; + let o3 = null; + for (let t4 = 0; t4 < e3.length; t4++) { + const i4 = e3[t4]; + !o3 || i4.y - o3.y > o3.height / 2 ? n3.push([i4]) : n3[n3.length - 1].push(i4), o3 = i4; + } + return n3.map((t4) => s(T(t4))); + }(y2), w3 = s(T(y2)), v3 = l(d3); + const b3 = await u3.getElementRects({ reference: { getBoundingClientRect: function() { + if (2 === x3.length && x3[0].left > x3[1].right && null != p4 && null != h3) + return x3.find((t3) => p4 > t3.left - v3.left && p4 < t3.right + v3.right && h3 > t3.top - v3.top && h3 < t3.bottom + v3.bottom) || w3; + if (x3.length >= 2) { + if ("x" === o(i3)) { + const t4 = x3[0], e4 = x3[x3.length - 1], o3 = "top" === n(i3), r5 = t4.top, a4 = e4.bottom, l4 = o3 ? t4.left : e4.left, s4 = o3 ? t4.right : e4.right; + return { top: r5, bottom: a4, left: l4, right: s4, width: s4 - l4, height: a4 - r5, x: l4, y: r5 }; + } + const t3 = "left" === n(i3), e3 = m(...x3.map((t4) => t4.right)), r4 = f(...x3.map((t4) => t4.left)), a3 = x3.filter((n3) => t3 ? n3.left === r4 : n3.right === e3), l3 = a3[0].top, s3 = a3[a3.length - 1].bottom; + return { top: l3, bottom: s3, left: r4, right: e3, width: e3 - r4, height: s3 - l3, x: r4, y: l3 }; + } + return w3; + } }, floating: r3.floating, strategy: g3 }); + return c3.reference.x !== b3.reference.x || c3.reference.y !== b3.reference.y || c3.reference.width !== b3.reference.width || c3.reference.height !== b3.reference.height ? { reset: { rects: b3 } } : {}; + } }; +}; +function k(t2) { + return "x" === t2 ? "y" : "x"; +} +var O = function(t2) { + return void 0 === t2 && (t2 = {}), { name: "shift", options: t2, async fn(e2) { + const { x: i3, y: r3, placement: l3 } = e2, { mainAxis: s3 = true, crossAxis: f3 = false, limiter: m3 = { fn: (t3) => { + let { x: e3, y: n3 } = t3; + return { x: e3, y: n3 }; + } }, ...g3 } = a(t2, e2), d3 = { x: i3, y: r3 }, p4 = await c(e2, g3), h3 = o(n(l3)), y2 = k(h3); + let x3 = d3[h3], w3 = d3[y2]; + if (s3) { + const t3 = "y" === h3 ? "bottom" : "right"; + x3 = u(x3 + p4["y" === h3 ? "top" : "left"], x3, x3 - p4[t3]); + } + if (f3) { + const t3 = "y" === y2 ? "bottom" : "right"; + w3 = u(w3 + p4["y" === y2 ? "top" : "left"], w3, w3 - p4[t3]); + } + const v3 = m3.fn({ ...e2, [h3]: x3, [y2]: w3 }); + return { ...v3, data: { x: v3.x - i3, y: v3.y - r3 } }; + } }; +}; + +// node_modules/@floating-ui/dom/dist/floating-ui.dom.browser.min.mjs +function n2(t2) { + var e2; + return (null == (e2 = t2.ownerDocument) ? void 0 : e2.defaultView) || window; +} +function o2(t2) { + return n2(t2).getComputedStyle(t2); +} +function i2(t2) { + return t2 instanceof n2(t2).Node; +} +function r2(t2) { + return i2(t2) ? 
(t2.nodeName || "").toLowerCase() : "#document"; +} +function c2(t2) { + return t2 instanceof n2(t2).HTMLElement; +} +function l2(t2) { + return t2 instanceof n2(t2).Element; +} +function s2(t2) { + return "undefined" != typeof ShadowRoot && (t2 instanceof n2(t2).ShadowRoot || t2 instanceof ShadowRoot); +} +function f2(t2) { + const { overflow: e2, overflowX: n3, overflowY: i3, display: r3 } = o2(t2); + return /auto|scroll|overlay|hidden|clip/.test(e2 + i3 + n3) && !["inline", "contents"].includes(r3); +} +function u2(t2) { + return ["table", "td", "th"].includes(r2(t2)); +} +function a2(t2) { + const e2 = d2(), n3 = o2(t2); + return "none" !== n3.transform || "none" !== n3.perspective || !!n3.containerType && "normal" !== n3.containerType || !e2 && !!n3.backdropFilter && "none" !== n3.backdropFilter || !e2 && !!n3.filter && "none" !== n3.filter || ["transform", "perspective", "filter"].some((t3) => (n3.willChange || "").includes(t3)) || ["paint", "layout", "strict", "content"].some((t3) => (n3.contain || "").includes(t3)); +} +function d2() { + return !("undefined" == typeof CSS || !CSS.supports) && CSS.supports("-webkit-backdrop-filter", "none"); +} +function h2(t2) { + return ["html", "body", "#document"].includes(r2(t2)); +} +var p2 = Math.min; +var m2 = Math.max; +var g2 = Math.round; +var w2 = (t2) => ({ x: t2, y: t2 }); +function x2(t2) { + const e2 = o2(t2); + let n3 = parseFloat(e2.width) || 0, i3 = parseFloat(e2.height) || 0; + const r3 = c2(t2), l3 = r3 ? t2.offsetWidth : n3, s3 = r3 ? t2.offsetHeight : i3, f3 = g2(n3) !== l3 || g2(i3) !== s3; + return f3 && (n3 = l3, i3 = s3), { width: n3, height: i3, $: f3 }; +} +function v2(t2) { + return l2(t2) ? t2 : t2.contextElement; +} +function b2(t2) { + const e2 = v2(t2); + if (!c2(e2)) + return w2(1); + const n3 = e2.getBoundingClientRect(), { width: o3, height: i3, $: r3 } = x2(e2); + let l3 = (r3 ? g2(n3.width) : n3.width) / o3, s3 = (r3 ? g2(n3.height) : n3.height) / i3; + return l3 && Number.isFinite(l3) || (l3 = 1), s3 && Number.isFinite(s3) || (s3 = 1), { x: l3, y: s3 }; +} +var L2 = w2(0); +function T2(t2, e2, o3) { + var i3, r3; + if (void 0 === e2 && (e2 = true), !d2()) + return L2; + const c3 = t2 ? n2(t2) : window; + return !o3 || e2 && o3 !== c3 ? L2 : { x: (null == (i3 = c3.visualViewport) ? void 0 : i3.offsetLeft) || 0, y: (null == (r3 = c3.visualViewport) ? void 0 : r3.offsetTop) || 0 }; +} +function R(e2, o3, i3, r3) { + void 0 === o3 && (o3 = false), void 0 === i3 && (i3 = false); + const c3 = e2.getBoundingClientRect(), s3 = v2(e2); + let f3 = w2(1); + o3 && (r3 ? l2(r3) && (f3 = b2(r3)) : f3 = b2(e2)); + const u3 = T2(s3, i3, r3); + let a3 = (c3.left + u3.x) / f3.x, d3 = (c3.top + u3.y) / f3.y, h3 = c3.width / f3.x, p4 = c3.height / f3.y; + if (s3) { + const t2 = n2(s3), e3 = r3 && l2(r3) ? n2(r3) : r3; + let o4 = t2.frameElement; + for (; o4 && r3 && e3 !== t2; ) { + const t3 = b2(o4), e4 = o4.getBoundingClientRect(), i4 = getComputedStyle(o4), r4 = e4.left + (o4.clientLeft + parseFloat(i4.paddingLeft)) * t3.x, c4 = e4.top + (o4.clientTop + parseFloat(i4.paddingTop)) * t3.y; + a3 *= t3.x, d3 *= t3.y, h3 *= t3.x, p4 *= t3.y, a3 += r4, d3 += c4, o4 = n2(o4).frameElement; + } + } + return s({ width: h3, height: p4, x: a3, y: d3 }); +} +function S(t2) { + return ((i2(t2) ? t2.ownerDocument : t2.document) || window.document).documentElement; +} +function E2(t2) { + return l2(t2) ? 
{ scrollLeft: t2.scrollLeft, scrollTop: t2.scrollTop } : { scrollLeft: t2.pageXOffset, scrollTop: t2.pageYOffset }; +} +function C2(t2) { + return R(S(t2)).left + E2(t2).scrollLeft; +} +function F(t2) { + if ("html" === r2(t2)) + return t2; + const e2 = t2.assignedSlot || t2.parentNode || s2(t2) && t2.host || S(t2); + return s2(e2) ? e2.host : e2; +} +function O2(t2) { + const e2 = F(t2); + return h2(e2) ? t2.ownerDocument ? t2.ownerDocument.body : t2.body : c2(e2) && f2(e2) ? e2 : O2(e2); +} +function D2(t2, e2) { + var o3; + void 0 === e2 && (e2 = []); + const i3 = O2(t2), r3 = i3 === (null == (o3 = t2.ownerDocument) ? void 0 : o3.body), c3 = n2(i3); + return r3 ? e2.concat(c3, c3.visualViewport || [], f2(i3) ? i3 : []) : e2.concat(i3, D2(i3)); +} +function W(e2, i3, r3) { + let s3; + if ("viewport" === i3) + s3 = function(t2, e3) { + const o3 = n2(t2), i4 = S(t2), r4 = o3.visualViewport; + let c3 = i4.clientWidth, l3 = i4.clientHeight, s4 = 0, f3 = 0; + if (r4) { + c3 = r4.width, l3 = r4.height; + const t3 = d2(); + (!t3 || t3 && "fixed" === e3) && (s4 = r4.offsetLeft, f3 = r4.offsetTop); + } + return { width: c3, height: l3, x: s4, y: f3 }; + }(e2, r3); + else if ("document" === i3) + s3 = function(t2) { + const e3 = S(t2), n3 = E2(t2), i4 = t2.ownerDocument.body, r4 = m2(e3.scrollWidth, e3.clientWidth, i4.scrollWidth, i4.clientWidth), c3 = m2(e3.scrollHeight, e3.clientHeight, i4.scrollHeight, i4.clientHeight); + let l3 = -n3.scrollLeft + C2(t2); + const s4 = -n3.scrollTop; + return "rtl" === o2(i4).direction && (l3 += m2(e3.clientWidth, i4.clientWidth) - r4), { width: r4, height: c3, x: l3, y: s4 }; + }(S(e2)); + else if (l2(i3)) + s3 = function(t2, e3) { + const n3 = R(t2, true, "fixed" === e3), o3 = n3.top + t2.clientTop, i4 = n3.left + t2.clientLeft, r4 = c2(t2) ? b2(t2) : w2(1); + return { width: t2.clientWidth * r4.x, height: t2.clientHeight * r4.y, x: i4 * r4.x, y: o3 * r4.y }; + }(i3, r3); + else { + const t2 = T2(e2); + s3 = { ...i3, x: i3.x - t2.x, y: i3.y - t2.y }; + } + return s(s3); +} +function H(t2, e2) { + const n3 = F(t2); + return !(n3 === e2 || !l2(n3) || h2(n3)) && ("fixed" === o2(n3).position || H(n3, e2)); +} +function z(t2, e2) { + return c2(t2) && "fixed" !== o2(t2).position ? e2 ? e2(t2) : t2.offsetParent : null; +} +function M(t2, e2) { + const i3 = n2(t2); + if (!c2(t2)) + return i3; + let l3 = z(t2, e2); + for (; l3 && u2(l3) && "static" === o2(l3).position; ) + l3 = z(l3, e2); + return l3 && ("html" === r2(l3) || "body" === r2(l3) && "static" === o2(l3).position && !a2(l3)) ? i3 : l3 || function(t3) { + let e3 = F(t3); + for (; c2(e3) && !h2(e3); ) { + if (a2(e3)) + return e3; + e3 = F(e3); + } + return null; + }(t2) || i3; +} +function P(t2, e2, n3) { + const o3 = c2(e2), i3 = S(e2), l3 = "fixed" === n3, s3 = R(t2, true, l3, e2); + let u3 = { scrollLeft: 0, scrollTop: 0 }; + const a3 = w2(0); + if (o3 || !o3 && !l3) + if (("body" !== r2(e2) || f2(i3)) && (u3 = E2(e2)), c2(e2)) { + const t3 = R(e2, true, l3, e2); + a3.x = t3.x + e2.clientLeft, a3.y = t3.y + e2.clientTop; + } else + i3 && (a3.x = C2(i3)); + return { x: s3.left + u3.scrollLeft - a3.x, y: s3.top + u3.scrollTop - a3.y, width: s3.width, height: s3.height }; +} +var A2 = { getClippingRect: function(t2) { + let { element: e2, boundary: n3, rootBoundary: i3, strategy: c3 } = t2; + const s3 = "clippingAncestors" === n3 ? 
function(t3, e3) { + const n4 = e3.get(t3); + if (n4) + return n4; + let i4 = D2(t3).filter((t4) => l2(t4) && "body" !== r2(t4)), c4 = null; + const s4 = "fixed" === o2(t3).position; + let u4 = s4 ? F(t3) : t3; + for (; l2(u4) && !h2(u4); ) { + const e4 = o2(u4), n5 = a2(u4); + n5 || "fixed" !== e4.position || (c4 = null), (s4 ? !n5 && !c4 : !n5 && "static" === e4.position && c4 && ["absolute", "fixed"].includes(c4.position) || f2(u4) && !n5 && H(t3, u4)) ? i4 = i4.filter((t4) => t4 !== u4) : c4 = e4, u4 = F(u4); + } + return e3.set(t3, i4), i4; + }(e2, this._c) : [].concat(n3), u3 = [...s3, i3], d3 = u3[0], g3 = u3.reduce((t3, n4) => { + const o3 = W(e2, n4, c3); + return t3.top = m2(o3.top, t3.top), t3.right = p2(o3.right, t3.right), t3.bottom = p2(o3.bottom, t3.bottom), t3.left = m2(o3.left, t3.left), t3; + }, W(e2, d3, c3)); + return { width: g3.right - g3.left, height: g3.bottom - g3.top, x: g3.left, y: g3.top }; +}, convertOffsetParentRelativeRectToViewportRelativeRect: function(t2) { + let { rect: e2, offsetParent: n3, strategy: o3 } = t2; + const i3 = c2(n3), l3 = S(n3); + if (n3 === l3) + return e2; + let s3 = { scrollLeft: 0, scrollTop: 0 }, u3 = w2(1); + const a3 = w2(0); + if ((i3 || !i3 && "fixed" !== o3) && (("body" !== r2(n3) || f2(l3)) && (s3 = E2(n3)), c2(n3))) { + const t3 = R(n3); + u3 = b2(n3), a3.x = t3.x + n3.clientLeft, a3.y = t3.y + n3.clientTop; + } + return { width: e2.width * u3.x, height: e2.height * u3.y, x: e2.x * u3.x - s3.scrollLeft * u3.x + a3.x, y: e2.y * u3.y - s3.scrollTop * u3.y + a3.y }; +}, isElement: l2, getDimensions: function(t2) { + return x2(t2); +}, getOffsetParent: M, getDocumentElement: S, getScale: b2, async getElementRects(t2) { + let { reference: e2, floating: n3, strategy: o3 } = t2; + const i3 = this.getOffsetParent || M, r3 = this.getDimensions; + return { reference: P(e2, await i3(n3), o3), floating: { x: 0, y: 0, ...await r3(n3) } }; +}, getClientRects: (t2) => Array.from(t2.getClientRects()), isRTL: (t2) => "rtl" === o2(t2).direction }; +var B2 = (t2, n3, o3) => { + const i3 = /* @__PURE__ */ new Map(), r3 = { platform: A2, ...o3 }, c3 = { ...r3.platform, _c: i3 }; + return r(t2, n3, { ...r3, platform: c3 }); +}; + +// quartz/components/scripts/quartz/components/scripts/popover.inline.ts +function normalizeRelativeURLs(el, base) { + const update = (el2, attr, base2) => { + el2.setAttribute(attr, new URL(el2.getAttribute(attr), base2).pathname); + }; + el.querySelectorAll('[href^="./"], [href^="../"]').forEach((item) => update(item, "href", base)); + el.querySelectorAll('[src^="./"], [src^="../"]').forEach((item) => update(item, "src", base)); +} +var p3 = new DOMParser(); +async function mouseEnterHandler({ clientX, clientY }) { + const link = this; + async function setPosition(popoverElement2) { + const { x: x3, y: y2 } = await B2(link, popoverElement2, { + middleware: [D({ x: clientX, y: clientY }), O(), A()] + }); + Object.assign(popoverElement2.style, { + left: `${x3}px`, + top: `${y2}px` + }); + } + if ([...link.children].some((child) => child.classList.contains("popover"))) { + return setPosition(link.lastChild); + } + const thisUrl = new URL(document.location.href); + thisUrl.hash = ""; + thisUrl.search = ""; + const targetUrl = new URL(link.href); + const hash = targetUrl.hash; + targetUrl.hash = ""; + targetUrl.search = ""; + if (thisUrl.toString() === targetUrl.toString()) + return; + const contents = await fetch(`${targetUrl}`).then((res) => res.text()).catch((err) => { + console.error(err); + }); + if (!contents) + return; 
+ const html = p3.parseFromString(contents, "text/html"); + normalizeRelativeURLs(html, targetUrl); + const elts = [...html.getElementsByClassName("popover-hint")]; + if (elts.length === 0) + return; + const popoverElement = document.createElement("div"); + popoverElement.classList.add("popover"); + const popoverInner = document.createElement("div"); + popoverInner.classList.add("popover-inner"); + popoverElement.appendChild(popoverInner); + elts.forEach((elt) => popoverInner.appendChild(elt)); + setPosition(popoverElement); + link.appendChild(popoverElement); + if (hash !== "") { + const heading = popoverInner.querySelector(hash); + if (heading) { + popoverInner.scroll({ top: heading.offsetTop - 12, behavior: "instant" }); + } + } +} +document.addEventListener("nav", () => { + const links = [...document.getElementsByClassName("internal")]; + for (const link of links) { + link.removeEventListener("mouseenter", mouseEnterHandler); + link.addEventListener("mouseenter", mouseEnterHandler); + } +}); +})(); +(function () {// node_modules/plausible-tracker/build/module/lib/request.js +function sendEvent(eventName, data, options) { + const isLocalhost = /^localhost$|^127(?:\.[0-9]+){0,2}\.[0-9]+$|^(?:0*:)*?:?0*1$/.test(location.hostname) || location.protocol === "file:"; + if (!data.trackLocalhost && isLocalhost) { + return console.warn("[Plausible] Ignoring event because website is running locally"); + } + try { + if (window.localStorage.plausible_ignore === "true") { + return console.warn('[Plausible] Ignoring event because "plausible_ignore" is set to "true" in localStorage'); + } + } catch (e) { + null; + } + const payload = { + n: eventName, + u: data.url, + d: data.domain, + r: data.referrer, + w: data.deviceWidth, + h: data.hashMode ? 1 : 0, + p: options && options.props ? 
JSON.stringify(options.props) : void 0 + }; + const req = new XMLHttpRequest(); + req.open("POST", `${data.apiHost}/api/event`, true); + req.setRequestHeader("Content-Type", "text/plain"); + req.send(JSON.stringify(payload)); + req.onreadystatechange = () => { + if (req.readyState !== 4) + return; + if (options && options.callback) { + options.callback(); + } + }; +} + +// node_modules/plausible-tracker/build/module/lib/tracker.js +function Plausible(defaults) { + const getConfig = () => ({ + hashMode: false, + trackLocalhost: false, + url: location.href, + domain: location.hostname, + referrer: document.referrer || null, + deviceWidth: window.innerWidth, + apiHost: "https://plausible.io", + ...defaults + }); + const trackEvent = (eventName, options, eventData) => { + sendEvent(eventName, { ...getConfig(), ...eventData }, options); + }; + const trackPageview2 = (eventData, options) => { + trackEvent("pageview", options, eventData); + }; + const enableAutoPageviews = () => { + const page = () => trackPageview2(); + const originalPushState = history.pushState; + if (originalPushState) { + history.pushState = function(data, title, url) { + originalPushState.apply(this, [data, title, url]); + page(); + }; + addEventListener("popstate", page); + } + if (defaults && defaults.hashMode) { + addEventListener("hashchange", page); + } + trackPageview2(); + return function cleanup() { + if (originalPushState) { + history.pushState = originalPushState; + removeEventListener("popstate", page); + } + if (defaults && defaults.hashMode) { + removeEventListener("hashchange", page); + } + }; + }; + const enableAutoOutboundTracking = (targetNode = document, observerInit = { + subtree: true, + childList: true, + attributes: true, + attributeFilter: ["href"] + }) => { + function trackClick(event) { + trackEvent("Outbound Link: Click", { props: { url: this.href } }); + if (!(typeof process !== "undefined" && process && false)) { + setTimeout(() => { + location.href = this.href; + }, 150); + } + event.preventDefault(); + } + const tracked = /* @__PURE__ */ new Set(); + function addNode(node) { + if (node instanceof HTMLAnchorElement) { + if (node.host !== location.host) { + node.addEventListener("click", trackClick); + tracked.add(node); + } + } else if ("querySelectorAll" in node) { + node.querySelectorAll("a").forEach(addNode); + } + } + function removeNode(node) { + if (node instanceof HTMLAnchorElement) { + node.removeEventListener("click", trackClick); + tracked.delete(node); + } else if ("querySelectorAll" in node) { + node.querySelectorAll("a").forEach(removeNode); + } + } + const observer = new MutationObserver((mutations) => { + mutations.forEach((mutation) => { + if (mutation.type === "attributes") { + removeNode(mutation.target); + addNode(mutation.target); + } else if (mutation.type === "childList") { + mutation.addedNodes.forEach(addNode); + mutation.removedNodes.forEach(removeNode); + } + }); + }); + targetNode.querySelectorAll("a").forEach(addNode); + observer.observe(targetNode, observerInit); + return function cleanup() { + tracked.forEach((a) => { + a.removeEventListener("click", trackClick); + }); + tracked.clear(); + observer.disconnect(); + }; + }; + return { + trackEvent, + trackPageview: trackPageview2, + enableAutoPageviews, + enableAutoOutboundTracking + }; +} + +// node_modules/plausible-tracker/build/module/index.js +var module_default = Plausible; + +// quartz/components/scripts/quartz/components/scripts/plausible.inline.ts +var { trackPageview } = module_default(); 
+document.addEventListener("nav", () => trackPageview()); +})(); +(function () {// node_modules/micromorph/dist/index.js +var T = (e) => (t, r) => t[`node${e}`] === r[`node${e}`]; +var b = T("Name"); +var C = T("Type"); +var g = T("Value"); +function M(e, t) { + if (e.attributes.length === 0 && t.attributes.length === 0) + return []; + let r = [], n = /* @__PURE__ */ new Map(), o = /* @__PURE__ */ new Map(); + for (let s of e.attributes) + n.set(s.name, s.value); + for (let s of t.attributes) { + let a = n.get(s.name); + s.value === a ? n.delete(s.name) : (typeof a < "u" && n.delete(s.name), o.set(s.name, s.value)); + } + for (let s of n.keys()) + r.push({ type: 5, name: s }); + for (let [s, a] of o.entries()) + r.push({ type: 4, name: s, value: a }); + return r; +} +function N(e, t = true) { + let r = `${e.localName}`; + for (let { name: n, value: o } of e.attributes) + t && n.startsWith("data-") || (r += `[${n}=${o}]`); + return r += e.innerHTML, r; +} +function h(e) { + switch (e.tagName) { + case "BASE": + case "TITLE": + return e.localName; + case "META": { + if (e.hasAttribute("name")) + return `meta[name="${e.getAttribute("name")}"]`; + if (e.hasAttribute("property")) + return `meta[name="${e.getAttribute("property")}"]`; + break; + } + case "LINK": { + if (e.hasAttribute("rel") && e.hasAttribute("href")) + return `link[rel="${e.getAttribute("rel")}"][href="${e.getAttribute("href")}"]`; + if (e.hasAttribute("href")) + return `link[href="${e.getAttribute("href")}"]`; + break; + } + } + return N(e); +} +function x(e) { + let [t, r = ""] = e.split("?"); + return `${t}?t=${Date.now()}&${r.replace(/t=\d+/g, "")}`; +} +function c(e) { + if (e.nodeType === 1 && e.hasAttribute("data-persist")) + return e; + if (e.nodeType === 1 && e.localName === "script") { + let t = document.createElement("script"); + for (let { name: r, value: n } of e.attributes) + r === "src" && (n = x(n)), t.setAttribute(r, n); + return t.innerHTML = e.innerHTML, t; + } + return e.cloneNode(true); +} +function R(e, t) { + if (e.children.length === 0 && t.children.length === 0) + return []; + let r = [], n = /* @__PURE__ */ new Map(), o = /* @__PURE__ */ new Map(), s = /* @__PURE__ */ new Map(); + for (let a of e.children) + n.set(h(a), a); + for (let a of t.children) { + let i = h(a), u = n.get(i); + u ? N(a, false) !== N(u, false) && o.set(i, c(a)) : s.set(i, c(a)), n.delete(i); + } + for (let a of e.childNodes) { + if (a.nodeType === 1) { + let i = h(a); + if (n.has(i)) { + r.push({ type: 1 }); + continue; + } else if (o.has(i)) { + let u = o.get(i); + r.push({ type: 3, attributes: M(a, u), children: I(a, u) }); + continue; + } + } + r.push(void 0); + } + for (let a of s.values()) + r.push({ type: 0, node: c(a) }); + return r; +} +function I(e, t) { + let r = [], n = Math.max(e.childNodes.length, t.childNodes.length); + for (let o = 0; o < n; o++) { + let s = e.childNodes.item(o), a = t.childNodes.item(o); + r[o] = p(s, a); + } + return r; +} +function p(e, t) { + if (!e) + return { type: 0, node: c(t) }; + if (!t) + return { type: 1 }; + if (C(e, t)) { + if (e.nodeType === 3) { + let r = e.nodeValue, n = t.nodeValue; + if (r.trim().length === 0 && n.trim().length === 0) + return; + } + if (e.nodeType === 1) { + if (b(e, t)) { + let r = e.tagName === "HEAD" ? R : I; + return { type: 3, attributes: M(e, t), children: r(e, t) }; + } + return { type: 2, node: c(t) }; + } else + return e.nodeType === 9 ? p(e.documentElement, t.documentElement) : g(e, t) ? 
void 0 : { type: 2, value: t.nodeValue }; + } + return { type: 2, node: c(t) }; +} +function $(e, t) { + if (t.length !== 0) + for (let { type: r, name: n, value: o } of t) + r === 5 ? e.removeAttribute(n) : r === 4 && e.setAttribute(n, o); +} +async function O(e, t, r) { + if (!t) + return; + let n; + switch (e.nodeType === 9 ? (e = e.documentElement, n = e) : r ? n = r : n = e, t.type) { + case 0: { + let { node: o } = t; + e.appendChild(o); + return; + } + case 1: { + if (!n) + return; + e.removeChild(n); + return; + } + case 2: { + if (!n) + return; + let { node: o, value: s } = t; + if (typeof s == "string") { + n.nodeValue = s; + return; + } + n.replaceWith(o); + return; + } + case 3: { + if (!n) + return; + let { attributes: o, children: s } = t; + $(n, o); + let a = Array.from(n.childNodes); + await Promise.all(s.map((i, u) => O(n, i, a[u]))); + return; + } + } +} +function P(e, t) { + let r = p(e, t); + return O(e, r); +} + +// node_modules/github-slugger/index.js +var own = Object.hasOwnProperty; + +// quartz/util/path.ts +function getFullSlug(window2) { + const res = window2.document.body.dataset.slug; + return res; +} + +// quartz/components/scripts/quartz/components/scripts/spa.inline.ts +var NODE_TYPE_ELEMENT = 1; +var announcer = document.createElement("route-announcer"); +var isElement = (target) => target?.nodeType === NODE_TYPE_ELEMENT; +var isLocalUrl = (href) => { + try { + const url = new URL(href); + if (window.location.origin === url.origin) { + return true; + } + } catch (e) { + } + return false; +}; +var getOpts = ({ target }) => { + if (!isElement(target)) + return; + const a = target.closest("a"); + if (!a) + return; + if ("routerIgnore" in a.dataset) + return; + const { href } = a; + if (!isLocalUrl(href)) + return; + return { url: new URL(href), scroll: "routerNoscroll" in a.dataset ? false : void 0 }; +}; +function notifyNav(url) { + const event = new CustomEvent("nav", { detail: { url } }); + document.dispatchEvent(event); +} +var p2; +async function navigate(url, isBack = false) { + p2 = p2 || new DOMParser(); + const contents = await fetch(`${url}`).then((res) => res.text()).catch(() => { + window.location.assign(url); + }); + if (!contents) + return; + const html = p2.parseFromString(contents, "text/html"); + let title = html.querySelector("title")?.textContent; + if (title) { + document.title = title; + } else { + const h1 = document.querySelector("h1"); + title = h1?.innerText ?? h1?.textContent ?? url.pathname; + } + if (announcer.textContent !== title) { + announcer.textContent = title; + } + announcer.dataset.persist = ""; + html.body.appendChild(announcer); + P(document.body, html.body); + if (!isBack) { + if (url.hash) { + const el = document.getElementById(url.hash.substring(1)); + el?.scrollIntoView(); + } else { + window.scrollTo({ top: 0 }); + } + } + const elementsToRemove = document.head.querySelectorAll(":not([spa-preserve])"); + elementsToRemove.forEach((el) => el.remove()); + const elementsToAdd = html.head.querySelectorAll(":not([spa-preserve])"); + elementsToAdd.forEach((el) => document.head.appendChild(el)); + history.pushState({}, "", url); + notifyNav(getFullSlug(window)); + delete announcer.dataset.persist; +} +window.spaNavigate = navigate; +function createRouter() { + if (typeof window !== "undefined") { + window.addEventListener("click", async (event) => { + const { url } = getOpts(event) ?? 
{}; + if (!url) + return; + event.preventDefault(); + try { + navigate(url, false); + } catch (e) { + window.location.assign(url); + } + }); + window.addEventListener("popstate", (event) => { + const { url } = getOpts(event) ?? {}; + if (window.location.hash && window.location.pathname === url?.pathname) + return; + try { + navigate(new URL(window.location.toString()), true); + } catch (e) { + window.location.reload(); + } + return; + }); + } + return new class Router { + go(pathname) { + const url = new URL(pathname, window.location.toString()); + return navigate(url, false); + } + back() { + return window.history.back(); + } + forward() { + return window.history.forward(); + } + }(); +} +createRouter(); +notifyNav(getFullSlug(window)); +if (!customElements.get("route-announcer")) { + const attrs = { + "aria-live": "assertive", + "aria-atomic": "true", + style: "position: absolute; left: 0; top: 0; clip: rect(0 0 0 0); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px" + }; + customElements.define( + "route-announcer", + class RouteAnnouncer extends HTMLElement { + constructor() { + super(); + } + connectedCallback() { + for (const [key, value] of Object.entries(attrs)) { + this.setAttribute(key, value); + } + } + } + ); +} +})(); \ No newline at end of file diff --git a/prescript.js b/prescript.js new file mode 100644 index 000000000..6225cabab --- /dev/null +++ b/prescript.js @@ -0,0 +1,22 @@ +(function () {// quartz/components/scripts/quartz/components/scripts/darkmode.inline.ts +var userPref = window.matchMedia("(prefers-color-scheme: light)").matches ? "light" : "dark"; +var currentTheme = localStorage.getItem("theme") ?? userPref; +document.documentElement.setAttribute("saved-theme", currentTheme); +document.addEventListener("nav", () => { + const switchTheme = (e) => { + if (e.target.checked) { + document.documentElement.setAttribute("saved-theme", "dark"); + localStorage.setItem("theme", "dark"); + } else { + document.documentElement.setAttribute("saved-theme", "light"); + localStorage.setItem("theme", "light"); + } + }; + const toggleSwitch = document.querySelector("#darkmode-toggle"); + toggleSwitch.removeEventListener("change", switchTheme); + toggleSwitch.addEventListener("change", switchTheme); + if (currentTheme === "dark") { + toggleSwitch.checked = true; + } +}); +})(); \ No newline at end of file diff --git a/roadmap/acid/index.html b/roadmap/acid/index.html new file mode 100644 index 000000000..8b9a8892d --- /dev/null +++ b/roadmap/acid/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/acid

1 items under this folder.

\ No newline at end of file diff --git a/roadmap/acid/milestones-overview.html b/roadmap/acid/milestones-overview.html new file mode 100644 index 000000000..b4b0bf465 --- /dev/null +++ b/roadmap/acid/milestones-overview.html @@ -0,0 +1,66 @@ + +Comms Milestones Overview
\ No newline at end of file diff --git a/roadmap/acid/milestones-overview/index.html b/roadmap/acid/milestones-overview/index.html deleted file mode 100644 index 6d0e3a7f4..000000000 --- a/roadmap/acid/milestones-overview/index.html +++ /dev/null @@ -1,373 +0,0 @@ - - - - - - - - Comms Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/acid/updates/2023-08-02.html b/roadmap/acid/updates/2023-08-02.html new file mode 100644 index 000000000..9f61963f9 --- /dev/null +++ b/roadmap/acid/updates/2023-08-02.html @@ -0,0 +1,100 @@ + +2023-08-02 Acid weekly

Leads roundup - acid

+

Al / Comms

+
    +
  • Status app relaunch comms campaign plan in the works. Approx. date for launch 31.08.
  • +
  • Logos comms + growth plan post launch is next up TBD.
  • +
  • Will be waiting for specs for data room, raise etc.
  • +
  • Hires: split the role for content studio to be more realistic in getting top level talent.
  • +
+

Matt / Copy

+
    +
  • Initiative to update old documentation, like the CC guide, to reflect the broader scope of the BUs
  • +
  • Brand guidelines / modes of presentation are in progress
  • +
  • Wikipedia entry on network states and virtual states is live on
  • +
+

Eddy / Digital Comms

+
    +
  • Logos Discord will be completed by EOD.
  • +
  • Codex Discord will be done tomorrow.
  • +
  • LPE rollout plan, currently working on it, will be ready EOW
  • +
  • Podcast rollout needs some
  • +
  • Overarching BU plan will be ready in next couple of weeks as things on top have taken priority.
  • +
+

Amir / Studio

+
    +
  • Started execution of LPE for new requirements, broken down into smaller deliveries. Looking to have it working and live by EOM.
  • +
  • Hires: still looking for 3 positions with main focus on developer side.
  • +
+

Jonny / Podcast

+
    +
  • Podcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.
  • +
  • First HiO episode is in the works. Will be ready in 2 weeks to fit in the rollout of the LPE.
  • +
+

Louisa / Events

+
    +
  • Global strategy paper for wider comms plan.
  • +
  • Template for processes and executions when preparing events.
  • +
  • Decision made with Carl to move the Network State event to November as a satellite of other events. Looking into ETH Lisbon / Staking Summit etc.
  • +
  • Seoul Q4 hackathon is already in the works. Needs bounty planning.
  • +
\ No newline at end of file diff --git a/roadmap/acid/updates/2023-08-02/index.html b/roadmap/acid/updates/2023-08-02/index.html deleted file mode 100644 index c287f3d60..000000000 --- a/roadmap/acid/updates/2023-08-02/index.html +++ /dev/null @@ -1,403 +0,0 @@ - - - - - - - - 2023-08-02 Acid weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/acid/updates/2023-08-09.html b/roadmap/acid/updates/2023-08-09.html new file mode 100644 index 000000000..9086490a4 --- /dev/null +++ b/roadmap/acid/updates/2023-08-09.html @@ -0,0 +1,125 @@ + +2023-08-09 Acid weekly

Top level priorities:

+

Logos Growth Plan +Status Relaunch +Launch of LPE +Podcasts (Target: Every week one podcast out) +Hiring: TD studio and DC studio roles

+

Movement Building:

+
    +
  • Logos collective comms plan skeleton ready - will be applied for all BUs as next step
  • +
  • Goal is to have plan + overview to set realistic KPIs and expectations
  • +
  • Discord Server update on various views
  • +
  • Status relaunch comms plan is ready for input from John et al.
  • +
  • Reach out to BUs for needs and deliverables
  • +
+

TD Studio

+

Full focus on LPE:

+
    +
  • On track, target of end of August
  • +
  • Review of options, more diverse landscape of content
  • +
  • Episodes page proposals
  • +
  • Players in progress
  • +
  • Refactoring from the previous code base
  • +
  • Structure of content ready in GDrive
  • +
+

Copy

+
    +
  • Content around LPE
  • +
  • Content for podcast launches
  • +
  • Status launch - content requirements to receive
  • +
  • Organization of doc sites review
  • +
  • TBD what type of content and how the generation workflows will look like
  • +
+

Podcast

+
    +
  • Good state in editing and producing the shows
  • +
  • First interview edited end to end with XMTP is ready. 2 weeks with social assets and all included.
  • +
  • LSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.
  • +
  • 3 recorded for HIO, motion graphics in progress
  • +
  • First E2E podcast ready in 2 weeks for LPE
  • +
+

DC Studio

+
    +
  • Brand guidelines for HiO are ready and set. Thanks Shmeda!
  • +
  • Logos State branding assets are being developed
  • +
  • Presentation templates update
  • +
+

Events

+
    +
  • Network State event probably in Istanbul in November, alongside Devconnect; will confirm shortly.
  • +
  • Program elements and speakers are top priority
  • +
  • Hackathon in Seoul in Q1 2024 - late February probably
  • +
  • Jarrad will be speaking at HCPP and EthRome
  • +
  • Global event strategy written and in review
  • +
  • Lou presented social media and event KPIs for the Paris event
  • +
+

CRM & Marketing tool

+
    +
  • Get feedback from stakeholders and users
  • +
  • PM implementation to be planned (± 3 months, TBD) with the working group
  • +
  • LPE KPI: Collecting email addresses of relevant people
  • +
  • Careful about how we manage and use data; important for BizDev
  • +
  • Careful about which segments of the project to manage using the CRM, as it can be very off-brand
  • +
\ No newline at end of file diff --git a/roadmap/acid/updates/2023-08-09/index.html b/roadmap/acid/updates/2023-08-09/index.html deleted file mode 100644 index c907a85a7..000000000 --- a/roadmap/acid/updates/2023-08-09/index.html +++ /dev/null @@ -1,434 +0,0 @@ - - - - - - - - 2023-08-09 Acid weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/acid/updates/index.html b/roadmap/acid/updates/index.html new file mode 100644 index 000000000..32e0345f8 --- /dev/null +++ b/roadmap/acid/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/acid/updates

2 items under this folder.

\ No newline at end of file diff --git a/roadmap/codex/index.html b/roadmap/codex/index.html new file mode 100644 index 000000000..9ebd33d27 --- /dev/null +++ b/roadmap/codex/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/codex

1 items under this folder.

\ No newline at end of file diff --git a/roadmap/codex/milestones-overview.html b/roadmap/codex/milestones-overview.html new file mode 100644 index 000000000..e9ef51bf8 --- /dev/null +++ b/roadmap/codex/milestones-overview.html @@ -0,0 +1,66 @@ + +Codex Milestones Overview
\ No newline at end of file diff --git a/roadmap/codex/milestones-overview/index.html b/roadmap/codex/milestones-overview/index.html deleted file mode 100644 index bdb341259..000000000 --- a/roadmap/codex/milestones-overview/index.html +++ /dev/null @@ -1,371 +0,0 @@ - - - - - - - - Codex Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/codex/updates/2023-07-21.html b/roadmap/codex/updates/2023-07-21.html new file mode 100644 index 000000000..691ee39fb --- /dev/null +++ b/roadmap/codex/updates/2023-07-21.html @@ -0,0 +1,353 @@ + +2023-07-21 Codex weekly

Codex update 07/12/2023 to 07/21/2023

+

Overall, we continue working in various directions: distributed testing, marketplace, p2p client, research, etc.

+

Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by the end of the year. A lot of grunt work is being done to make that possible. Progress is steady, but there is a lot of stabilization, testing, and infra-related work going on.

+

We’re also onboarding several new members to the team (4 to be precise); this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.

+

DevOps/Infrastructure:

+
    +
  • Adopted nim-codex Docker builds for Dist Tests.
  • +
  • Ordered Dedicated node on Hetzner.
  • +
  • Configured Hetzner StorageBox for local backup on Dedicated server.
  • +
  • Configured new Logs shipper and Grafana in Dist-Tests cluster.
  • +
  • Created Geth and Prometheus Docker images for Dist-Tests.
  • +
  • Created a separate codex-contracts-eth Docker image for Dist-Tests.
  • +
  • Set up Ingress Controller in Dist-Tests cluster.
  • +
+

Testing:

+
    +
  • Set up deployer to gather metrics.
  • +
  • Debugging and identifying potential deadlock in the Codex client.
  • +
  • Added metrics, built image, and ran tests.
  • +
  • Updated dist-test log for Kibana compatibility.
  • +
  • Ran dist-tests on a new master image.
  • +
  • Debugging continuous tests.
  • +
+

Development:

+
    +
  • Worked on codex-dht nimble updates and fixing key format issue.
  • +
  • Updated CI and split Windows CI tests to run on two CI machines.
  • +
  • Continued updating dependencies in codex-dht.
  • +
  • Fixed decoding large manifests (PR #479).
  • +
  • Explored the existing implementation of NAT Traversal techniques in nim-libp2p.
  • +
+

Research

+
    +
  • Exploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives + +
  • +
  • Onboarding Balázs as our ZK researcher/engineer
  • +
  • Continued research in DAS related topics +
      +
    • Running simulation on newly setup infrastructure
    • +
    +
  • +
  • Devised a new direction to reduce metadata overhead and enable remote verification metadata-overhead.md
  • +
  • Looked into NAT Traversal (issue #166).
  • +
+

Cross-functional (Combination of DevOps/Testing/Development):

+
    +
  • Fixed discovery related issues.
  • +
  • Planned Codex Demo update for the Logos event and prepared environment for the demo.
  • +
  • Described requirements for Dist Tests logs format.
  • +
  • Configured new Logs shipper and Grafana in Dist-Tests cluster.
  • +
  • Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.
  • +
  • Hetzner Dedicated server was configured.
  • +
  • Set up Hetzner StorageBox for local backup on Dedicated server.
  • +
  • Configured new Logs shipper in Dist-Tests cluster.
  • +
  • Setup Grafana in Dist-Tests cluster.
  • +
  • Created a separate codex-contracts-eth Docker image for Dist-Tests.
  • +
  • Setup Ingress Controller in Dist-Tests cluster.
  • +
+
+

Conversations

+
    +
  1. zk_id 07/24/2023 11:59 AM
  2. +
+
+

We’ve explored VDI for rollups ourselves in the last week, curious to know your thoughts

+
+
    +
  1. dryajov 07/25/2023 1:28 PM
  2. +
+
+

It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it’s definitely worth digging into. But I’m not sure what exactly you’re interested in, in the context of rollups…

+
+
    +
  1. +

    zk_id 07/25/2023 3:28 PM

    +

    The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.

    +
  2. +
  3. +

    dryajov 07/25/2023 8:31 PM

    +
    +

    I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal.

    +
    +

    Yeah, great question. What follows is strictly IMO, as I haven’t seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.

    +
      +
    • (A)VID - dispersing and storing data in a verifiable manner
    • +
    • Sampling - verifying already dispersed data
    • +
    +

    tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked into following an incorrect chain by a dishonest validator majority that withholds data. More details are here - https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network from following the chain?

    +

    Dankrad Feist

    +

    Data availability checks

    +

    Primer on data availability checks

    +
  4. +
  5. +

    [8:31 PM]

    +

    Dealing with dishonest majorities

    +

    This is easy if all the data is downloaded by all nodes all the time, but we’re trying to avoid just that. But let’s assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can’t, because proving data (un)availability isn’t a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here: A-note-on-data-availability-and-erasure-coding. So, if there isn’t much that can be done by detecting that a block isn’t available, what good is it for? Well, nodes can still avoid following the unavailable chain and thus avoid being tricked by a dishonest majority. However, simply attesting that data has been published is not enough to prevent a dishonest majority from attacking the network. (edited)

    +
  6. +
  7. +

    dryajov 07/25/2023 9:06 PM

    +

    To complement, the relevant quote from A-note-on-data-availability-and-erasure-coding is:

    +
    +

    Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (“fisherman”) has the ability to “raise the alarm” about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.

    +
    +

    The relevant quote from https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html is:

    +
    +

    There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.

    +
    +

    Both articles are a bit old, but the intuitions still hold.

    +
  8. +
+
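A minimal back-of-the-envelope sketch of the sampling argument above (an illustrative example added here, not Codex or Nomos code): with a k-of-n erasure-coded block, an adversary who wants to keep the block unreconstructable must withhold at least n - k + 1 shares, so each uniformly random sample hits a withheld share with probability at least (n - k + 1)/n, and the chance that a light node misses the withholding shrinks exponentially with the number of samples.

# Editor's sketch (not Codex/Nomos code): probability that a light client fails
# to notice data withholding after `samples` random queries, assuming a k-of-n
# erasure code and an adversary withholding the minimum n - k + 1 shares.

def miss_probability(k: int, n: int, samples: int) -> float:
    # Each query lands on a still-available share with probability at most (k - 1) / n,
    # so missing the withholding `samples` times in a row is bounded by this power.
    return ((k - 1) / n) ** samples

if __name__ == "__main__":
    k, n = 128, 256  # 2x Reed-Solomon extension, as in the Ethereum-style design
    for s in (1, 10, 20, 30):
        print(s, miss_probability(k, n, s))
    # roughly 0.5, 9e-4, 8e-7, 7e-10: a few dozen samples per client make
    # undetected withholding overwhelmingly unlikely.

This is the light-client protection described above: availability can be checked probabilistically without downloading the whole block.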

July 26, 2023

+
    +
  1. +

    zk_id 07/26/2023 10:42 AM

    +

    Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.

    +
  2. +
  3. +

    [10:45 AM]

    +

    The dishonest majority is a critical scenario for Nomos (an essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs

    +
  4. +
  5. +

    zk_id

    +

    Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.

    +

    dryajov 07/26/2023 4:42 PM

    +

    Great! Glad to help anytime

    +
  6. +
  7. +

    zk_id

    +

    The dishonest majority is a critical scenario for Nomos (an essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs

    +

    dryajov 07/26/2023 4:43 PM

    +

    Yes, I’d argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.

    +
  8. +
  9. +

    [4:46 PM]

    +

    Btw, there is probably more we can share/compare notes on in this problem space, we’re looking at similar things, perhaps from a slightly different perspective in Codex’s case, but the work done on DAS with the EF directly is probably very relevant for you as well

    +
  10. +
+

July 27, 2023

+
    +
  1. +

    zk_id 07/27/2023 3:05 AM

    +

    I would love to. Do you have those notes somewhere?

    +
  2. +
  3. +

    zk_id 07/27/2023 4:01 AM

    +

    all the links you have, anything, would be useful

    +
  4. +
  5. +

    zk_id

    +

    I would love to. Do you have those notes somewhere?

    +

    dryajov 07/27/2023 4:50 PM

    +

    A bit scattered all over the place, mainly from @Leobago and @cskiraly. @cskiraly has a draft paper somewhere

    +
  6. +
+

July 28, 2023

+
    +
  1. +

    zk_id 07/28/2023 5:47 AM

    +

    Would love to see anything that is possible

    +
  2. +
  3. +

    [5:47 AM]

    +

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    +
  4. +
  5. +

    zk_id

    +

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    +

    dryajov 07/28/2023 4:07 PM

    +

    Yes, we’re also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.

    +
  6. +
  7. +

    zk_id

    +

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    +

    bkomuves 07/28/2023 4:44 PM

    +

    my current view (it’s changing pretty often :) is that there is tension between:

    +
      +
    • commitment cost
    • +
    • proof cost
    • +
    • and verification cost
    • +
    +

    the holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions)

    +
  8. +
+

July 29, 2023

+
    +
  1. +

    bkomuves

    +

    my current view (it’s changing pretty often :) is that there is tension between: 

    +
      +
    • commitment cost
    • +
    • proof cost
    • +
    • and verification cost
    • +
    +

     the holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions)

    +

    zk_id 07/29/2023 4:23 AM

    +

    I agree. That’s also my understanding (although surely much more superficial).

    +
  2. +
  3. +

    [4:24 AM]

    +

    There is also the dimension of computation vs size cost

    +
  4. +
  5. +

    [4:25 AM]

    +

    i.e. the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity, which makes it lose the properties we are looking for after 1k nodes. We need to scale comfortably to 10k nodes.

    +
  6. +
  7. +

    [4:29 AM]

    +

    So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:

    +
      +
    • Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).
    • +
    • If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option.
    • +
    +
  8. +
+
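As a companion to the 2D Reed-Solomon design sketched above, here is a toy illustration (an editor's example under simplifying assumptions: a small prime field and Lagrange interpolation stand in for the real field and KZG machinery). Extending each row of a k x k data block and then each column of the result gives an n x n grid in which every row and every column is itself a Reed-Solomon codeword, which is what allows rows/columns to be reconstructed and proven separately.

# Editor's toy sketch of the 2D Reed-Solomon extension (not production code).
p = 65537  # toy prime field; real deployments use the scalar field of a pairing-friendly curve

def lagrange_eval(xs, ys, x):
    # Evaluate the unique polynomial of degree < len(xs) through (xs, ys) at x, mod p.
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def extend(vals, n):
    # Interpret vals as evaluations at 0..k-1 and return evaluations at 0..n-1.
    xs = list(range(len(vals)))
    return [lagrange_eval(xs, vals, x) for x in range(n)]

def extend_2d(block, n):
    # Extend a k x k block to n x n: rows first, then the columns of the widened rows.
    rows = [extend(row, n) for row in block]
    cols = [extend(list(col), n) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

data = [[1, 2], [3, 4]]    # k = 2 data block
grid = extend_2d(data, 4)  # 4 x 4 extended grid
# Any 2 cells of a row (or column) of `grid` determine the rest of that row (or
# column), so dispersal, reconstruction, and (with KZG) proofs can be done per
# row/column rather than over the whole block.

In the design discussed above, each row/column would additionally be committed to with KZG so that individual cells can be verified against a constant-size commitment; the sketch only shows the coding part.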

August 1, 2023

+
    +
  1. +

    dryajov

    +

    A bit scattered all over the place, mainly from @Leobago and @cskiraly. @cskiraly has a draft paper somewhere

    +

    Leobago 08/01/2023 1:13 PM

    +

    Not many public write-ups yet. You can find some content here:

    + +

    We also have a few Jupyter notebooks but they are not public yet. As soon as that content is out we can let you know 🙂

    +

    Codex Storage Blog

    +

    Data Availability Sampling

    +

    The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until

    +

    GitHub

    +

    das-research: This repository hosts all the …

    +

    This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora…

    +

    +

    GitHub - codex-storage/das-research: This repository hosts all the ...

    +
  2. +
  3. +

    zk_id

    +

    So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: 

    +
      +
    • Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).
    • +
    • If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option.
    • +
    +

    dryajov 08/01/2023 1:55 PM

    +

    This might interest you as well - combining-kzg-and-erasure-coding-fc903dc78f1a

    +

    Medium

    +

    Combining KZG and erasure coding

    +

    The Hitchhiker’s Guide to Subspace  — Episode II

    +

    +

    Combining KZG and erasure coding

    +
  4. +
  5. +

    [1:56 PM]

    +

    This is a great analysis of the current state of the art in the structure of data + commitment and the interplay. I would also recommend reading the first article of the series, which it also links to.

    +
  6. +
  7. +

    zk_id 08/01/2023 3:04 PM

    +

    Thanks @dryajov @Leobago ! Much appreciated!

    +
  8. +
  9. +

    [3:05 PM]

    +

    Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I’m tackling starting today…

    +
  10. +
  11. +

    zk_id 08/01/2023 6:34 PM

    +

    @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way I can visualize the results?

    +
  12. +
  13. +

    zk_id

    +

    @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way I can visualize the results?

    +

    Leobago 08/01/2023 6:36 PM

    +

    Yes, check out the visual branch and make sure to enable plotting in the config file; it should produce a bunch of figures 🙂

    +
  14. +
  15. +

    [6:37 PM]

    +

    You might also find some bugs here and there on that branch 😅

    +
  16. +
  17. +

    zk_id 08/01/2023 7:44 PM

    +

    Thanks!

    +
  18. +
\ No newline at end of file diff --git a/roadmap/codex/updates/2023-07-21/index.html b/roadmap/codex/updates/2023-07-21/index.html deleted file mode 100644 index 2a60b270e..000000000 --- a/roadmap/codex/updates/2023-07-21/index.html +++ /dev/null @@ -1,750 +0,0 @@ - - - - - - - - 2023-07-21 Codex weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

2023-07-21 Codex weekly

-

- Last updated -Jul 21, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Codex update 07/12/2023 to 07/21/2023

-

Overall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc…

-

Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. Progress is steady, but there are lots of stabilization and testing & infra related work going on.

-

We’re also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.

-

# DevOps/Infrastructure:

-
    -
  • Adopted nim-codex Docker builds for Dist Tests.
  • -
  • Ordered Dedicated node on Hetzner.
  • -
  • Configured Hetzner StorageBox for local backup on Dedicated server.
  • -
  • Configured new Logs shipper and Grafana in Dist-Tests cluster.
  • -
  • Created Geth and Prometheus Docker images for Dist-Tests.
  • -
  • Created a separate codex-contracts-eth Docker image for Dist-Tests.
  • -
  • Set up Ingress Controller in Dist-Tests cluster.
  • -
-

# Testing:

-
    -
  • Set up deployer to gather metrics.
  • -
  • Debugging and identifying potential deadlock in the Codex client.
  • -
  • Added metrics, built image, and ran tests.
  • -
  • Updated dist-test log for Kibana compatibility.
  • -
  • Ran dist-tests on a new master image.
  • -
  • Debugging continuous tests.
  • -
-

# Development:

-
    -
  • Worked on codex-dht nimble updates and fixing key format issue.
  • -
  • Updated CI and split Windows CI tests to run on two CI machines.
  • -
  • Continued updating dependencies in codex-dht.
  • -
  • Fixed decoding large manifests ( - -PR #479).
  • -
  • Explored the existing implementation of NAT Traversal techniques in nim-libp2p.
  • -
-

# Research

- -

# Cross-functional (Combination of DevOps/Testing/Development):

-
    -
  • Fixed discovery related issues.
  • -
  • Planned Codex Demo update for the Logos event and prepared environment for the demo.
  • -
  • Described requirements for Dist Tests logs format.
  • -
  • Configured new Logs shipper and Grafana in Dist-Tests cluster.
  • -
  • Dist Tests logs adoption requirements - Updated log format for Kibana compatibility.
  • -
  • Hetzner Dedicated server was configured.
  • -
  • Set up Hetzner StorageBox for local backup on Dedicated server.
  • -
  • Configured new Logs shipper in Dist-Tests cluster.
  • -
  • Setup Grafana in Dist-Tests cluster.
  • -
  • Created a separate codex-contracts-eth Docker image for Dist-Tests.
  • -
  • Setup Ingress Controller in Dist-Tests cluster.
  • -
-
-

# Conversations

-
    -
  1. zk_id 07/24/2023 11:59 AM
  2. -
-
-

We’ve explored VDI for rollups ourselves in the last week, curious to know your thoughts

-
-
    -
  1. dryajov 07/25/2023 1:28 PM
  2. -
-
-

It depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it’s definitely worth digging into. But I’m not sure what exactly you’re interested in, in the context of rollups…

-
-
    -
  1. -

    zk_id 07/25/2023 3:28 PM

    -

    The part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.

    -
  2. -
  3. -

    dryajov 07/25/2023 8:31 PM

    -
    -

    I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal.

    -
    -

    Yeah, great question. What follows is strictly IMO, as I haven’t seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.

    -
      -
    • (A)VID - dispersing and storing data in a verifiable manner
    • -
    • Sampling - verifying already dispersed data
    • -
    -

    tl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked into following an incorrect chain by a dishonest validator majority that withholds data. More details are here - - -https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accept, then the block is considered correct; otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network into following the chain?

    -

    Dankrad Feist

    -

    - -Data availability checks

    -

    Primer on data availability checks

    -
  4. -
  5. -

    [8:31 PM]

    -

    # Dealing with dishonest majorities

    -

    This is easy if all the data is downloaded by all nodes all the time, but we’re trying to avoid just that. But let’s assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data: can this mitigate the problem? It turns out that it can’t, because proving data (un)availability isn’t a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here - -https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding So, if there isn’t much that can be done by detecting that a block isn’t available, what good is it for? Well, nodes can still avoid following the unavailable chain and thus avoid being tricked by a dishonest majority. However, simply attesting that data has been published is not enough to prevent a dishonest majority from attacking the network. (edited)

    -
  6. -
  7. -

    dryajov 07/25/2023 9:06 PM

    -

    To complement, the relevant quote from - -https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding, is:

    -
    -

    Here, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (“fisherman”) has the ability to “raise the alarm” about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.

    -
    -

    The relevant quote from - -https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html, is:

    -
    -

    There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.

    -
    -

    Both articles are a bit old, but the intuitions still hold. (A rough numerical sketch of the sampling argument follows below.)

    -
  8. -
-
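To put rough numbers on the sampling argument above, here is a minimal, self-contained sketch. It assumes a rate-1/2 erasure code and uniformly random, independent samples; it is not Codex or EF code, and the chunk and sample counts are illustrative only.

```python
import random

def withholding_detection_probability(total_chunks: int,
                                      withheld_fraction: float,
                                      samples_per_client: int,
                                      trials: int = 50_000) -> float:
    """Monte Carlo estimate of the chance that a single light client,
    sampling chunks uniformly at random, requests at least one withheld chunk.

    Toy model: chunks are sampled with replacement, and any requested chunk
    that is withheld is assumed to be detected as missing.
    """
    withheld = int(total_chunks * withheld_fraction)
    detected = 0
    for _ in range(trials):
        if any(random.randrange(total_chunks) < withheld
               for _ in range(samples_per_client)):
            detected += 1
    return detected / trials

if __name__ == "__main__":
    # With a rate-1/2 erasure code, an adversary must withhold more than half
    # of the extended block to make it unrecoverable, so each sample misses
    # the withholding with probability < 1/2 and k samples all miss it with
    # probability < 2**-k.
    for k in (5, 10, 20):
        p = withholding_detection_probability(512, 0.5, k)
        print(f"{k:2d} samples -> detection probability ~ {p:.4f} "
              f"(analytic lower bound {1 - 0.5**k:.4f})")
```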

July 26, 2023

-
    -
  1. -

    zk_id 07/26/2023 10:42 AM

    -

    Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for, is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this; it is very valuable.

    -
  2. -
  3. -

    [10:45 AM]

    -

    The dishonest majority is a critical scenario for Nomos (an essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs

    -
  4. -
  5. -

    zk_id

    -

    Thanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.

    -

    dryajov 07/26/2023 4:42 PM

    -

    Great! Glad to help anytime

    -
  6. -
  7. -

    zk_id

    -

    The dishonest majority is a critical scenario for Nomos (an essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs

    -

    dryajov 07/26/2023 4:43 PM

    -

    Yes, I’d argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.

    -
  8. -
  9. -

    [4:46 PM]

    -

    Btw, there is probably more we can share/compare notes on in this problem space, we’re looking at similar things, perhaps from a slightly different perspective in Codex’s case, but the work done on DAS with the EF directly is probably very relevant for you as well

    -
  10. -
-

July 27, 2023

-
    -
  1. -

    zk_id 07/27/2023 3:05 AM

    -

    I would love to. Do you have those notes somewhere?

    -
  2. -
  3. -

    zk_id 07/27/2023 4:01 AM

    -

    all the links you have, anything, would be useful

    -
  4. -
  5. -

    zk_id

    -

    I would love to. Do you have those notes somewhere?

    -

    dryajov 07/27/2023 4:50 PM

    -

    A bit scattered all over the place, mainly from @Leobago and @cskiraly; @cskiraly has a draft paper somewhere

    -
  6. -
-

July 28, 2023

-
    -
  1. -

    zk_id 07/28/2023 5:47 AM

    -

    Would love to see anything that is possible

    -
  2. -
  3. -

    [5:47 AM]

    -

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    -
  4. -
  5. -

    zk_id

    -

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    -

    dryajov 07/28/2023 4:07 PM

    -

    Yes, we’re also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.

    -
  6. -
  7. -

    zk_id

    -

    Our setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us

    -

    bkomuves 07/28/2023 4:44 PM

    -

    my current view (it’s changing pretty often :) is that there is tension between:

    -
      -
    • commitment cost
    • -
    • proof cost
    • -
    • and verification cost
    • -
    -

    the holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions) (A back-of-envelope comparison of two common commitment schemes is sketched below.)

    -
  8. -
-
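As a back-of-envelope illustration of the trade-off space described above, the sketch below prints rough asymptotic shapes and typical sizes (32-byte hashes, 48-byte BLS12-381 G1 points) for two common commitment schemes. These are generic textbook figures, not measurements of any particular implementation and not a statement about what Codex or Nomos will pick.

```python
import math

def merkle_costs(n: int) -> dict:
    """Merkle tree over n chunks: cheap to commit, logarithmic proofs."""
    depth = math.ceil(math.log2(n))
    return {
        "commit":     f"~{2 * n} hashes",
        "proof_size": f"~{32 * depth} B per opening",
        "verify":     f"{depth} hashes per opening",
    }

def kzg_costs(n: int) -> dict:
    """KZG over a degree-(n-1) polynomial: costlier to commit, constant proofs."""
    return {
        "commit":     f"1 multi-scalar multiplication of size {n} (group ops)",
        "proof_size": "48 B per opening (constant)",
        "verify":     "1 pairing check per opening",
    }

if __name__ == "__main__":
    n = 4096  # illustrative number of field elements / chunks in one blob
    for name, costs in (("Merkle", merkle_costs(n)), ("KZG", kzg_costs(n))):
        print(name, costs)
```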

July 29, 2023

-
    -
  1. -

    bkomuves

    -

    my current view (it’s changing pretty often :) is that there is tension between: 

    -
      -
    • commitment cost
    • -
    • proof cost
    • -
    • and verification cost
    • -
    -

     the holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions)

    -

    zk_id 07/29/2023 4:23 AM

    -

    I agree. That’s also my understanding (although surely much more superficial).

    -
  2. -
  3. -

    [4:24 AM]

    -

    There is also the dimension of computation vs size cost

    -
  4. -
  5. -

    [4:25 AM]

    -

    i.e. the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity, which makes it lose the properties we are looking for after 1k nodes. We need to scale comfortably to 10k nodes.

    -
  6. -
  7. -

    [4:29 AM]

    -

    So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:

    -
      -
    • Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).
    • -
    • If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option. (A toy sketch of the row/column layout follows below.)
    • -
    -
  8. -
-
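As a toy illustration of the 2D layout mentioned above: the sketch below extends every row with a Reed-Solomon-style code and then extends every column of the result, so each extended row and each extended column is an independent codeword. That is why a missing cell can be repaired from enough surviving cells in its row or its column, and why commitments can be produced per row. This is not Ethereum's or Codex's actual construction; the prime field, the hash-based commitment stand-in, and the helper names are all invented for the example.

```python
import hashlib

P = 2**61 - 1  # toy prime field, not a pairing-friendly curve

def eval_poly(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def rs_extend(symbols, m):
    """Reed-Solomon-style extension: view `symbols` as polynomial coefficients
    and publish the polynomial's evaluations at m > len(symbols) points.
    Any len(symbols) of the m evaluations determine the row by interpolation."""
    return [eval_poly(symbols, x) for x in range(1, m + 1)]

def commit(row):
    """Stand-in for a real polynomial commitment: here just a hash of the row."""
    data = b"".join(s.to_bytes(8, "big") for s in row)
    return hashlib.sha256(data).hexdigest()[:16]

if __name__ == "__main__":
    k = 4                                                     # data symbols per row
    block = [[r * k + c + 1 for c in range(k)] for r in range(k)]  # k x k data
    rows = [rs_extend(row, 2 * k) for row in block]           # extend every row
    cols = [rs_extend([rows[r][c] for r in range(k)], 2 * k)
            for c in range(2 * k)]                            # extend every column
    print("per-row commitments:", [commit(r) for r in rows])
    # Each extended row and column is a codeword of a degree < k polynomial,
    # so any k surviving cells in a row (or column) suffice to rebuild it.
```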

August 1, 2023

-
    -
  1. -

    dryajov

    -

    A bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere

    -

    Leobago 08/01/2023 1:13 PM

    -

    Not many public write-ups yet. You can find some content here:

    - -

    We also have a few Jupyter notebooks but they are not public yet. As soon as that content is out we can let you know - -🙂

    -

    Codex Storage Blog

    -

    - -Data Availability Sampling

    -

    The Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until

    -

    GitHub

    -

    - -GitHub - codex-storage/das-research: This repository hosts all the …

    -

    This repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora…

    -


    -
  2. -
  3. -

    zk_id

    -

    So we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: 

    -
      -
    • Our rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).
    • -
    • If we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option.
    • -
    -

    dryajov 08/01/2023 1:55 PM

    -

    This might interest you as well - - -https://blog.subspace.network/combining-kzg-and-erasure-coding-fc903dc78f1a

    -

    Medium

    -

    - -Combining KZG and erasure coding

    -

    The Hitchhiker’s Guide to Subspace  — Episode II

    -


    -
  4. -
  5. -

    [1:56 PM]

    -

    This is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recommend reading the first article of the series, which it also links to

    -
  6. -
  7. -

    zk_id 08/01/2023 3:04 PM

    -

    Thanks @dryajov @Leobago ! Much appreciated!

    -
  8. -
  9. -

    [3:05 PM]

    -

    Very glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I’m tackling starting today…

    -
  10. -
  11. -

    zk_id 08/01/2023 6:34 PM

    -

    @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way for me to visualize the results?

    -
  12. -
  13. -

    zk_id

    -

    @Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way for me to visualize the results?

    -

    Leobago 08/01/2023 6:36 PM

    -

    Yes, check out the visual branch and make sure to enable plotting in the config file; it should produce a bunch of figures - -🙂

    -
  14. -
  15. -

    [6:37 PM]

    -

    You might also find some bugs here and there on that branch - -😅

    -
  16. -
  17. -

    zk_id 08/01/2023 7:44 PM

    -

    Thanks!

    -
  18. -
- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/codex/updates/2023-08-01.html b/roadmap/codex/updates/2023-08-01.html new file mode 100644 index 000000000..02a6fb2e1 --- /dev/null +++ b/roadmap/codex/updates/2023-08-01.html @@ -0,0 +1,126 @@ + +2023-08-01 Codex weekly

Codex update Aug 1st

+

Client

+

Milestone: Merkelizing block data

+
    +
  • Initial design writeup metadata-overhead.md +
      +
      • Work breakdown and review for Ben and Tomasz (epic coming up)
    • +
    • This is required to integrate the proving system
    • +
    +
  • +
+

Milestone: Block discovery and retrieval

+
    +
  • Some initial work break down and milestones here - edit + +
  • +
+

Milestone: Distributed Client Testing

+
    +
  • Lots of work around log collection/analysis and monitoring +
      +
    • Details here 41
    • +
    +
  • +
+

Marketplace

+

Milestone: L2

+
    +
  • Taiko L2 integration +
      +
    • This is a first try of running against an L2
    • +
    • Mostly done, waiting on related fixes to land before merge - 483
    • +
    +
  • +
+

Milestone: Reservations and slot management

+
    +
  • Lots of work around slot reservation and queuing 455
  • +
+

Remote auditing

+

Milestone: Implement Poseidon2

+
    +
  • First pass at an implementation by Balazs +
      +
    • private repo, but can give access if anyone is interested
    • +
    +
  • +
+

Milestone: Refine proving system

+
    +
  • Lots of thinking around storage proofs and proving systems +
      +
    • private repo, but can give access if anyone is interested
    • +
    +
  • +
+

DAS

+

Milestone: DHT simulations

+
    +
  • Implementing a DHT in Python for the DAS simulator.
  • +
  • Implemented logical error-rates and delays for interactions between DHT clients (a toy sketch of such a model follows below).
  • +
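The bullets above mention adding logical error-rates and delays to DHT client interactions. The following is a toy sketch of what such a model can look like; it is not the py-dht API, and the parameters, delay distribution, and function name are invented for the illustration.

```python
import random
from typing import Optional

def simulate_lookup(hops: int = 5,
                    error_rate: float = 0.05,
                    mean_delay_ms: float = 50.0,
                    max_retries: int = 3) -> Optional[float]:
    """Total latency (ms) of a lookup made of `hops` sequential request/response
    steps, or None if some step still fails after `max_retries` retries.
    Each attempt pays an exponentially distributed delay and fails with
    probability `error_rate` (the 'logical' error)."""
    total = 0.0
    for _ in range(hops):
        for _attempt in range(max_retries + 1):
            total += random.expovariate(1.0 / mean_delay_ms)
            if random.random() > error_rate:
                break          # this hop succeeded
        else:
            return None        # retries exhausted, lookup fails
    return total

if __name__ == "__main__":
    runs = [simulate_lookup() for _ in range(10_000)]
    ok = [r for r in runs if r is not None]
    print(f"success rate: {len(ok) / len(runs):.3f}, "
          f"mean latency of successful lookups: {sum(ok) / len(ok):.1f} ms")
```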
\ No newline at end of file diff --git a/roadmap/codex/updates/2023-08-01/index.html b/roadmap/codex/updates/2023-08-01/index.html deleted file mode 100644 index 73342d37d..000000000 --- a/roadmap/codex/updates/2023-08-01/index.html +++ /dev/null @@ -1,431 +0,0 @@ - - - - - - - - 2023-08-01 Codex weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

2023-08-01 Codex weekly

-

- Last updated -Aug 1, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Codex update Aug 1st

-

# Client

-

# Milestone: Merkelizing block data

- -

# Milestone: Block discovery and retrieval

- -

# Milestone: Distributed Client Testing

- -

# Marketplace

-

# Milestone: L2

- -

# Milestone: Reservations and slot management

- -

# Remote auditing

-

# Milestone: Implement Poseidon2

-
    -
  • First pass at an implementation by Balazs -
      -
    • private repo, but can give access if anyone is interested
    • -
    -
  • -
-

# Milestone: Refine proving system

-
    -
  • Lots of thinking around storage proofs and proving systems -
      -
    • private repo, but can give access if anyone is interested
    • -
    -
  • -
-

# DAS

-

# Milestone: DHT simulations

-
    -
  • Implementing a DHT in Python for the DAS simulator.
  • -
  • Implemented logical error-rates and delays to interactions between DHT clients.
  • -
- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/codex/updates/2023-08-11.html b/roadmap/codex/updates/2023-08-11.html new file mode 100644 index 000000000..4b7954646 --- /dev/null +++ b/roadmap/codex/updates/2023-08-11.html @@ -0,0 +1,150 @@ + +2023-08-11 Codex weekly

Codex update August 11

+
+

Client

+

Milestone: Merkelizing block data

+
    +
  • Initial Merkle Tree implementation - 504
  • +
  • Work on persisting/serializing Merkle Tree is underway, PR upcoming
  • +
+

Milestone: Block discovery and retrieval

+ +

Milestone: Distributed Client Testing

+
    +
  • Continuing working on log collection/analysis and monitoring +
      +
    • Details here 41
    • +
    • More related issues/PRs: + +
    • +
    +
  • +
  • Testing and debugging Codex in the continuous testing environment +
      +
    • Debugging continuous tests 44
    • +
    • pod labeling 39
    • +
    +
  • +
+
+

Infra

+

Milestone: Kubernetes Configuration and Management

+
    +
  • Move Dist-Tests cluster to OVH and define naming conventions
  • +
  • Configure Ingress Controller for Kibana/Grafana
  • +
  • Create documentation for Kubernetes management
  • +
  • Configure Dist/Continuous-Tests Pods logs shipping
  • +
+

Milestone: Continuous Testing and Labeling

+
    +
  • Watch the Continuous tests demo
  • +
  • Implement and configure Dist-Tests labeling
  • +
  • Set up logs shipping based on labels
  • +
  • Improve Docker workflows and add ‘latest’ tag
  • +
+

Milestone: CI/CD and Synchronization

+
    +
  • Set up synchronization by codex-storage
  • +
  • Configure Codex Storage and Demo CI/CD environments
  • +
+
+

Marketplace

+

Milestone: L2

+
    +
  • Taiko L2 integration +
      +
    • Done but merge is blocked by a few issues - 483
    • +
    +
  • +
+

Milestone: Marketplace Sales

+
    +
  • Lots of cleanup and refactoring +
      +
    • Finished refactoring state machine PR link
    • +
    • Added support for loading node’s slots during Sale’s module start link
    • +
    +
  • +
+
+

DAS

+

Milestone: DHT simulations

+
    +
  • Implementing a DHT in Python for the DAS simulator - py-dht.
  • +
+

NOTE: Several people are/were out during the last few weeks, so some milestones are paused until they are back

\ No newline at end of file diff --git a/roadmap/codex/updates/2023-08-11/index.html b/roadmap/codex/updates/2023-08-11/index.html deleted file mode 100644 index c4ed8c1c5..000000000 --- a/roadmap/codex/updates/2023-08-11/index.html +++ /dev/null @@ -1,467 +0,0 @@ - - - - - - - - 2023-08-11 Codex weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

2023-08-11 Codex weekly

-

- Last updated -Aug 11, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Codex update August 11

-
-

# Client

-

# Milestone: Merkelizing block data

- -

# Milestone: Block discovery and retrieval

- -

# Milestone: Distributed Client Testing

- -
-

# Infra

-

# Milestone: Kubernetes Configuration and Management

-
    -
  • Move Dist-Tests cluster to OVH and define naming conventions
  • -
  • Configure Ingress Controller for Kibana/Grafana
  • -
  • Create documentation for Kubernetes management
  • -
  • Configure Dist/Continuous-Tests Pods logs shipping
  • -
-

# Milestone: Continuous Testing and Labeling

-
    -
  • Watch the Continuous tests demo
  • -
  • Implement and configure Dist-Tests labeling
  • -
  • Set up logs shipping based on labels
  • -
  • Improve Docker workflows and add ’latest’ tag
  • -
-

# Milestone: CI/CD and Synchronization

-
    -
  • Set up synchronization by codex-storage
  • -
  • Configure Codex Storage and Demo CI/CD environments
  • -
-
-

# Marketplace

-

# Milestone: L2

- -

# Milestone: Marketplace Sales

-
    -
  • Lots of cleanup and refactoring -
      -
    • Finished refactoring state machine PR - -link
    • -
    • Added support for loading node’s slots during Sale’s module start - -link
    • -
    -
  • -
-
-

# DAS

-

# Milestone: DHT simulations

- -

NOTE: Several people are/were out during the last few weeks, so some milestones are paused until they are back

- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/codex/updates/index.html b/roadmap/codex/updates/index.html new file mode 100644 index 000000000..a16300adc --- /dev/null +++ b/roadmap/codex/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/codex/updates
\ No newline at end of file diff --git a/roadmap/index.html b/roadmap/index.html deleted file mode 100644 index 52566f08d..000000000 --- a/roadmap/index.html +++ /dev/null @@ -1,459 +0,0 @@ - - - - - - - - Roadmaps - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

All Roadmaps

- - - - - -
- -
- -
- -
- - - diff --git a/roadmap/index.xml b/roadmap/index.xml deleted file mode 100644 index 5b772f24c..000000000 --- a/roadmap/index.xml +++ /dev/null @@ -1,297 +0,0 @@ - - - - Roadmaps on - https://roadmap.logos.co/roadmap/ - Recent content in Roadmaps on - Hugo -- gohugo.io - en-us - Mon, 21 Aug 2023 00:00:00 +0000 - - 2023-08-21 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Mon, 21 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac Github Repos: https://www.notion.so/Vac-Repositories-75f7feb3861048f897f0fe95ead08b06 -Vac week 34 August 21th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Test-plans for the perf protocol (99%: need to find why the executable doesn&rsquo;t work) https://github. - - - - Comms Milestones Overview - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Comms Roadmap Comms Projects Comms planner deadlines - - - - Innovation Lab Milestones Overview - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - iLab Milestones can be found on the Notion Page - - - - Nomos Milestones Overview - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Milestones Overview Notion Page - - - - 2023-08-14 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - 2023-08-14 Waku weekly Epics Waku Network Can Support 10K Users {E:2023-10k-users} -All software has been delivered. Pending items are: -Running stress testing on PostgreSQL to confirm performance gain https://github. - - - - 2023-08-17 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Nomos weekly report 14th August Network Privacy and Mixnet Research Mixnet architecture discussions. Potential agreement on architecture not very different from PoC Mixnet preliminary design [https://www. - - - - 2023-08-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 -Vac week 33 August 14th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920 delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925 delivered: Test-plans for the perf protocol https://github. - - - - 2023-08-11 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Codex update August 11 Client Milestone: Merkelizing block data Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504 Work on persisting/serializing Merkle Tree is underway, PR upcoming Milestone: Block discovery and retrieval Continued analysis of block discovery and retrieval - https://hackmd. 
- - - - 2023-08-17 <TEAM> weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Logos Lab 11th of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-08-09 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Wed, 09 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Top level priorities: Logos Growth Plan Status Relaunch Launch of LPE Podcasts (Target: Every week one podcast out) Hiring: TD studio and DC studio roles - - - - 2023-08-06 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Tue, 08 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Milestones for current works are created and used. Next steps are: -Refine scope of research work for rest of the year and create matching milestones for research and waku clients Review work not coming from research and setting dates Note that format matches the Notion page but can be changed easily as it&rsquo;s scripted nwaku Release Process Improvements {E:2023-qa} - - - - 2023-08-07 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Nomos weekly report Network implementation and Mixnet: Research Researched the Nym mixnet architecture in depth in order to design our prototype architecture. - - - - 2023-08-07 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - More info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week): https://www. 
- - - - Codex Milestones Overview - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Milestones Zenhub Tracker Miro Tracker - - - - Milestone: Waku Network supports 10k Users - https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users/ - %%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#BB2528', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7C0000', 'lineColor': '#F8B229', 'secondaryColor': '#006100', 'tertiaryColor': '#fff' } } }%% gantt dateFormat YYYY-MM-DD section Scaling 10k Users :done, 2023-01-20, 2023-07-31 Completion Deliverable TBD - - - - Waku Milestones Overview - https://roadmap.logos.co/roadmap/waku/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/milestones-overview/ - 90% - Waku Network support for 10k users 80% - Waku Network support for 1MM users 65% - Restricted-run (light node) protocols are production ready 60% - Peer management strategy for relay and light nodes are defined and implemented 10% - Quality processes are implemented for nwaku and go-waku 80% - Define and track network and community metrics for continuous monitoring improvement 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties) 15% - Dogfooding of RLN by platforms has started 06% - First protocol to incentivize operators has been defined - - - - 2023-08-02 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Leads roundup - acid Al / Comms -Status app relaunch comms campaign plan in the works. Approx. date for launch 31. - - - - 2023-08-03 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - NOTE: This is a first experimental version moving towards the new reporting structure: -Last week -vc vc::Deep Research milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission related work section milestone (15%, 2023/08/31) Nimbus Tor-push PoC basic torpush encode/decode ( https://github. - - - - 2023-08-02 Innovation Lab weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Wed, 02 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Logos Lab 2nd of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-08-01 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Tue, 01 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Codex update Aug 1st Client Milestone: Merkelizing block data Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md Work break down and review for Ben and Tomasz (epic coming up) This is required to integrate the proving system Milestone: Block discovery and retrieval Some initial work break down and milestones here - https://docs. 
- - - - 2023-07-31 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Nomos 31st July -[Network implementation and Mixnet]: -Research -Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder. - - - - 2023-07-31 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - vc::Deep Research milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission proposed solution section milestone (15%, 2023/08/31) Nimbus Tor-push PoC establishing torswitch and testing code milestone (15%, 2023/11/30) paper on Tor push validator privacy addressed feedback on current version of paper vsu::P2P nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH&rsquo;s EIP-4844 Merged IDontWant ( https://github. - - - - 2023-07-31 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Docs Milestone: Docs general improvement/incorporating feedback (continuous) next: rewrite docs in British English Milestone: Running nwaku in the cloud next: publish guides for Digital Ocean, Oracle, Fly. - - - - 2023-07-24 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Research -Milestone 1: Understanding Data Availability (DA) Problem High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris. - - - - 2023-07-24 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones. - - - - 2023-07-21 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Fri, 21 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Codex update 07/12/2023 to 07/21/2023 Overall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc&hellip; -Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. 
- - - - 2023-07-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Mon, 17 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Last week -vc Vac day in Paris (13th) vc::Deep Research working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Paris offsite Paris (all CCs) vsu::Tokenomics Bugs found and solved in the SNT staking contract attend events in Paris vsu::Distributed Systems Testing Events in Paris QoS on all four infras Continue work on theoretical gossipsub analysis (varying regular graph sizes) Peer extraction using WLS (almost finished) Discv5 testing Wakurtosis CI improvements Provide offline data vip::zkVM onboarding new researcher Prepared and presented ZKVM work during VAC offsite Deep research on Nova vs Stark in terms of performance and related open questions researching Sangria Worked on NEscience document ( https://www. - - - - 2023-07-12 Innovation Lab Weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Wed, 12 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Logos Lab 12th of July Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-07-10 Vac Weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - Mon, 10 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - vc::Deep Research refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192 working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Prepared Paris talks Implemented perf protocol to compare the performances with other libp2ps https://github. - - - - Vac Milestones Overview - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Mon, 01 Jan 0001 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Overview Notion Page - Information copied here for now -Info Structure of milestone names: vac:&lt;unit&gt;:&lt;tag&gt;:&lt;for_project&gt;:&lt;title&gt;_&lt;counter&gt; -vac indicates it is a vac milestone unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc tag tags a specific area / project / epic within the respective vac unit, e. - - - - diff --git a/roadmap/innovation_lab/index.html b/roadmap/innovation_lab/index.html new file mode 100644 index 000000000..a8dc913bc --- /dev/null +++ b/roadmap/innovation_lab/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/innovation_lab

1 items under this folder.

\ No newline at end of file diff --git a/roadmap/innovation_lab/milestones-overview.html b/roadmap/innovation_lab/milestones-overview.html new file mode 100644 index 000000000..6d8ad246f --- /dev/null +++ b/roadmap/innovation_lab/milestones-overview.html @@ -0,0 +1,62 @@ + +Innovation Lab Milestones Overview
\ No newline at end of file diff --git a/roadmap/innovation_lab/milestones-overview/index.html b/roadmap/innovation_lab/milestones-overview/index.html deleted file mode 100644 index ed14c9242..000000000 --- a/roadmap/innovation_lab/milestones-overview/index.html +++ /dev/null @@ -1,365 +0,0 @@ - - - - - - - - Innovation Lab Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

Innovation Lab Milestones Overview

-

- Last updated -Aug 17, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

iLab Milestones can be found on the - -Notion Page

- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/innovation_lab/updates/2023-07-12.html b/roadmap/innovation_lab/updates/2023-07-12.html new file mode 100644 index 000000000..de9a900de --- /dev/null +++ b/roadmap/innovation_lab/updates/2023-07-12.html @@ -0,0 +1,92 @@ + +2023-07-12 Innovation Lab Weekly

Logos Lab 12th of July +Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

+

Milestone: deliver the first transactional Waku Object called Payggy (attached some design screenshots).

+

It is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.

+

There is also work being done on supporting external objects, which enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.

+

Next milestone: group chat support

+

The design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.

+

Deployed version of the main branch: +waku-objects-playground.vercel.app

+

Link to Payggy design files: +64ae9e965652632169060c7d

+

Main development repo: +waku-objects-playground

+

Contact: +You can find us at 1118949151225413872 or join our discord at UtVHf2EU

+
+

Conversation

+
    +
  1. +

    petty 07/15/2023 5:49 AM

    +

    the waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?

    +
  2. +
  3. +

    petty

    +

    the waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?

    +
  4. +
  5. +

    attila🍀 07/15/2023 6:18 AM

    +

    at the moment most of the code is in the waku-objects-playground repo; later we may split it into several repos. Here is the link: waku-objects-playground

    +
  6. +
\ No newline at end of file diff --git a/roadmap/innovation_lab/updates/2023-07-12/index.html b/roadmap/innovation_lab/updates/2023-07-12/index.html deleted file mode 100644 index 81853f452..000000000 --- a/roadmap/innovation_lab/updates/2023-07-12/index.html +++ /dev/null @@ -1,410 +0,0 @@ - - - - - - - - 2023-07-12 Innovation Lab Weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

2023-07-12 Innovation Lab Weekly

-

- Last updated -Jul 12, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Logos Lab 12th of July -Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

-

Milestone: deliver the first transactional Waku Object called Payggy (attached some design screenshots).

-

It is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.

-

There is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.

-

Next milestone: group chat support

-

The design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.

-

Deployed version of the main branch: - - -https://waku-objects-playground.vercel.app/

-

Link to Payggy design files: - - -https://scene.zeplin.io/project/64ae9e965652632169060c7d

-

Main development repo: - - -https://github.com/logos-innovation-lab/waku-objects-playground

-

Contact: -You can find us at - -https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at - -https://discord.gg/UtVHf2EU

-
-

# Conversation

-
    -
  1. -

    petty 07/15/2023 5:49 AM

    -

    the waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?

    -
  2. -
  3. -

    petty

    -

    the waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?

    -
  4. -
  5. -

    attila🍀 07/15/2023 6:18 AM

    -

    at the moment most of the code is in the waku-objects-playground repo; later we may split it into several repos. Here is the link: - -https://github.com/logos-innovation-lab/waku-objects-playground

    -
  6. -
- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/innovation_lab/updates/2023-08-02.html b/roadmap/innovation_lab/updates/2023-08-02.html new file mode 100644 index 000000000..860d06e1b --- /dev/null +++ b/roadmap/innovation_lab/updates/2023-08-02.html @@ -0,0 +1,119 @@ + +2023-08-02 Innovation Lab weekly

Logos Lab 2nd of August +Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

+

The last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite.

+

Still, a lot of progress was made, and the team released the first version of a color system in the form of an npm package, which lets users choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.

+

Milestone: group chat support

+

There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.

+

Next milestone: Splitter Waku Object supporting group chats and smart contracts

+

This will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.

+

Deployed version of the main branch: +waku-objects-playground.vercel.app

+

Main development repo: +waku-objects-playground

+

Grayscale design: +grayscale.design

+

Luminance package on npm: +luminance

+

Contact: +You can find us at 1118949151225413872 or join our discord at ZMU4yyWG

+
+

Conversation

+
    +
  1. +

    fryorcraken Yesterday at 10:58 PM

    +
    +

    There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.

    +
    +

    While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).

    +
  2. +
+

August 3, 2023

+
    +
  1. +

    fryorcraken

    +
    +

     > There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).

    +
    +
  2. +
  3. +

    attila🍀 Today at 4:21 AM

    +

    This is great news and I think it will help with adoption. I did not find a JS API for Status (maybe I was looking in the wrong places); the closest was the status-js-api project, but that still uses Whisper and the repo recommends using js-waku instead 🙂 status-js-api I also found the 56/STATUS-COMMUNITIES spec: 56 It seems to be quite a complete solution for community management with all the bells and whistles. However, our use case is a private group chat for your existing contacts, so it seems to be a bit of overkill for that.

    +
  4. +
  5. +

    fryorcraken Today at 5:32 AM

    +

    The repo is status-im/status-web

    +
  6. +
  7. +

    [5:33 AM]

    +

    Spec is 55

    +
  8. +
  9. +

    fryorcraken

    +

    The repo is status-im/status-web

    +
  10. +
  11. +

    attila🍀 Today at 6:05 AM

    +

    As constructive feedback, I can tell you that it is not trivial to find it and use it in other projects. It is presented as a React component without documentation, and by looking at the code it seems to provide the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app. It seems to be using this package: js, which also does not have documentation. I assume that package is built from this: status-js This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what the requirements and high-level architecture are (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort into the documentation and the presentation as well, and if you have the budget, have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)

    +
  12. +
\ No newline at end of file diff --git a/roadmap/innovation_lab/updates/2023-08-02/index.html b/roadmap/innovation_lab/updates/2023-08-02/index.html deleted file mode 100644 index a23690228..000000000 --- a/roadmap/innovation_lab/updates/2023-08-02/index.html +++ /dev/null @@ -1,463 +0,0 @@ - - - - - - - - 2023-08-02 Innovation Lab weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

2023-08-02 Innovation Lab weekly

-

- Last updated -Aug 2, 2023 - - - -Edit Source - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Logos Lab 2nd of August -Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

-

The last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite.

-

Still, a lot of progress were made and the team released the first version of a color system in the form of an npm package, which lets the users to choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.

-

Milestone: group chat support

-

There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.

-

Next milestone: Splitter Waku Object supporting group chats and smart contracts

-

This will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions.

-

Deployed version of the main branch: - - -https://waku-objects-playground.vercel.app/

-

Main development repo: - - -https://github.com/logos-innovation-lab/waku-objects-playground

-

Grayscale design: - - -https://grayscale.design/

-

Luminance package on npm: - - -https://www.npmjs.com/package/@waku-objects/luminance

-

Contact: -You can find us at - -https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at - -https://discord.gg/ZMU4yyWG

-
-

# Conversation

fryorcraken Yesterday at 10:58 PM

There is a draft PR for group chat support for private groups and it is expected to be finished this week. In the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.

While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat SDK lead and a golang engineer. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries that enable such use cases (1:1 chat, group chat, community chat).

August 3, 2023

fryorcraken

> There is a draft PR for group chat support for private groups and it is expected to be finished this week. In the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat SDK lead and a golang engineer. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries that enable such use cases (1:1 chat, group chat, community chat).

attila🍀 Today at 4:21 AM

This is great news and I think it will help with adoption. I did not find a JS API for Status (maybe I was looking in the wrong places); the closest was the status-js-api project, but that still uses Whisper and the repo recommends using js-waku instead 🙂 https://github.com/status-im/status-js-api
I also found the 56/STATUS-COMMUNITIES spec: https://rfc.vac.dev/spec/56/ It seems to be quite a complete solution for community management with all the bells and whistles. However, our use case is a private group chat for your existing contacts, so it seems to be a bit of an overkill for that.

fryorcraken Today at 5:32 AM

The repo is status-im/status-web

[5:33 AM]

Spec is https://rfc.vac.dev/spec/55/

fryorcraken

The repo is status-im/status-web

attila🍀 Today at 6:05 AM

As constructive feedback, I can tell you that it is not trivial to find it and use it in other projects. It is presented as a React component without documentation, and by looking at the code it seems to provide the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app. It seems to be using this package: https://www.npmjs.com/package/@status-im/js which also does not have documentation. I assume that package is built from this: https://github.com/status-im/status-web/tree/main/packages/status-js This looks promising, but again there is no documentation. Of course you can use the code to figure things out, but at least I would be interested in the requirements and the high-level architecture (does it require an Ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. So maybe a lesson here is to put effort into the documentation and the presentation as well, and if you have the budget, have someone on the team whose main responsibility is that (like a devrel or dev evangelist role).
- - - - diff --git a/roadmap/innovation_lab/updates/2023-08-11.html b/roadmap/innovation_lab/updates/2023-08-11.html new file mode 100644 index 000000000..523f0761f --- /dev/null +++ b/roadmap/innovation_lab/updates/2023-08-11.html @@ -0,0 +1,74 @@ + +2023-08-17 <TEAM> weekly

Logos Lab 11th of August

+

Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

+

We merged the group chat, but it surfaced plenty of issues that were not a problem with 1:1 chats, both with our Waku integration and from a product perspective. We spent the better part of the week fixing these. We also registered a new domain, wakuplay.im, where the latest version is deployed. It uses the Gnosis chain for transactions; currently the xDai and GNO tokens are supported, but it is now easy to add other ERC-20 tokens.

+

Next milestone: Splitter Waku Object supporting group chats and smart contracts

+

This will be the first Waku Object that is meaningful in a group chat context. It will also demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementation has started.

+

Next milestone: Basic Waku Objects website

+

Work has started on the structure for a website, and the content is shaping up nicely. The implementation has started as well.

+

Deployed version of the main branch: https://www.wakuplay.im/

+

Main development repo: https://github.com/logos-innovation-lab/waku-objects-playground

+

Contact: You can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG

\ No newline at end of file diff --git a/roadmap/innovation_lab/updates/2023-08-11/index.html b/roadmap/innovation_lab/updates/2023-08-11/index.html deleted file mode 100644 index 15c82a581..000000000 --- a/roadmap/innovation_lab/updates/2023-08-11/index.html +++ /dev/null @@ -1,373 +0,0 @@ - - - - - - - - 2023-08-17 <TEAM> weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Logos Lab 11th of August

-

Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects.

-

We merged the group chat, but it surfaced plenty of issues that were not a problem with 1:1 chats, both with our Waku integration and from a product perspective. We spent the better part of the week fixing these. We also registered a new domain, wakuplay.im, where the latest version is deployed. It uses the Gnosis chain for transactions; currently the xDai and GNO tokens are supported, but it is now easy to add other ERC-20 tokens.

-

Next milestone: Splitter Waku Object supporting group chats and smart contracts

-

This will be the first Waku Object that is meaningful in a group chat context. It will also demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementation has started.

-

Next milestone: Basic Waku Objects website

-

Work has started on the structure for a website, and the content is shaping up nicely. The implementation has started as well.

-

Deployed version of the main branch: https://www.wakuplay.im/

-

Main development repo: https://github.com/logos-innovation-lab/waku-objects-playground

-

Contact: You can find us at https://discord.com/channels/973324189794697286/1118949151225413872 or join our discord at https://discord.gg/eaYVgSUG

- - -
- - -
- - - - - - -
- -
- - - -
- - - - diff --git a/roadmap/innovation_lab/updates/index.html b/roadmap/innovation_lab/updates/index.html new file mode 100644 index 000000000..13574ab22 --- /dev/null +++ b/roadmap/innovation_lab/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/innovation_lab/updates
\ No newline at end of file diff --git a/roadmap/nomos/index.html b/roadmap/nomos/index.html new file mode 100644 index 000000000..992c0a643 --- /dev/null +++ b/roadmap/nomos/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/nomos

1 item under this folder.

\ No newline at end of file diff --git a/roadmap/nomos/milestones-overview.html b/roadmap/nomos/milestones-overview.html new file mode 100644 index 000000000..d8cbb5133 --- /dev/null +++ b/roadmap/nomos/milestones-overview.html @@ -0,0 +1,62 @@ + +Nomos Milestones Overview
\ No newline at end of file diff --git a/roadmap/nomos/milestones-overview/index.html b/roadmap/nomos/milestones-overview/index.html deleted file mode 100644 index f18837827..000000000 --- a/roadmap/nomos/milestones-overview/index.html +++ /dev/null @@ -1,365 +0,0 @@ - - - - - - - - Nomos Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/nomos/updates/2023-07-24.html b/roadmap/nomos/updates/2023-07-24.html new file mode 100644 index 000000000..d664fea0e --- /dev/null +++ b/roadmap/nomos/updates/2023-07-24.html @@ -0,0 +1,95 @@ + +2023-07-24 Nomos weekly

Research

+
    +
  • Milestone 1: Understanding Data Availability (DA) Problem
  • +
  • High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.
  • +
  • Explored the necessity and key challenges associated with DA.
  • +
  • In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.
  • +
  • Blocker: The experimental tests for our specific EC scheme are pending, which blocks a final decision on KZG + commitments for our architecture.
  • +
  • Milestone 2: Privacy for Proof of Stake (PoS)
  • +
  • Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.
  • +
  • Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.
  • +
  • Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.
  • +
+

Development

+
    +
  • Milestone 1: Mixnet and Networking
  • +
  • Initiated integration of libp2p to be used as the full node’s backend, planning to complete in the next phase.
  • +
  • Began planning the next steps for mixnet integration, with a focus on understanding the Nym mixnet's components, the problems they solve, and the potential for integrating some of them into our codebase.
  • +
  • Milestone 2: Simulation Application
  • +
  • Completed pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. The relevant resources can be found at the following links: + +
  • +
  • Implemented simulation network fixes and warding improvements, and increased the run duration of integration tests. The corresponding pull requests can be accessed here: +
      +
    • Simulation network fix (262)
    • +
    • Vote tally fix (268)
    • +
    • Increased run duration of integration tests (263)
    • +
    • Warding improvements (269)
    • +
    +
  • +
\ No newline at end of file diff --git a/roadmap/nomos/updates/2023-07-24/index.html b/roadmap/nomos/updates/2023-07-24/index.html deleted file mode 100644 index 5a4857c77..000000000 --- a/roadmap/nomos/updates/2023-07-24/index.html +++ /dev/null @@ -1,408 +0,0 @@ - - - - - - - - 2023-07-24 Nomos weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Research

-
    -
  • Milestone 1: Understanding Data Availability (DA) Problem
  • -
  • High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.
  • -
  • Explored the necessity and key challenges associated with DA.
  • -
  • In-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.
  • -
  • Blocker: The experimental tests for our specific EC scheme are pending, which blocks a final decision on KZG + commitments for our architecture.
  • -
  • Milestone 2: Privacy for Proof of Stake (PoS)
  • -
  • Analyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.
  • -
  • Invested time in understanding timing attacks and how Nym mixnet caters to these challenges.
  • -
  • Reviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.
  • -
-

Development

- - - - diff --git a/roadmap/nomos/updates/2023-07-31.html b/roadmap/nomos/updates/2023-07-31.html new file mode 100644 index 000000000..7ff513a57 --- /dev/null +++ b/roadmap/nomos/updates/2023-07-31.html @@ -0,0 +1,97 @@ + +2023-07-31 Nomos weekly

Nomos 31st July

+

[Network implementation and Mixnet]:

+

Research

+
    +
  • Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.
  • +
  • Considered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently no significant benefits from doing so have been identified.
  • +
+

Development

+
    +
  • Fixes were made on the Overlay interface.
  • +
  • Near completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.
  • +
  • Link to libp2p PRs: 278, 279, 280, 281
  • +
  • Started working on the foundation of the libp2p-mixnet transport.
  • +
+

[Private PoS]:

+

Research

+
    +
  • Discussions were held on the Privacy PoS (PPoS) proposal, aligning team members on a general direction.
  • +
  • Reviews on the PPoS proposal were done.
  • +
  • A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.
  • +
  • Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.
  • +
+

[Carnot]:

+

Research

+
    +
  • Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.
  • +
+

Development

+
    +
  • Improved simulation application to meet test scale requirements (274).
  • +
  • Created a strategy to solve the large message sending issue in the simulation application.
  • +
+

[Data Availability Sampling (or VID)]:

+

Research

+
    +
  • Conducted an analysis of the stored-data “degradation” problem for data availability, modeling fractions of nodes that leave the system at regular time intervals.
  • +
  • Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.
  • +
\ No newline at end of file diff --git a/roadmap/nomos/updates/2023-07-31/index.html b/roadmap/nomos/updates/2023-07-31/index.html deleted file mode 100644 index cd008e2c8..000000000 --- a/roadmap/nomos/updates/2023-07-31/index.html +++ /dev/null @@ -1,401 +0,0 @@ - - - - - - - - 2023-07-31 Nomos weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Nomos 31st July

-

[Network implementation and Mixnet]:

-

Research

- -

[Private PoS]:

-

Research

-
    -
  • Discussions were held on the Privacy PoS (PPoS) proposal, aligning team members on a general direction.
  • -
  • Reviews on the PPoS proposal were done.
  • -
  • A proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.
  • -
  • Discussions on merging Efficient PoS (EPoS) with PPoS are in progress.
  • -
-

[Carnot]:

-

Research

-
    -
  • Analyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.
  • -
-

Development

- -

[Data Availability Sampling (or VID)]:

-

Research

-
    -
  • Conducted an analysis of the stored-data “degradation” problem for data availability, modeling fractions of nodes that leave the system at regular time intervals.
  • -
  • Continued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.
  • -
- - - - diff --git a/roadmap/nomos/updates/2023-08-07.html b/roadmap/nomos/updates/2023-08-07.html new file mode 100644 index 000000000..37a24dc7a --- /dev/null +++ b/roadmap/nomos/updates/2023-08-07.html @@ -0,0 +1,115 @@ + +2023-08-07 Nomos weekly

Nomos weekly report

+

Network implementation and Mixnet:

+

Research

+ +

Development

+
    +
  • Implemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and a static topology of the entire mixnode set (a toy sketch of the layered-encryption idea follows this list). +(Link: 288)
  • +
  • Added support for libp2p in tests. +(Link: 287)
  • +
  • Added support for libp2p in nomos node. +(Link: 285)
  • +
+
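As a toy illustration only (this is not the Sphinx packet format and not the code from the linked PR; the key handling and delay values are made up for the example), the layered-encryption idea behind mixing through 3 mixnodes can be sketched like this: the sender wraps the payload once per hop, and each mixnode peels one layer and delays forwarding.

```python
# Toy onion-layering sketch, NOT Sphinx and not the nomos-node implementation.
# Assumes the third-party `cryptography` package is installed.
import random
import time
from cryptography.fernet import Fernet

MIXNODE_KEYS = [Fernet.generate_key() for _ in range(3)]  # 3 mixnodes, static topology

def wrap(payload: bytes) -> bytes:
    # Encrypt for the last hop first so the first mixnode removes the outermost layer.
    for key in reversed(MIXNODE_KEYS):
        payload = Fernet(key).encrypt(payload)
    return payload

def route(packet: bytes) -> bytes:
    # Each mixnode adds a small random delay (the "mixing") and peels one layer.
    for key in MIXNODE_KEYS:
        time.sleep(random.uniform(0.0, 0.01))
        packet = Fernet(key).decrypt(packet)
    return packet

assert route(wrap(b"gossipsub message")) == b"gossipsub message"
```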

Private PoS:

+

Research

+
    +
  • Worked on PPoS design and addressed potential metadata leakage due to staking and rewarding.
  • +
  • Focus on potential bribery attacks and privacy reasoning, but not much progress yet.
  • +
  • Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.
  • +
+

Carnot:

+

Research

+
    +
  • Outlined two solutions to the bribery attack; proposals are pending.
  • +
  • Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.
  • +
  • Modeled data decimation using a specific set of parameters and derived equations related to it.
  • +
  • Proposed solutions to address bribery attacks without compromising the protocol’s scalability.
  • +
+

Data Availability Sampling (VID):

+

Research

+
    +
  • Analyzed data decimation in the data availability problem. +(Link: gzqvbbmfnxyp)
  • +
  • DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.
  • +
  • Explored the idea of node sharding: 1907.03331 (taken from Celestia), but discarded it because it doesn’t fit our architecture.
  • +
+

Testing and Node development:

+
    +
  • Fixes and enhancements made to nomos-node. +(Link: 282) +(Link: 289) +(Link: 293) +(Link: 295)
  • +
  • Ran simulations with 10K nodes.
  • +
  • Updated integration tests in CI to use waku or libp2p network. +(Link: 290)
  • +
  • Fix for the node throughput during simulations. +(Link: 295)
  • +
\ No newline at end of file diff --git a/roadmap/nomos/updates/2023-08-07/index.html b/roadmap/nomos/updates/2023-08-07/index.html deleted file mode 100644 index 8f4bc1f3d..000000000 --- a/roadmap/nomos/updates/2023-08-07/index.html +++ /dev/null @@ -1,450 +0,0 @@ - - - - - - - - 2023-08-07 Nomos weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Nomos weekly report

-

# Network implementation and Mixnet:

-

# Research

- -

# Development

- -

# Private PoS:

-

# Research

-
    -
  • Worked on PPoS design and addressed potential metadata leakage due to staking and rewarding.
  • -
  • Focus on potential bribery attacks and privacy reasoning, but not much progress yet.
  • -
  • Stopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.
  • -
-

# Carnot:

-

# Research

-
    -
  • Addressed two solutions for the bribery attack. Proposals pending.
  • -
  • Work on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.
  • -
  • Modeled data decimation using a specific set of parameters and derived equations related to it.
  • -
  • Proposed solutions to address bribery attacks without compromising the protocol’s scalability.
  • -
-

# Data Availability Sampling (VID):

-

# Research

-
    -
  • Analyzed data decimation in the data availability problem. (Link: https://www.overleaf.com/read/gzqvbbmfnxyp)
  • -
  • DA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.
  • -
  • Explored the idea of node sharding: https://arxiv.org/abs/1907.03331 (taken from Celestia), but discarded it because it doesn’t fit our architecture.
  • -
-

# Testing and Node development:

- - - - diff --git a/roadmap/nomos/updates/2023-08-14.html b/roadmap/nomos/updates/2023-08-14.html new file mode 100644 index 000000000..b816ccb70 --- /dev/null +++ b/roadmap/nomos/updates/2023-08-14.html @@ -0,0 +1,103 @@ + +2023-08-17 Nomos weekly

Nomos weekly report 14th August

+
+

Network Privacy and Mixnet

+

Research

+ +

Development

+
    +
  • Mixnet PoC implementation starting [302]
  • +
  • Implementation of mixnode: a core module for implementing a mixnode binary
  • +
  • Implementation of mixnet-client: a client library for mixnet users, such as nomos-node
  • +
+

Private PoS

+
    +
  • No progress this week.
  • +
+
+

Data Availability

+

Research

+
    +
  • Continued analysis of node decay in data availability problem
  • +
  • Improved the upper bound on the probability that data is no longer available under the (K,N) erasure ECC scheme [gzqvbbmfnxyp] (an illustrative baseline calculation follows this list)
  • +
+
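For intuition only (this is the generic baseline, not the improved bound from the linked note, and the parameter values are made up): with a (K,N) erasure code the data stays available as long as at least K of the N encoded chunks survive, so if each chunk is assumed to survive a period independently with probability p, the loss probability is the binomial tail below K.

```python
from math import comb

def loss_probability(k: int, n: int, p: float) -> float:
    """P(data no longer available) for a (K, N) erasure code when each of the
    N encoded chunks survives independently with probability p: the data is
    lost iff fewer than K chunks remain."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Hypothetical example: need 16 of 32 chunks, 90% per-chunk survival per interval.
print(loss_probability(16, 32, 0.9))  # on the order of 1e-9
```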

Development

+ +
+

Testing, CI and Simulation App

+

Development

+
    +
  • Sim fixes/improvements [299], [298], [295]
  • +
  • Simulation app and instructions shared [300], [291], [294]
  • +
  • CI: Updated and merged [290]
  • +
  • Parallel node init for improved simulation run times [300]
  • +
  • Implemented branch overlay for simulating 100K+ nodes [291]
  • +
  • Sequential builds for nomos node features updated in CI [290]
  • +
\ No newline at end of file diff --git a/roadmap/nomos/updates/2023-08-14/index.html b/roadmap/nomos/updates/2023-08-14/index.html deleted file mode 100644 index af3409e3a..000000000 --- a/roadmap/nomos/updates/2023-08-14/index.html +++ /dev/null @@ -1,396 +0,0 @@ - - - - - - - - 2023-08-17 Nomos weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# Nomos weekly report 14th August

-
-

# Network Privacy and Mixnet

-

# Research

-
    -
  • Mixnet architecture discussions. Potential agreement on architecture not very different from PoC
  • -
  • Mixnet preliminary design [https://www.notion.so/Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]
  • -
-

# Development

-
    -
  • Mixnet PoC implementation starting [https://github.com/logos-co/nomos-node/pull/302]
  • -
  • Implementation of mixnode: a core module for implementing a mixnode binary
  • -
  • Implementation of mixnet-client: a client library for mixnet users, such as nomos-node
  • -
-

# Private PoS

-
    -
  • No progress this week.
  • -
-
-

# Data Availability

-

# Research

-
    -
  • Continued analysis of node decay in data availability problem
  • -
  • Improved the upper bound on the probability that data is no longer available under the (K,N) erasure ECC scheme [https://www.overleaf.com/read/gzqvbbmfnxyp]
  • -
-

# Development

- -
-

# Testing, CI and Simulation App

-

# Development

-
    -
  • Sim fixes/improvements [https://github.com/logos-co/nomos-node/pull/299], [https://github.com/logos-co/nomos-node/pull/298], [https://github.com/logos-co/nomos-node/pull/295]
  • -
  • Simulation app and instructions shared [https://github.com/logos-co/nomos-node/pull/300], [https://github.com/logos-co/nomos-node/pull/291], [https://github.com/logos-co/nomos-node/pull/294]
  • -
  • CI: Updated and merged [https://github.com/logos-co/nomos-node/pull/290]
  • -
  • Parallel node init for improved simulation run times [https://github.com/logos-co/nomos-node/pull/300]
  • -
  • Implemented branch overlay for simulating 100K+ nodes [https://github.com/logos-co/nomos-node/pull/291]
  • -
  • Sequential builds for nomos node features updated in CI [https://github.com/logos-co/nomos-node/pull/290]
  • -
- - - - diff --git a/roadmap/nomos/updates/index.html b/roadmap/nomos/updates/index.html new file mode 100644 index 000000000..d9dea967c --- /dev/null +++ b/roadmap/nomos/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/nomos/updates
\ No newline at end of file diff --git a/roadmap/page/1/index.html b/roadmap/page/1/index.html deleted file mode 100644 index eaa8dd6be..000000000 --- a/roadmap/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/roadmap/ - - - - - - diff --git a/roadmap/page/2/index.html b/roadmap/page/2/index.html deleted file mode 100644 index 2d62afede..000000000 --- a/roadmap/page/2/index.html +++ /dev/null @@ -1,455 +0,0 @@ - - - - - - - - Roadmaps - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - diff --git a/roadmap/page/3/index.html b/roadmap/page/3/index.html deleted file mode 100644 index 1ff318b98..000000000 --- a/roadmap/page/3/index.html +++ /dev/null @@ -1,459 +0,0 @@ - - - - - - - - Roadmaps - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - diff --git a/roadmap/vac/index.html b/roadmap/vac/index.html new file mode 100644 index 000000000..49057db30 --- /dev/null +++ b/roadmap/vac/index.html @@ -0,0 +1,52 @@ + +Vac Roadmap

Welcome to the Vac Roadmap Overview

1 item under this folder.

\ No newline at end of file diff --git a/roadmap/vac/milestones-overview.html b/roadmap/vac/milestones-overview.html new file mode 100644 index 000000000..94e0105ea --- /dev/null +++ b/roadmap/vac/milestones-overview.html @@ -0,0 +1,84 @@ + +Vac Milestones Overview

Overview Notion Page - Information copied here for now

+

Info

+

Structure of milestone names:

+

vac:<unit>:<tag>:<for_project>:<title>_<counter>

+
    +
  • vac indicates it is a vac milestone
  • +
  • unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc
  • +
  • tag tags a specific area / project / epic within the respective vac unit, e.g. nimlibp2p, or zerokit
  • +
  • for_project indicates which Logos project the milestone is mainly for nomos, waku, codex, nimbus, status; or vac (meaning it is internal / helping all projects as a base layer)
  • +
  • title the title of the milestone
  • +
  • counter an optional counter; 01 is implicit; 02 onward marks extensions of previous milestones (a parsing sketch follows this list)
  • +
+
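As a rough illustration of the naming scheme (the helper below is ours, not part of any Vac tooling; the example names are taken from the weekly updates in this roadmap):

```python
def parse_milestone(name: str) -> dict:
    """Split a milestone name of the form
    vac:<unit>:<tag>:<for_project>:<title>[_<counter>] into its parts."""
    _vac, unit, tag, for_project, title = name.split(":", 4)
    title, _, counter = title.partition("_")
    return {
        "unit": unit,                # p2p, dst, tke, acz, sc, zkvm, dr, rfc
        "tag": tag,                  # area / project / epic within the unit
        "for_project": for_project,  # nomos, waku, codex, nimbus, status, or vac
        "title": title,
        "counter": counter or "01",  # 01 is implicit
    }

# Example names taken from the updates below:
print(parse_milestone("vac:p2p:nim-libp2p:vac:maintenance"))
print(parse_milestone("vac:dst:wakurtosis:nomos:ci-integration_02"))
```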

Vac Unit Roadmaps

+
\ No newline at end of file diff --git a/roadmap/vac/milestones-overview/index.html b/roadmap/vac/milestones-overview/index.html deleted file mode 100644 index d0ce5bbb6..000000000 --- a/roadmap/vac/milestones-overview/index.html +++ /dev/null @@ -1,405 +0,0 @@ - - - - - - - - Vac Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Overview Notion Page - Information copied here for now

-

# Info

-

# Structure of milestone names:

-

vac:<unit>:<tag>:<for_project>:<title>_<counter>

-
    -
  • vac indicates it is a vac milestone
  • -
  • unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc
  • -
  • tag tags a specific area / project / epic within the respective vac unit, e.g. nimlibp2p, or zerokit
  • -
  • for_project indicates which Logos project the milestone is mainly for nomos, waku, codex, nimbus, status; or vac (meaning it is internal / helping all projects as a base layer)
  • -
  • title the title of the milestone
  • -
  • counter an optional counter; 01 is implicit; 02 onward marks extensions of previous milestones
  • -
-

# Vac Unit Roadmaps

- - - - diff --git a/roadmap/vac/updates/2023-07-10.html b/roadmap/vac/updates/2023-07-10.html new file mode 100644 index 000000000..1e2d9ae88 --- /dev/null +++ b/roadmap/vac/updates/2023-07-10.html @@ -0,0 +1,109 @@ + +2023-07-10 Vac Weekly
    +
  • vc::Deep Research +
      +
    • refined deep research roadmaps 190, 192
    • +
    • working on comprehensive current/related work study on Validator Privacy
    • +
    • working on PoC of Tor push in Nimbus
    • +
    • working towards comprehensive current/related work study on gossipsub scaling
    • +
    +
  • +
  • vsu::P2P +
      +
    • Prepared Paris talks
    • +
    • Implemented perf protocol to compare the performances with other libp2ps 925
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • Fixing bugs on the SNT staking contract;
    • +
    • Definition of the first formal verification tests for the SNT staking contract;
    • +
    • Slides for the Paris off-site
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • Replicated message rate issue (still on it)
    • +
    • First mockup of offline data
    • +
    • Nomos consensus test working
    • +
    +
  • +
  • vip::zkVM +
      +
    • hiring
    • +
    • onboarding new researcher
    • +
    • presentation on ECC during Logos Research Call (incl. preparation)
    • +
    • more research on nova, considering additional options
    • +
    • Identified 3 research questions to be taken into consideration for the ZKVM and the publication
    • +
    • Researched Poseidon implementation for Nova, Nova-Scotia, Circom
    • +
    +
  • +
  • vip::RLNP2P + +
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-07-10/index.html b/roadmap/vac/updates/2023-07-10/index.html deleted file mode 100644 index ec5d93ce4..000000000 --- a/roadmap/vac/updates/2023-07-10/index.html +++ /dev/null @@ -1,414 +0,0 @@ - - - - - - - - 2023-07-10 Vac Weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/vac/updates/2023-07-17.html b/roadmap/vac/updates/2023-07-17.html new file mode 100644 index 000000000..92877b52c --- /dev/null +++ b/roadmap/vac/updates/2023-07-17.html @@ -0,0 +1,174 @@ + +2023-07-17 Vac weekly

Last week

+
    +
  • vc +
      +
    • Vac day in Paris (13th)
    • +
    +
  • +
  • vc::Deep Research +
      +
    • working on comprehensive current/related work study on Validator Privacy
    • +
    • working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node
    • +
    • working towards comprehensive current/related work study on gossipsub scaling
    • +
    +
  • +
  • vsu::P2P +
      +
    • Paris offsite Paris (all CCs)
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • Bugs found and solved in the SNT staking contract
    • +
    • attend events in Paris
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • Events in Paris
    • +
    • QoS on all four infras
    • +
    • Continue work on theoretical gossipsub analysis (varying regular graph sizes)
    • +
    • Peer extraction using WLS (almost finished)
    • +
    • Discv5 testing
    • +
    • Wakurtosis CI improvements
    • +
    • Provide offline data
    • +
    +
  • +
  • vip::zkVM +
      +
    • onboarding new researcher
    • +
    • Prepared and presented ZKVM work during VAC offsite
    • +
    • Deep research on Nova vs Stark in terms of performance and related open questions
    • +
    • researching Sangria
    • +
      • Worked on the Nescience document (Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)
    • +
    • zerokit: +
        +
      • worked on PR for arc-circom
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • offsite Paris
    • +
    +
  • +
+

This week

+
    +
  • vc
  • +
  • vc::Deep Research +
      +
    • working on comprehensive current/related work study on Validator Privacy
    • +
    • working on PoC of Tor push in Nimbus
    • +
    • working towards comprehensive current/related work study on gossipsub scaling
    • +
    +
  • +
  • vsu::P2P +
      +
    • EthCC & Logos event Paris (all CCs)
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • Attend EthCC and side events in Paris
    • +
    • Integrate staking contracts with radCAD model
    • +
    • Work on a new approach for Codex collateral problem
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • Events in Paris
    • +
    • Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report
    • +
    • Restructure the Analysis script and start modelling Status control messages
    • +
    • Split Wakurtosis analysis module into separate repository (delayed)
    • +
    • Deliver simulation results (incl fixing discv5 error with new Kurtosis version)
    • +
    • Second iteration Nomos CI
    • +
    +
  • +
  • vip::zkVM +
      +
    • Continue researching on Nova open questions and Sangria
    • +
    • Draft the benchmark document (by the end of the week)
    • +
    • research hardware for benchmarks
    • +
    • research Halo2 cont’
    • +
    • zerokit: +
        +
      • merge a PR for deployment of arc-circom
      • +
      • deal with arc-circom master fail
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • offsite paris
    • +
    +
  • +
  • blockers +
      +
    • vip::zkVM:zerokit: ark-circom deployment to crates.io; contact the ark-circom team
    • +
    +
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-07-17/index.html b/roadmap/vac/updates/2023-07-17/index.html deleted file mode 100644 index 834f549d4..000000000 --- a/roadmap/vac/updates/2023-07-17/index.html +++ /dev/null @@ -1,475 +0,0 @@ - - - - - - - - 2023-07-17 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Last week

-
    -
  • vc -
      -
    • Vac day in Paris (13th)
    • -
    -
  • -
  • vc::Deep Research -
      -
    • working on comprehensive current/related work study on Validator Privacy
    • -
    • working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node
    • -
    • working towards comprehensive current/related work study on gossipsub scaling
    • -
    -
  • -
  • vsu::P2P -
      -
    • Paris offsite Paris (all CCs)
    • -
    -
  • -
  • vsu::Tokenomics -
      -
    • Bugs found and solved in the SNT staking contract
    • -
    • attend events in Paris
    • -
    -
  • -
  • vsu::Distributed Systems Testing -
      -
    • Events in Paris
    • -
    • QoS on all four infras
    • -
    • Continue work on theoretical gossipsub analysis (varying regular graph sizes)
    • -
    • Peer extraction using WLS (almost finished)
    • -
    • Discv5 testing
    • -
    • Wakurtosis CI improvements
    • -
    • Provide offline data
    • -
    -
  • -
  • vip::zkVM - -
  • -
  • vip::RLNP2P -
      -
    • offsite Paris
    • -
    -
  • -
-

This week

-
    -
  • vc
  • -
  • vc::Deep Research -
      -
    • working on comprehensive current/related work study on Validator Privacy
    • -
    • working on PoC of Tor push in Nimbus
    • -
    • working towards comprehensive current/related work study on gossipsub scaling
    • -
    -
  • -
  • vsu::P2P -
      -
    • EthCC & Logos event Paris (all CCs)
    • -
    -
  • -
  • vsu::Tokenomics -
      -
    • Attend EthCC and side events in Paris
    • -
    • Integrate staking contracts with radCAD model
    • -
    • Work on a new approach for Codex collateral problem
    • -
    -
  • -
  • vsu::Distributed Systems Testing -
      -
    • Events in Paris
    • -
    • Finish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report
    • -
    • Restructure the Analysis script and start modelling Status control messages
    • -
    • Split Wakurtosis analysis module into separate repository (delayed)
    • -
    • Deliver simulation results (incl fixing discv5 error with new Kurtosis version)
    • -
    • Second iteration Nomos CI
    • -
    -
  • -
  • vip::zkVM -
      -
    • Continue researching on Nova open questions and Sangria
    • -
    • Draft the benchmark document (by the end of the week)
    • -
    • research hardware for benchmarks
    • -
    • research Halo2 cont'
    • -
    • zerokit: -
        -
      • merge a PR for deployment of arc-circom
      • -
      • deal with arc-circom master fail
      • -
      -
    • -
    -
  • -
  • vip::RLNP2P -
      -
    • offsite paris
    • -
    -
  • -
  • blockers -
      -
    • vip::zkVM:zerokit: ark-circom deployment to crates.io; contact the ark-circom team
    • -
    -
  • -
- - - - diff --git a/roadmap/vac/updates/2023-07-24.html b/roadmap/vac/updates/2023-07-24.html new file mode 100644 index 000000000..2f549f5a5 --- /dev/null +++ b/roadmap/vac/updates/2023-07-24.html @@ -0,0 +1,298 @@ + +2023-08-03 Vac weekly

NOTE: This is a first experimental version moving towards the new reporting structure:

+

Last week

+
    +
  • vc
  • +
  • vc::Deep Research +
      +
    • milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission +
        +
      • related work section
      • +
      +
    • +
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC +
        +
      • basic torpush encode/decode ( 1 )
      • +
      +
    • +
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy +
        +
      • (focus on Tor-push PoC)
      • +
      +
    • +
    +
  • +
  • vsu::P2P +
      +
    • admin/misc +
        +
      • EthCC (all CCs)
      • +
      +
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • admin/misc +
        +
      • Attended EthCC and side events in Paris
      • +
      +
    • +
    • milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management +
        +
      • Kicked off a new approach for Codex collateral problem
      • +
      +
    • +
    • milestone (50%, 2023/08/30) SNT staking smart contract +
        +
      • Integrated SNT staking contracts with Python
      • +
      +
    • +
    • milestone (50%, 2023/07/14) SNT litepaper +
        +
      • (delayed)
      • +
      +
    • +
    • milestone(30%, 2023/09/29) Nomos Token: requirements and constraints
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • milestone (95%, 2023/07/31) Wakurtosis Waku Report +
        +
      • Add timeout to injection async call in WLS to avoid further issues (PR #139)
      • +
      • Plotting & analysing 100 msg/s offline Prometheus data
      • +
      +
    • +
    • milestone (90%, 2023/07/31) Nomos CI testing +
        +
      • fixed errors in Nomos consensus simulation
      • +
      +
    • +
    • milestone (30%, …) gossipsub model analysis +
        +
      • add config options to script, allowing to load configs that can be directly compared to Wakurtosis results
      • +
      • added support for small world networks
      • +
      +
    • +
    • admin/misc +
        +
      • Interviews & reports for SE and STA positions
      • +
      • EthCC (1 CC)
      • +
      +
    • +
    +
  • +
  • vip::zkVM +
      +
    • milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) +
        +
      • (write ups will be available here: zkVM-cd358fe429b14fa2ab38ca42835a8451)
      • +
      • Solved the open questions on Nova and completed the document (will update the page)
      • +
      • Reviewed Nescience and working on a document
      • +
      • Reviewed partly the write up on FHE
      • +
      • writeup for Nova and Sangria; research on super nova
      • +
      • reading a new paper revisiting Nova (969)
      • +
      +
    • +
    • milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations
    • +
    • zkvm +
        +
      • Researching Nova to understand the folding technique for ZKVM adaptation
      • +
      +
    • +
    • zerokit +
        +
      • Rostyslav became circom-compat maintainer
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro +
        +
      • completed
      • +
      +
    • +
    • milestone (95%, 2023/07/31) RLN-Relay Waku production readiness
    • +
    • admin/misc +
        +
      • EthCC + offsite
      • +
      +
    • +
    +
  • +
+

This week

+
    +
  • vc
  • +
  • vc::Deep Research +
      +
    • milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission + +
    • +
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC +
        +
      • working on establishing a connection via nim-libp2p tor-transport
      • +
      • setting up goerli test node (cont’)
      • +
      +
    • +
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy +
        +
      • continue working on paper
      • +
      +
    • +
    +
  • +
  • vsu::P2P +
      +
    • milestone (…) +
        +
      • Implement ChokeMessage for GossipSub
      • +
      • Continue “limited flood publishing” (911)
      • +
      +
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • admin/misc: +
        +
      • (3 CC days off)
      • +
      • Catch up with EthCC talks that we couldn’t attend (schedule conflicts)
      • +
      +
    • +
    • milestone (50%, 2023/07/14) SNT litepaper +
        +
      • Start building the SNT agent-based simulation
      • +
      +
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • milestone (100%, 2023/07/31) Wakurtosis Waku Report +
        +
      • finalize simulations
      • +
      • finalize report
      • +
      +
    • +
    • milestone (100%, 2023/07/31) Nomos CI testing +
        +
      • finalize milestone
      • +
      +
    • +
    • milestone (30%, …) gossipsub model analysis +
        +
      • Incorporate Status control messages
      • +
      +
    • +
    • admin/misc +
        +
      • Interviews & reports for SE and STA positions
      • +
      • EthCC (1 CC)
      • +
      +
    • +
    +
  • +
  • vip::zkVM +
      +
    • milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) +
        +
      • Refine the Nescience WIP and FHE documents
      • +
      • research HyperNova
      • +
      +
    • +
    • milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations +
        +
      • Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks
      • +
      +
    • +
    • zkvm
    • +
    • zerokit +
        +
      • circom: reach an agreement with other maintainers on master branch situation
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • maintenance +
        +
      • investigate why docker builds of nwaku are failing [zerokit dependency related]
      • +
      • documentation on how to use rln for projects interested (console)
      • +
      +
    • +
    • milestone (95%, 2023/07/31) RLN-Relay Waku production readiness +
        +
      • revert rln bandwidth reduction based on offsite discussion, move to different validator
      • +
      +
    • +
    +
  • +
  • blockers
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-07-24/index.html b/roadmap/vac/updates/2023-07-24/index.html deleted file mode 100644 index 1a413b306..000000000 --- a/roadmap/vac/updates/2023-07-24/index.html +++ /dev/null @@ -1,612 +0,0 @@ - - - - - - - - 2023-08-03 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: This is a first experimental version moving towards the new reporting structure:

-

Last week

-
    -
  • vc
  • -
  • vc::Deep Research -
      -
    • milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission -
        -
      • related work section
      • -
      -
    • -
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC - -
    • -
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy -
        -
      • (focus on Tor-push PoC)
      • -
      -
    • -
    -
  • -
  • vsu::P2P -
      -
    • admin/misc -
        -
      • EthCC (all CCs)
      • -
      -
    • -
    -
  • -
  • vsu::Tokenomics -
      -
    • admin/misc -
        -
      • Attended EthCC and side events in Paris
      • -
      -
    • -
    • milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management -
        -
      • Kicked off a new approach for Codex collateral problem
      • -
      -
    • -
    • milestone (50%, 2023/08/30) SNT staking smart contract -
        -
      • Integrated SNT staking contracts with Python
      • -
      -
    • -
    • milestone (50%, 2023/07/14) SNT litepaper -
        -
      • (delayed)
      • -
      -
    • -
    • milestone(30%, 2023/09/29) Nomos Token: requirements and constraints
    • -
    -
  • -
  • vsu::Distributed Systems Testing -
      -
    • milestone (95%, 2023/07/31) Wakurtosis Waku Report - -
    • -
    • milestone (90%, 2023/07/31) Nomos CI testing -
        -
      • fixed errors in Nomos consensus simulation
      • -
      -
    • -
    • milestone (30%, …) gossipsub model analysis -
        -
      • add config options to script, allowing to load configs that can be directly compared to Wakurtosis results
      • -
      • added support for small world networks
      • -
      -
    • -
    • admin/misc -
        -
      • Interviews & reports for SE and STA positions
      • -
      • EthCC (1 CC)
      • -
      -
    • -
    -
  • -
  • vip::zkVM -
      -
    • milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) - -
    • -
    • milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations
    • -
    • zkvm -
        -
      • Researching Nova to understand the folding technique for ZKVM adaptation
      • -
      -
    • -
    • zerokit -
        -
      • Rostyslav became circom-compat maintainer
      • -
      -
    • -
    -
  • -
  • vip::RLNP2P -
      -
    • milestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro -
        -
      • completed
      • -
      -
    • -
    • milestone (95%, 2023/07/31) RLN-Relay Waku production readiness
    • -
    • admin/misc -
        -
      • EthCC + offsite
      • -
      -
    • -
    -
  • -
-

This week

-
    -
  • vc
  • -
  • vc::Deep Research -
      -
    • milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission - -
    • -
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC -
        -
      • working on establishing a connection via nim-libp2p tor-transport
      • -
      • setting up goerli test node (cont')
      • -
      -
    • -
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy -
        -
      • continue working on paper
      • -
      -
    • -
    -
  • -
  • vsu::P2P - -
  • -
  • vsu::Tokenomics -
      -
    • admin/misc: -
        -
      • (3 CC days off)
      • -
      • Catch up with EthCC talks that we couldn’t attend (schedule conflicts)
      • -
      -
    • -
    • milestone (50%, 2023/07/14) SNT litepaper -
        -
      • Start building the SNT agent-based simulation
      • -
      -
    • -
    -
  • -
  • vsu::Distributed Systems Testing -
      -
    • milestone (100%, 2023/07/31) Wakurtosis Waku Report -
        -
      • finalize simulations
      • -
      • finalize report
      • -
      -
    • -
    • milestone (100%, 2023/07/31) Nomos CI testing -
        -
      • finalize milestone
      • -
      -
    • -
    • milestone (30%, …) gossipsub model analysis -
        -
      • Incorporate Status control messages
      • -
      -
    • -
    • admin/misc -
        -
      • Interviews & reports for SE and STA positions
      • -
      • EthCC (1 CC)
      • -
      -
    • -
    -
  • -
  • vip::zkVM -
      -
    • milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) -
        -
      • Refine the Nescience WIP and FHE documents
      • -
      • research HyperNova
      • -
      -
    • -
    • milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations -
        -
      • Continue exploring Nova and other ZKPs and start technical writing on Nova benchmarks
      • -
      -
    • -
    • zkvm
    • -
    • zerokit -
        -
      • circom: reach an agreement with other maintainers on master branch situation
      • -
      -
    • -
    -
  • -
  • vip::RLNP2P - -
  • -
  • blockers
  • -
- - - - diff --git a/roadmap/vac/updates/2023-07-31.html b/roadmap/vac/updates/2023-07-31.html new file mode 100644 index 000000000..43c80b635 --- /dev/null +++ b/roadmap/vac/updates/2023-07-31.html @@ -0,0 +1,171 @@ + +2023-07-31 Vac weekly
    +
  • vc::Deep Research +
      +
    • milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission +
        +
      • proposed solution section
      • +
      +
    • +
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC +
        +
      • establishing torswitch and testing code
      • +
      +
    • +
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy
    • +
    • addressed feedback on current version of paper
    • +
    +
  • +
  • vsu::P2P +
      +
    • nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH’s EIP-4844 +
        +
      • Merged IDontWant (934) & Limit flood publishing (911) 𝕏
      • +
      • This wraps up the “mandatory” optimizations for 4844. We will continue working on stagger sending and other optimizations
      • +
      +
    • +
    • nim-libp2p: (70%, 2023/07/31) WebRTC transport
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • admin/misc +
        +
      • 2 CCs off for the week
      • +
      +
    • +
    • milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management
    • +
    • milestone (50%, 2023/08/30) SNT staking smart contract
    • +
    • milestone (50%, 2023/07/14) SNT litepaper
    • +
    • milestone (30%, 2023/09/29) Nomos Token: requirements and constraints
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • admin/misc +
        +
      • Analysis module extracted from wakurtosis repo (142, DST-Analysis)
      • +
      • hiring
      • +
      +
    • +
    • milestone (99%, 2023/07/31) Wakurtosis Waku Report +
        +
      • Re-run simulations
      • +
      • merge Discv5 PR (129).
      • +
      • finalize Wakurtosis Tech Report v2
      • +
      +
    • +
    • milestone (100%, 2023/07/31) Nomos CI testing +
        +
      • delivered first version of Nomos CI integration (141)
      • +
      +
    • +
    • milestone (30%, 2023/08/31 gossipsub model: Status control messages +
        +
      • Waku model is updated to model topics/content-topics
      • +
      +
    • +
    +
  • +
  • vip::zkVM +
      +
    • milestone(50%, 2023/08/31) background/research on existing proof systems (nova, sangria…) +
        +
      • achievement :: nova questions answered (see document in Project: zkVM-cd358fe429b14fa2ab38ca42835a8451)
      • +
      • Nescience WIP done (to be delivered next week, priority)
      • +
      • FHE review (lower prio)
      • +
      +
    • +
    • milestone (50%, 2023/08/31) new fair benchmarks + recursive implementations +
        +
      • Working on discoveries about other benchmarks done on plonky2, starky, and halo2
      • +
      +
    • +
    • zkvm
    • +
    • zerokit +
        +
      • fixed ark-circom master
      • +
      • achievement :: publish ark-circom ark-circom
      • +
      • achievement :: publish zerokit_utils zerokit_utils
      • +
      • achievement :: publish rln rln (𝕏 jointly with RLNP2P)
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • milestone (100%, 2023/07/31) RLN-Relay Waku production readiness +
        +
      • Updated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - rln-contract and waku-rln-contract
      • +
      • Deployed to sepolia
      • +
      • Fixed rln enabled docker image building in nwaku - 1853
      • +
      +
    • +
    • zerokit: +
        +
      • achievement :: zerokit v0.3.0 release done - v0.3.0 (𝕏 jointly with zkVM)
      • +
      +
    • +
    +
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-07-31/index.html b/roadmap/vac/updates/2023-07-31/index.html deleted file mode 100644 index b0ec00584..000000000 --- a/roadmap/vac/updates/2023-07-31/index.html +++ /dev/null @@ -1,497 +0,0 @@ - - - - - - - - 2023-07-31 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    -
  • vc::Deep Research -
      -
    • milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission -
        -
      • proposed solution section
      • -
      -
    • -
    • milestone (15%, 2023/08/31) Nimbus Tor-push PoC -
        -
      • establishing torswitch and testing code
      • -
      -
    • -
    • milestone (15%, 2023/11/30) paper on Tor push validator privacy
    • -
    • addressed feedback on current version of paper
    • -
    -
  • -
  • vsu::P2P - -
  • -
  • vsu::Tokenomics -
      -
    • admin/misc -
        -
      • 2 CCs off for the week
      • -
      -
    • -
    • milestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management
    • -
    • milestone (50%, 2023/08/30) SNT staking smart contract
    • -
    • milestone (50%, 2023/07/14) SNT litepaper
    • -
    • milestone (30%, 2023/09/29) Nomos Token: requirements and constraints
    • -
    -
  • -
  • vsu::Distributed Systems Testing - -
  • -
  • vip::zkVM - -
  • -
  • vip::RLNP2P - -
  • -
- - - - diff --git a/roadmap/vac/updates/2023-08-07.html b/roadmap/vac/updates/2023-08-07.html new file mode 100644 index 000000000..5992d15e6 --- /dev/null +++ b/roadmap/vac/updates/2023-08-07.html @@ -0,0 +1,192 @@ + +2023-08-07 Vac weekly

More info on Vac Milestones, including due dates and progress (currently working on this; some milestones do not have the new format yet, first version planned for this week): Vac-Roadmap-907df7eeac464143b00c6f49a20bb632

+

Vac week 32 August 7th

+
    +
  • vsu::P2P +
      +
    • vac:p2p:nim-libp2p:vac:maintenance +
        +
      • Improve gossipsub DDoS resistance 920
      • +
      +
    • +
    • vac:p2p:nim-chronos:vac:maintenance +
        +
      • Remove hard-coded ports from test 429
      • +
      • Investigate flaky test using REUSE_PORT
      • +
      +
    • +
    +
  • +
  • vsu::Tokenomics +
      +
    • (…)
    • +
    +
  • +
  • vsu::Distributed Systems Testing +
      +
    • vac:dst:wakurtosis:waku:techreport + +
    • +
    • vac:dst:wakurtosis:vac:rlog +
        +
      • working on research log post on Waku Wakurtosis simulations
      • +
      +
    • +
    • vac:dst:gsub-model:status:control-messages +
        +
      • delivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)
      • +
      +
    • +
    • vac:dst:gsub-model:vac:refactoring +
        +
      • Refactoring and bug fixes
      • +
      • introduced and tested 2 new analytical models
      • +
      +
    • +
    • vac:dst:wakurtosis:waku:topology-analysis +
        +
      • delivered: extracted into separate module, independent of wls message
      • +
      +
    • +
    • vac:dst:wakurtosis:nomos:ci-integration_02 +
        +
      • planning
      • +
      +
    • +
    • vac:dst:10ksim:vac:10ksim-bandwidth-test + +
    • +
    +
  • +
  • vip::zkVM +
      +
    • vac:zkvm::vac:research-existing-proof-systems + +
    • +
    • vac:zkvm::vac:proof-system-benchmarks +
        +
      • More discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level
      • +
      • Viewed some circuits on Nova and Poseidon
      • +
      • Read through Halo2 code (and Poseidon code) from Axiom
      • +
      +
    • +
    +
  • +
  • vip::RLNP2P +
      +
    • vac:acz:rlnp2p:waku:production-readiness +
        +
      • Waku rln contract registry - 3
      • +
      • mark duplicated messages as spam - 1867
      • +
      • use waku-org/waku-rln-contract as a submodule in nwaku - 1884
      • +
      +
    • +
    • vac:acz:zerokit:vac:maintenance + +
    • +
    • vac:acz:zerokit:vac:zerokit-v0.4 +
        +
      • zerokit v0.4.0 release planning - 197
      • +
      +
    • +
    +
  • +
  • vc::Deep Research +
      +
    • vac:dr:valpriv:vac:tor-push-poc +
        +
      • redesigned the torpush integration in nimbus 2
      • +
      +
    • +
    • vac:dr:valpriv:vac:tor-push-relwork +
        +
      • Addressed further comments in paper, improved intro, added source level variation approach
      • +
      +
    • +
    • vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report +
        +
      • continued work on the document
      • +
      +
    • +
    +
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-08-07/index.html b/roadmap/vac/updates/2023-08-07/index.html deleted file mode 100644 index e320f8d5b..000000000 --- a/roadmap/vac/updates/2023-08-07/index.html +++ /dev/null @@ -1,528 +0,0 @@ - - - - - - - - 2023-08-07 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - diff --git a/roadmap/vac/updates/2023-08-14.html b/roadmap/vac/updates/2023-08-14.html new file mode 100644 index 000000000..27174a5c6 --- /dev/null +++ b/roadmap/vac/updates/2023-08-14.html @@ -0,0 +1,192 @@ + +2023-08-17 Vac weekly

Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632

+

Vac week 33 August 14th

+
+

vsu::P2P

+

vac:p2p:nim-libp2p:vac:maintenance

+
    +
  • Improve gossipsub DDoS resistance 920
  • +
  • delivered: Perf protocol 925
  • +
  • delivered: Test-plans for the perf protocol perf-nim
  • +
  • Bandwidth estimate as a parameter (waiting for final review) 941
  • +
+

vac:p2p:nim-chronos:vac:maintenance

+
    +
  • delivered: Remove hard-coded ports from test 429
  • +
  • delivered: fixed flaky test using REUSE_PORT 438
  • +
+
+

vsu::Tokenomics

+
    +
  • admin/misc: +
      +
    • (5 CC days off)
    • +
    +
  • +
+

vac:tke::codex:economic-analysis

+
    +
  • Filecoin economic structure and Codex token requirements
  • +
+

vac:tke::status:SNT-staking

+
    +
  • tests with the contracts
  • +
+

vac:tke::nomos:economic-analysis

+
    +
  • resume discussions with Nomos team
  • +
+
+

vsu::Distributed Systems Testing (DST)

+

vac:dst:wakurtosis:waku:techreport

+
    +
  • 1st Draft of Wakurtosis Research Blog (123)
  • +
  • Data Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)
  • +
+

vac:dst:shadow:vac:basic-shadow-simulation

+
    +
  • Basic Shadow Simulation of a gossipsub node (Setup, 5nodes)
  • +
+

vac:dst:10ksim:vac:10ksim-bandwidth-test

+
    +
  • Plan how to refactor/generalize the testing tool from Codex.
  • +
  • Learn more about Kubernetes
  • +
+

vac:dst:wakurtosis:nomos:ci-integration_02

+
    +
  • Enable subnetworks
  • +
  • Plan how to use wakurtosis with fixed version
  • +
+

vac:dst:eng:vac:bundle-simulation-data

+
    +
  • Run requested simulations
  • +
+
+

vsu::Smart Contracts (SC)

+

vac:sc::vac:secureum-upskilling

+
    +
  • Learned about +
      +
    • cold vs warm storage reads and their gas implications
    • +
    • UTXO vs account models
    • +
    • DELEGATECALL vs CALLCODE opcodes, CREATE vs CREATE2 opcodes; Yul Assembly
    • +
    • Unstructured proxies eip-1967
    • +
    • C3 Linearization 2694 (Diamond inheritance and resolution)
    • +
    +
  • +
  • Uniswap deep dive
  • +
  • Finished Secureum slot 2 and 3
  • +
+

vac:sc::vac:maintainance/misc

+
    +
  • Introduced Vac’s own foundry-template for smart contract projects +
      +
    • Goal is to have the same project structure across projects
    • +
    • Github repository: foundry-template
    • +
    +
  • +
+
+

vsu::Applied Cryptography & ZK (ACZ)

+
    +
  • vac:acz:zerokit:vac:maintenance + +
  • +
+
+

vip::zkVM

+

vac:zkvm::vac:research-existing-proof-systems

+
    +
  • delivered Nescience WIP doc
  • +
  • delivered FHE review
  • +
  • delivered Nova vs Sangria comparison; some discussions during the meeting
  • +
  • started HyperNova writeup
  • +
  • started writing a trimmed version of FHE writeup
  • +
  • researched CCS (for HyperNova)
  • +
  • Research Protogalaxy 1106 and Protostar 620.
  • +
+

vac:zkvm::vac:proof-system-benchmarks

+
    +
  • More work on benchmarks is ongoing
  • +
  • Putting down a document that explains the differences
  • +
+
+

vc::Deep Research

+

vac:dr:valpriv:vac:tor-push-poc

+
    +
  • revised the code for PR
  • +
+

vac:dr:valpriv:vac:tor-push-relwork

+
    +
  • added section for mixnet, non-Tor/non-onion routing-based anonymity network
  • +
+

vac:dr:gsub-scaling:vac:gossipsub-simulation

+
    +
  • Used shadow simulator to run first GossipSub simulation
  • +
+

vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report

+
    +
  • Finalized 1st draft of the GossipSub scaling article
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-08-14/index.html b/roadmap/vac/updates/2023-08-14/index.html deleted file mode 100644 index c4806cc6a..000000000 --- a/roadmap/vac/updates/2023-08-14/index.html +++ /dev/null @@ -1,567 +0,0 @@ - - - - - - - - 2023-08-17 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
- - - - diff --git a/roadmap/vac/updates/2023-08-21.html b/roadmap/vac/updates/2023-08-21.html new file mode 100644 index 000000000..df04c602f --- /dev/null +++ b/roadmap/vac/updates/2023-08-21.html @@ -0,0 +1,257 @@ + +2023-08-21 Vac weekly

Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 +Vac Github Repos: Vac-Repositories-75f7feb3861048f897f0fe95ead08b06

+

Vac week 34 August 21st

+

vsu::P2P

+
    +
  • vac:p2p:nim-libp2p:vac:maintenance +
      +
    • Test-plans for the perf protocol (99%: need to find why the executable doesn’t work) 262
    • +
    • WebRTC: Merge all protocols (60%: slowed down by some complications and bad planning with Mbed-TLS) 3
    • +
    • WebRTC: DataChannel (25%)
    • +
    +
  • +
+

vsu::Tokenomics

+
    +
  • admin/misc: +
      +
    • (3 CC days off)
    • +
    +
  • +
  • vac:tke::codex:economic-analysis +
      +
    • Call w/ Codex on token incentives, business analysis of Filecoin
    • +
    +
  • +
  • vac:tke::status:SNT-staking +
      +
    • Bug fixes for tests for the contracts
    • +
    +
  • +
  • vac:tke::nomos:economic-analysis +
      +
    • Narrowed focus to: 1) quantifying bribery attacks, 2) assessing how to min risks and max privacy of delegated staking
    • +
    +
  • +
  • vac:tke::waku:economic-analysis +
      +
    • Caught up w/ Waku team on RLN, adopting a proactive effort to pitch them solutions
    • +
    +
  • +
+

vsu::Distributed Systems Testing (DST)

+
    +
  • vac:dst:wakurtosis:vac:rlog + +
  • +
  • vac:dst:shadow:vac:basic-shadow-simulation +
      +
    • Run 10K simulation of basic gossipsub node
    • +
    +
  • +
  • vac:dst:gsub-model:status:control-messages +
      +
    • Got access to status superset
    • +
    +
  • +
  • vac:dst:analysis:nomos:nomos-simulation-analysis +
      +
    • Basic CLI done, json to csv, can handle 10k nodes (see the JSON-to-CSV sketch at the end of this DST section)
    • +
    +
  • +
  • vac:dst:wakurtosis:waku:topology-analysis +
      +
    • Collection + analysis: now supports all waku protocols, along with relay
    • +
    • Cannot get gossip-sub peerage from waku or prometheus (working on getting info from gossipsub layer)
    • +
    +
  • +
  • vac:dst:wakurtosis:waku:techreport_02 +
      +
    • Merged 4 pending PRs; master now supports regular graphs
    • +
    +
  • +
  • vac:dst:eng:vac:bundle-simulation-data +
      +
    • Run 1 and 10 rate simulations. 100 still being run
    • +
    +
  • +
  • vac:dst:10ksim:vac:10ksim-bandwidth-test +
      +
    • Working on splitting the structure of the Codex tool; also working on diagrams
    • +
    +
  • +
+
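As referenced in the simulation-analysis item above, converting a simulation's JSON dump to CSV is the core of such a CLI. A rough Go sketch under assumed names (simulation.json, node_id, msgs_sent, bandwidth_kb are hypothetical placeholders, not the actual Nomos data format):

package main

import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// NodeStat is a hypothetical per-node record from a simulation JSON dump.
type NodeStat struct {
	NodeID      string  `json:"node_id"`
	MsgsSent    int     `json:"msgs_sent"`
	BandwidthKB float64 `json:"bandwidth_kb"`
}

func main() {
	raw, err := os.ReadFile("simulation.json") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	var stats []NodeStat
	if err := json.Unmarshal(raw, &stats); err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("simulation.csv") // hypothetical output file
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	w := csv.NewWriter(out)
	defer w.Flush()
	w.Write([]string{"node_id", "msgs_sent", "bandwidth_kb"})
	for _, s := range stats {
		// One CSV row per node; this streams fine for 10k+ records.
		w.Write([]string{s.NodeID, fmt.Sprint(s.MsgsSent), fmt.Sprint(s.BandwidthKB)})
	}
}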

vsu::Smart Contracts (SC)

+
    +
  • vac:sc::status:community-contracts-ERC721 +
      +
    • delivered (will need maintenance and adding features as requested in the future)
    • +
    +
  • +
  • vac:sc::status:community-contracts-ERC20 +
      +
    • started working on ERC20 contracts
    • +
    +
  • +
  • vac:sc::vac:secureum-upskilling +
      +
    • Secureum: Finished Epoch 0, Slot 4 and 5
    • +
    • Deep dive on First Depositor/Inflation attacks
    • +
    • Learned about Minimal Proxy Contract pattern
    • +
    • More Uniswap V2 protocol reading
    • +
    +
  • +
  • vac:sc::vac:maintainance/misc +
      +
    • Worked on moving community dapp contracts to new foundry-template
    • +
    +
  • +
+

vsu::Applied Cryptography & ZK (ACZ)

+
    +
  • vac:acz:rlnp2p:waku:rln-relay-enhancments +
      +
    • rpc handler for waku rln relay - 1852
    • +
    • fixed ganache’s change in method to manage subprocesses, fixed timeouts related to it - 1913
    • +
    • should error out on rln-relay mount failure - 1904
    • +
    • fixed invalid start index being used in rln-relay - 1915
    • +
    • constrain the values that can be used as idCommitments in the rln-contract - 26
    • +
    • assist with waku-simulator testing
    • +
    • remove registration capabilities from nwaku, it should be done out of band - 1916
    • +
    • add deployedBlockNumber to the rln-contract for ease of fetching events from the client - 27
    • +
    +
  • +
  • vac:acz:zerokit:vac:maintenance +
      +
    • exposed seq_atomic_operation ffi api to allow users to make use of the current index without making multiple ffi calls - 206
    • +
    • use pmtree instead of vacp2p_pmtree now that changes have been upstreamed - 203
    • +
    • Prepared a PR to fix a stopgap introduced by PR 201 - 207
    • +
    • PR review 202, 206
    • +
    +
  • +
  • vac:acz:zerokit:vac:zerokit-v0.4 +
      +
    • substitute id_commitments for rate_commitments and update tests in rln-v2 - 205
    • +
    • rln-v2 working branch - 204
    • +
    • misc research while ooo:
    • +
    • stealth commitment scheme inspired by erc-5564 - erc-5564-bn254, associated circuit - circom-rln-erc5564 (very heavy on the constraints)
    • +
    +
  • +
+

vip::zkVM

+
    +
  • vac:zkvm::vac:research-existing-proof-systems + +
  • +
  • vac:zkvm::vac:proof-system-benchmarks + +
  • +
+

vc::Deep Research

+
    +
  • vac:dr:valpriv:vac:tor-push-poc +
      +
    • Reimplemented torpush without any gossip sharing
    • +
    • Added discovering peers for torpush in every epoch/10 minutes
    • +
    • torswitch directly pushes messages to separately identified peers
    • +
    +
  • +
  • vac:dr:valpriv:vac:tor-push-relwork +
      +
    • added quantified measures related to privacy in the paper section
    • +
    +
  • +
  • vac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report +
      +
    • Explored different unstructured p2p application architectures
    • +
    • Studied literature on better bandwidth utilization in unstructured p2p networks.
    • +
    +
  • +
  • vac:dr:gsub-scaling:vac:gossipsub-simulation +
      +
    • Worked on GossipSub simulation in shadow simulator. Tried understanding different libp2p functions
    • +
    • Created short awk scripts for analyzing results.
    • +
    +
  • +
  • vac:dr:consensus:nomos:carnot-bribery-article +
      +
    • Continue work on the article on bribery attacks, PoS and Carnot
    • +
    • Completed presentation about the bribery attacks and Carnot
    • +
    +
  • +
  • vac:dr:consensus:nomos:carnot-paper +
      +
    • Discussed Carnot tests and results with Nomos team. Some adjustments to the parameters need to be made to get accurate results.
    • +
    +
  • +
\ No newline at end of file diff --git a/roadmap/vac/updates/2023-08-21/index.html b/roadmap/vac/updates/2023-08-21/index.html deleted file mode 100644 index b96a0232e..000000000 --- a/roadmap/vac/updates/2023-08-21/index.html +++ /dev/null @@ -1,612 +0,0 @@ - - - - - - - - 2023-08-21 Vac weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - -
- - - - diff --git a/roadmap/vac/updates/index.html b/roadmap/vac/updates/index.html new file mode 100644 index 000000000..369fdde62 --- /dev/null +++ b/roadmap/vac/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/vac/updates
\ No newline at end of file diff --git a/roadmap/waku/index.html b/roadmap/waku/index.html new file mode 100644 index 000000000..7f18157ff --- /dev/null +++ b/roadmap/waku/index.html @@ -0,0 +1,52 @@ + +Waku Roadmap

Welcome to the Waku Roadmap Overview

2 items under this folder.

\ No newline at end of file diff --git a/roadmap/waku/milestone-waku-10-users.html b/roadmap/waku/milestone-waku-10-users.html new file mode 100644 index 000000000..cef4a15e9 --- /dev/null +++ b/roadmap/waku/milestone-waku-10-users.html @@ -0,0 +1,85 @@ + +Milestone: Waku Network supports 10k Users
%%{ 
+  init: { 
+    'theme': 'base', 
+    'themeVariables': { 
+      'primaryColor': '#BB2528', 
+      'primaryTextColor': '#fff', 
+      'primaryBorderColor': '#7C0000', 
+      'lineColor': '#F8B229', 
+      'secondaryColor': '#006100', 
+      'tertiaryColor': '#fff' 
+    } 
+  } 
+}%%
+gantt
+	dateFormat YYYY-MM-DD 
+	section Scaling
+		10k Users :done, 2023-01-20, 2023-07-31
+
+

Completion Deliverable

+

TBD

+

Epics

+
\ No newline at end of file diff --git a/roadmap/waku/milestone-waku-10-users/index.html b/roadmap/waku/milestone-waku-10-users/index.html deleted file mode 100644 index 7730291a7..000000000 --- a/roadmap/waku/milestone-waku-10-users/index.html +++ /dev/null @@ -1,391 +0,0 @@ - - - - - - - - Milestone: Waku Network supports 10k Users - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - diff --git a/roadmap/waku/milestones-overview.html b/roadmap/waku/milestones-overview.html new file mode 100644 index 000000000..2e9e57efd --- /dev/null +++ b/roadmap/waku/milestones-overview.html @@ -0,0 +1,72 @@ + +Waku Milestones Overview
    +
  • 90% - Waku Network support for 10k users
  • +
  • 80% - Waku Network support for 1MM users
  • +
  • 65% - Restricted-run (light node) protocols are production ready
  • +
  • 60% - Peer management strategy for relay and light nodes are defined and implemented
  • +
  • 10% - Quality processes are implemented for nwaku and go-waku
  • +
  • 80% - Define and track network and community metrics for continuous monitoring improvement
  • +
  • 20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)
  • +
  • 15% - Dogfooding of RLN by platforms has started
  • +
  • 06% - First protocol to incentivize operators has been defined
  • +
\ No newline at end of file diff --git a/roadmap/waku/milestones-overview/index.html b/roadmap/waku/milestones-overview/index.html deleted file mode 100644 index 7025fa695..000000000 --- a/roadmap/waku/milestones-overview/index.html +++ /dev/null @@ -1,380 +0,0 @@ - - - - - - - - Waku Milestones Overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - diff --git a/roadmap/waku/updates/2023-07-24.html b/roadmap/waku/updates/2023-07-24.html new file mode 100644 index 000000000..363b99739 --- /dev/null +++ b/roadmap/waku/updates/2023-07-24.html @@ -0,0 +1,195 @@ + +2023-07-24 Waku weekly

Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.

+
+

Docs

+

Milestone: Foundation for Waku docs (done)

+

achieved:

+
    +
  • overall layout
  • +
  • concept docs
  • +
  • community/showcase pages
  • +
+

Milestone: Foundation for node operator docs (done)

+

achieved:

+
    +
  • nodes overview page
  • +
  • guide for running nwaku (binaries, source, docker)
  • +
  • peer discovery config guide
  • +
  • reference docs for config methods and options
  • +
+

Milestone: Foundation for js-waku docs

+

achieved:

+
    +
  • js-waku overview + installation guide
  • +
  • lightpush + filter guide
  • +
  • store guide
  • +
  • @waku/create-app guide
  • +
+

next:

+
    +
  • improve @waku/react guide
  • +
+

blocker:

+
    +
  • polyfills issue with js-waku
  • +
+

Milestone: Docs general improvement/incorporating feedback (continuous)

+

Milestone: Running nwaku in the cloud

+

Milestone: Add Waku guide to learnweb3.io

+

Milestone: Encryption docs for js-waku

+

Milestone: Advanced node operator doc (postgres, WSS, monitoring, common config)

+

Milestone: Foundation for go-waku docs

+

Milestone: Foundation for rust-waku-bindings docs

+

Milestone: Waku architecture docs

+

Milestone: Waku detailed roadmap and milestones

+

Milestone: Explain RLN

+
+

Eco Dev (WIP)

+

Milestone: EthCC Logos side event organisation (done)

+

Milestone: Community Growth

+

achieved:

+
    +
  • Wrote several bounties, improved template; setup onboarding flow in Discord.
  • +
+

next:

+
    +
  • Review template, publish on GitHub
  • +
+

Milestone: Business Development (continuous)

+

achieved:

+
    +
  • Discussions with various leads in EthCC
  • +
+

next:

+
    +
  • Booking calls with said leads
  • +
+

Milestone: Setting Up Content Strategy for Waku

+

achieved:

+
    +
  • Discussions with Comms Hubs re Waku Blog
  • +
  • expressed needs and intent around future blog post and needed amplification
  • +
  • discuss strategies to onboard/involve non-dev and potential CTAs.
  • +
+

Milestone: Web3Conf (dates)

+

Milestone: DeCompute conf

+
+

Research (WIP)

+

Milestone: Autosharding v1

+

achieved:

+
    +
  • rendezvous hashing (illustrative sketch after this list)
  • +
  • weighting function
  • +
  • updated LIGHTPUSH to handle autosharding
  • +
+
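For the rendezvous hashing and weighting function items above, here is a generic weighted rendezvous (highest-random-weight) hashing sketch in Go. It only illustrates the technique; the shard IDs, weights, and FNV hash choice are assumptions, not Waku's actual autosharding algorithm or parameters:

package main

import (
	"fmt"
	"hash/fnv"
	"math"
)

// Shard is a candidate shard with a relative weight.
type Shard struct {
	ID     string
	Weight float64
}

// hash01 maps (key, shardID) to a float in (0, 1).
func hash01(key, shardID string) float64 {
	h := fnv.New64a()
	h.Write([]byte(shardID))
	h.Write([]byte{0})
	h.Write([]byte(key))
	// +1 avoids taking log(0) below when the hash happens to be zero.
	return (float64(h.Sum64()) + 1) / float64(math.MaxUint64)
}

// pickShard implements weighted rendezvous hashing: every shard gets a score
// derived only from the key and its own ID/weight, and the highest score wins,
// so adding or removing a shard reshuffles as few keys as possible.
func pickShard(key string, shards []Shard) Shard {
	best, bestScore := shards[0], math.Inf(-1)
	for _, s := range shards {
		score := -s.Weight / math.Log(hash01(key, s.ID))
		if score > bestScore {
			best, bestScore = s, score
		}
	}
	return best
}

func main() {
	shards := []Shard{{"shard-0", 1}, {"shard-1", 1}, {"shard-2", 2}}
	fmt.Println(pickShard("/waku/2/example/proto", shards).ID)
}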

next:

+
    +
  • update FILTER & STORE for autosharding
  • +
+
+

nwaku (WIP)

+

Milestone: Postgres integration.

+

achieved:

+
    +
  • nwaku can store messages in a Postgres database
  • +
  • we started to perform stress tests
  • +
+

next:

+
    +
  • Analyse why some messages are not stored during stress tests; it happened with both SQLite and Postgres, so the issue may not be directly related to Store.
  • +
+

Milestone: nwaku as a library (C-bindings)

+

achieved:

+
    +
  • The integration is in progress through N-API framework
  • +
+

next:

+
    +
  • Make the Node.js integration work properly by running the nwaku node in a separate thread.
  • +
+
+

go-waku (WIP)

+
+

js-waku (WIP)

+

Milestone: Peer management

+

achieved:

+
    +
  • spec test for connection manager
  • +
+

Milestone: Peer Exchange

+

Milestone: Static Sharding

+

next:

+
    +
  • start implementation of static sharding in js-waku
  • +
+

Milestone: Developer Experience

+

achieved:

+
    +
  • js-libp2p upgrade to remove usage of polyfills (draft PR)
  • +
+

next:

+
    +
  • merge and release js-libp2p upgrade
  • +
+

Milestone: Waku Relay in the Browser

+
\ No newline at end of file diff --git a/roadmap/waku/updates/2023-07-24/index.html b/roadmap/waku/updates/2023-07-24/index.html deleted file mode 100644 index c0f6c3a9c..000000000 --- a/roadmap/waku/updates/2023-07-24/index.html +++ /dev/null @@ -1,552 +0,0 @@ - - - - - - - - 2023-07-24 Waku weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - diff --git a/roadmap/waku/updates/2023-07-31.html b/roadmap/waku/updates/2023-07-31.html new file mode 100644 index 000000000..f853f028c --- /dev/null +++ b/roadmap/waku/updates/2023-07-31.html @@ -0,0 +1,183 @@ + +2023-07-31 Waku weekly

Docs

+

Milestone: Docs general improvement/incorporating feedback (continuous)

+

next:

+
    +
  • rewrite docs in British English
  • +
+

Milestone: Running nwaku in the cloud

+

next:

+
    +
  • publish guides for Digital Ocean, Oracle, Fly.io
  • +
+
+

Eco Dev (WIP)

+
+

Research

+

Milestone: Detailed network requirements and task breakdown

+

achieved:

+
    +
  • gathering rough network requirements
  • +
+

next:

+
    +
  • detailed task breakdown per milestone and effort allocation
  • +
+

Milestone: Autosharding v1

+

achieved:

+
    +
  • update FILTER & STORE for autosharding
  • +
+

next:

+
    +
  • RFC review & updates
  • +
  • code review & updates
  • +
+
+

nwaku

+

Milestone: nwaku release process automation

+

next:

+
    +
  • setup automation to test/simulate current master to prevent/limit regressions
  • +
  • expand target architectures and platforms for release artifacts (e.g. arm64, Win…)
  • +
+

Milestone: HTTP Rest API for protocols

+

next:

+
    +
  • Filter API added
  • +
  • tests to complete.
  • +
+
+

go-waku

+

Milestone: Increase Maintainability Score. Refer to CodeClimate report

+

next:

+
    +
  • define scope on which issues reported by CodeClimate should be fixed. Initially it should be limited to reducing code complexity and duplication.
  • +
+

Milestone: RLN updates, refer issue.

+

achieved:

+
    +
  • expose set_tree, key_gen, seeded_key_gen, extended_seeded_keygen, recover_id_secret, set_leaf, init_tree_with_leaves, set_metadata, get_metadata and get_leaf
  • +
  • created an example on how to use RLN with go-waku
  • +
  • service node can pass in index to keystore credentials and can verify proofs based on bandwidth usage
  • +
+

next:

+
    +
  • merkle tree batch operations (in progress)
  • +
  • usage of persisted merkle tree db
  • +
+

Milestone: Improve test coverage for functional tests of all protocols. Refer to [CodeClimate report]

+

next:

+
    +
  • define scope on which code sections should be covered by tests
  • +
+

Milestone: C-Bindings

+

next:

+
    +
  • update API to match nwaku’s (by using callbacks instead of strings that require freeing)
  • +
+
+

js-waku

+

Milestone: Peer management

+

achieved:

+
    +
  • extend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface
  • +
+

next:

+
    +
  • fallback improvement for peer connect rejection
  • +
+

Milestone: Peer Exchange

+

next:

+
    +
  • improving robustness of peer-exchange support in the examples
  • +
+

Milestone: Static Sharding

+

achieved:

+
    +
  • WIP implementation of static sharding in js-waku
  • +
+

next:

+
    +
  • investigation around gauging connection loss;
  • +
+

Milestone: Developer Experience

+

achieved:

+
    +
  • improve & update @waku/react
  • +
  • merge and release js-libp2p upgrade
  • +
+

next:

+
    +
  • update examples to latest release + make sure no old/unused packages there
  • +
+

Milestone: Maintenance

+

achieved:

+
    +
  • update to libp2p@0.46.0
  • +
+

next:

+
    +
  • suite of optional tests in pipeline
  • +
+
\ No newline at end of file diff --git a/roadmap/waku/updates/2023-07-31/index.html b/roadmap/waku/updates/2023-07-31/index.html deleted file mode 100644 index 4f6ba3355..000000000 --- a/roadmap/waku/updates/2023-07-31/index.html +++ /dev/null @@ -1,539 +0,0 @@ - - - - - - - - 2023-07-31 Waku weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - diff --git a/roadmap/waku/updates/2023-08-06.html b/roadmap/waku/updates/2023-08-06.html new file mode 100644 index 000000000..6285531de --- /dev/null +++ b/roadmap/waku/updates/2023-08-06.html @@ -0,0 +1,167 @@ + +2023-08-06 Waku weekly

Milestones for current works are created and used. Next steps are:

+
    +
  1. Refine scope of research work for rest of the year and create matching milestones for research and waku clients
  2. Review work not coming from research and setting dates
Note that format matches the Notion page but can be changed easily as it’s scripted
+

nwaku

+

Release Process Improvements {E:2023-qa}

+
    +
  • achieved: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible
  • +
  • next: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images
  • +
  • blocker:
  • +
+

PostgreSQL {E:2023-10k-users}

+
    +
  • achieved: Docker compose with nwaku + postgres + prometheus + grafana + postgres_exporter 3
  • +
  • next: Carry on with stress testing
  • +
+

Autosharding v1 {E:2023-1mil-users}

+
    +
  • achieved: feedback/update cycles for FILTER & LIGHTPUSH
  • +
  • next: New fleet, updating ENR from live subscriptions and merging
  • +
  • blocker: Architecturally it seems difficult to send the info to Discv5 from JSONRPC for the Waku app.
  • +
+

Move Waku v1 and Waku-Bridge to new repos {E:2023-qa}

+
    +
  • achieved: Removed v1 and wakubridge code from nwaku repo
  • +
  • next: Remove references to v2 from nwaku directory structure and documents
  • +
+

nwaku c-bindings {E:2023-many-platforms}

+
    +
  • achieved: +
      +
    • Moved the Waku execution into a secondary working thread. Essential for NodeJs.
    • +
    • Adapted the NodeJs example to use the libwaku with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing.
    • +
    +
  • +
  • next: start applying the thread-safety recommendations 1878
  • +
+

HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs {E:2023-many-platforms}

+
    +
  • achieved: Legacy Filter - v1 - interface Rest Api support added.
  • +
  • next: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node.
  • +
+
+

js-waku

+

Peer Exchange is supported and used by default {E:2023-light-protocols}

+
    +
  • achieved: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example
  • +
  • next: saving successfully connected PX peers to local storage for easier connections on reload
  • +
+

Waku Relay scalability in the Browser {NO EPIC}

+
    +
  • achieved: draft of direct browser-browser RTC example 260
  • +
  • next: improve the example (connection re-usage), work on contentTopic based RTC example
  • +
+
+

go-waku

+

C-Bindings Improvement: Callbacks and Duplications {E:2023-many-platforms}

+
    +
  • achieved: updated c-bindings to use callbacks
  • +
  • next: refactor v1 encoding functions and update RFC
  • +
+

Improve Test Coverage {E:2023-qa}

+
    +
  • achieved: Enabled -race flag and ran all unit tests to identify data races (illustrative example after this list).
  • +
  • next: Fix issues reported by the data race detector tool
  • +
+
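To illustrate what the -race run mentioned above detects, here is a deliberately racy, self-contained Go snippet: `go run -race` (or `go test -race`) reports the unsynchronized counter update, while a normal run may just print a wrong total. Illustration only, not go-waku code:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	counter := 0 // shared by many goroutines without synchronization: a data race

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // unsynchronized read-modify-write; the race detector flags this
		}()
	}
	wg.Wait()
	fmt.Println("counter =", counter) // often != 100 without a mutex or atomic add
}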

RLN: Post-Testnet3 Improvements {E:2023-rln}

+
    +
  • achieved: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistence configuration settings
  • +
  • next: resume onchain sync from persisted tree db
  • +
+

Introduce Peer Management {E:2023-peer-mgmt}

+
    +
  • achieved: Basic peer management to ensure standard in/out ratio for relay peers (illustrative sketch after this list).
  • +
  • next: add service slots to peer manager
  • +
+
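A rough Go sketch of the in/out ratio idea behind the peer-management item above: track inbound and outbound relay connections and gate new inbound peers. The type, method names, and 60% ratio are hypothetical choices for illustration, not go-waku's actual implementation:

package main

import "fmt"

// PeerManager tracks inbound/outbound relay connections and enforces a cap on
// the share of inbound peers, so a node keeps dialing out rather than being
// filled up entirely by incoming connections.
type PeerManager struct {
	inbound, outbound int
	maxPeers          int
	maxInboundRatio   float64 // e.g. 0.6 = at most 60% of slots for inbound peers
}

// CanAcceptInbound reports whether one more inbound connection keeps the node
// within both the total peer cap and the inbound ratio.
func (pm *PeerManager) CanAcceptInbound() bool {
	total := pm.inbound + pm.outbound
	if total >= pm.maxPeers {
		return false
	}
	return float64(pm.inbound+1)/float64(total+1) <= pm.maxInboundRatio
}

func main() {
	pm := &PeerManager{inbound: 5, outbound: 3, maxPeers: 50, maxInboundRatio: 0.6}
	fmt.Println("accept new inbound peer:", pm.CanAcceptInbound())
}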
+

Eco Dev

+

Aug 2023 {E:2023-eco-growth}

+
    +
  • achieved: production of swags and marketing collaterals for web3conf completed
  • +
  • next: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.
  • +
+
+

Docs

+

Advanced docs for js-waku {E:2023-eco-growth}

+
    +
  • next: create guide on @waku/react and debugging js-waku web apps
  • +
+

Docs general improvement/incorporating feedback (2023) {E:2023-eco-growth}

+
    +
  • achieved: rewrote the docs in UK English
  • +
  • next: update docs terms, announce js-waku docs
  • +
+

Foundation of js-waku docs {E:2023-eco-growth}

+

achieved: added guide on js-waku bootstrapping

+
+

Research

+

1.1 Network requirements and task breakdown {E:2023-1mil-users}

+
    +
  • achieved: Set up project management tools; set the number of shards to 8; some conversations on RLN memberships
  • +
  • next: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.
  • +
+
\ No newline at end of file diff --git a/roadmap/waku/updates/2023-08-06/index.html b/roadmap/waku/updates/2023-08-06/index.html deleted file mode 100644 index 27640fae6..000000000 --- a/roadmap/waku/updates/2023-08-06/index.html +++ /dev/null @@ -1,517 +0,0 @@ - - - - - - - - 2023-08-06 Waku weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - diff --git a/roadmap/waku/updates/2023-08-14.html b/roadmap/waku/updates/2023-08-14.html new file mode 100644 index 000000000..d5d471ab4 --- /dev/null +++ b/roadmap/waku/updates/2023-08-14.html @@ -0,0 +1,148 @@ + +2023-08-14 Waku weekly

2023-08-14 Waku weekly

+
+

Epics

+

Waku Network Can Support 10K Users {E:2023-10k-users}

+

All software has been delivered. Pending items are:

+
    +
  • Running stress testing on PostgreSQL to confirm performance gain 1894
  • +
  • Setting up a staging fleet for Status to try static sharding
  • +
  • Running simulations for Store protocol: commitment and probably move this to 1mil epic
  • +
+
+

Eco Dev

+

Aug 2023 {E:2023-eco-growth}

+
    +
  • achieved: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub
  • +
  • next: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning
  • +
  • blocker: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel
  • +
+
+

Docs

+

Advanced docs for js-waku

+
    +
  • next: document notes/recommendations for NodeJS, begin docs on js-waku encryption
  • +
+
+

nwaku

+

Release Process Improvements {E:2023-qa}

+
    +
  • achieved: minor CI fixes and improvements
  • +
  • next: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images
  • +
+

PostgreSQL {E:2023-10k-users}

+
    +
  • achieved: Learned that the insertion rate is constrained by the relay protocol. i.e. the maximum insert rate is limited by relay so I couldn’t push the “insert” operation to a limit from a Postgres point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the relay protocol doesn’t process all of them.
  • +
  • next: Carry on with stress testing. Analyze the performance differences between Postgres and SQLite regarding the read operations.
  • +
+

Autosharding v1 {E:2023-1mil-users}

+
    +
  • achieved: many feedback/update cycles for FILTER, LIGHTPUSH, STORE & RFC
  • +
  • next: updating ENR for live subscriptions
  • +
+

HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs {E:2023-many-platforms}

+
    +
  • achieved: Legacy Filter - v1 - interface Rest Api support added.
  • +
  • next: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. Add more tests.
  • +
+
+

js-waku

+

Maintenance {E:2023-qa}

+
    +
  • achieved: upgrade libp2p & chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict
  • +
+

Developer Experience (2023) {E:2023-eco-growth}

+
    +
  • achieved: non blocking pipeline step (1411)
  • +
+

Peer Exchange is supported and used by default {E:2023-light-protocols}

+
    +
  • achieved: close the “fallback mechanism for peer rejections”, refactor peer-exchange compliance test
  • +
  • next: peer-exchange to be included with default discovery, action peer-exchange browser feedback
  • +
+
+

go-waku

+

Maintenance {E:2023-qa}

+
    +
  • achieved: improved keep-alive logic for identifying if the machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand (a generic VACUUM sketch follows after this list)
  • +
+
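The vacuum item above boils down to issuing a VACUUM statement against the message store so space from deleted rows is reclaimed. A generic Go sketch; the mattn/go-sqlite3 driver and the store.db path are assumptions for illustration, and go-waku's own code differs:

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver chosen only for this example
)

func main() {
	db, err := sql.Open("sqlite3", "store.db") // hypothetical database file
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// VACUUM rewrites the database file, reclaiming space left by deleted rows.
	// PostgreSQL exposes the same idea through its own VACUUM command.
	if _, err := db.Exec("VACUUM"); err != nil {
		log.Fatal(err)
	}
	log.Println("vacuum completed")
}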

C-Bindings Improvement: Callbacks and Duplications {E:2023-many-platforms}

+
    +
  • achieved: PR for updating the RFC to use callbacks, and refactored the encoding functions
  • +
+

Improve Test Coverage {E:2023-qa}

+
    +
  • achieved: Fixed issues reported by the data race detector tool.
  • +
  • next: identify areas where test coverage needs improvement.
  • +
+

RLN: Post-Testnet3 Improvements {E:2023-rln}

+
    +
  • achieved: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.
  • +
  • next: interop with nwaku
  • +
+

Introduce Peer Management {E:2023-peer-mgmt}

+
    +
  • achieved: add service slots to peer manager.
  • +
  • next: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections
  • +
+
\ No newline at end of file diff --git a/roadmap/waku/updates/2023-08-14/index.html b/roadmap/waku/updates/2023-08-14/index.html deleted file mode 100644 index 3a15ce131..000000000 --- a/roadmap/waku/updates/2023-08-14/index.html +++ /dev/null @@ -1,495 +0,0 @@ - - - - - - - - 2023-08-14 Waku weekly - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# 2023-08-14 Waku weekly

-
-

# Epics

-

- -Waku Network Can Support 10K Users {E:2023-10k-users}

-

All software has been delivered. Pending items are:

- -
-

# Eco Dev

-

- -Aug 2023 {E:2023-eco-growth}

-
    -
  • achieved: web3conf talk, swags, 2 side events, twitter promotions, requested marketing collateral from commshub
  • -
  • next: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning
  • -
  • blocker: was blocked on infra for hosting the Next.js app for Waku metrics, but is migrating to SSR and hosting on Vercel instead
  • -
-
-

# Docs

-

- -Advanced docs for js-waku

-
    -
  • next: document notes/recommendations for NodeJS, begin docs on js-waku encryption
  • -
-
-

# nwaku

-

- -Release Process Improvements {E:2023-qa}

-
    -
  • achieved: minor CI fixes and improvements
  • -
  • next: document how to run PR-built images in waku-simulator, add Linux arm64 binaries and images
  • -
-

- -PostgreSQL {E:2023-10k-users}

-
    -
  • achieved: Learned that the insertion rate is constrained by the relay protocol. i.e. the maximum insert rate is limited by relay so I couldn’t push the “insert” operation to a limit from a Postgres point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the relay protocol doesn’t process all of them.
  • -
  • next: Carry on with stress testing. Analyze the performance differences between Postgres and SQLite regarding the read operations.
  • -
-

- -Autosharding v1 {E:2023-1mil-users}

-
    -
  • achieved: many feedback/update cycles for FILTER, LIGHTPUSH, STORE & RFC
  • -
  • next: updating ENR for live subscriptions
  • -
-

- -HTTP REST API: Store, Filter, Lightpush, Admin and Private APIs {E:2023-many-platforms}

-
    -
  • achieved: Legacy Filter - v1 - interface Rest Api support added.
  • -
  • next: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. Add more tests.
  • -
-
-

# js-waku

-

- -Maintenance {E:2023-qa}

-
    -
  • achieved: upgrade libp2p & chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict
  • -
-

- -Developer Experience (2023) {E:2023-eco-growth}

- -

- -Peer Exchange is supported and used by default {E:2023-light-protocols}

-
    -
  • achieved: close the “fallback mechanism for peer rejections”, refactor peer-exchange compliance test
  • -
  • next: peer-exchange to be included with default discovery, action peer-exchange browser feedback
  • -
-
-

# go-waku

-

- -Maintenance {E:2023-qa}

-
    -
  • achieved: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand
  • -
-

- -C-Bindings Improvement: Callbacks and Duplications {E:2023-many-platforms}

-
    -
  • achieved: PR for updating the RFC to use callbacks, and refactored the encoding functions
  • -
-

- -Improve Test Coverage {E:2023-qa}

-
    -
  • achieved: Fixed issues reported by the data race detector tool.
  • -
  • next: identify areas where test coverage needs improvement.
  • -
-

- -RLN: Post-Testnet3 Improvements {E:2023-rln}

-
    -
  • achieved: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.
  • -
  • next: interop with nwaku
  • -
-

- -Introduce Peer Management {E:2023-peer-mgmt}

-
    -
  • achieved: add service slots to peer manager.
  • -
  • next: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections
  • -
-
- - - - diff --git a/roadmap/waku/updates/index.html b/roadmap/waku/updates/index.html new file mode 100644 index 000000000..7696f367c --- /dev/null +++ b/roadmap/waku/updates/index.html @@ -0,0 +1,52 @@ + +Folder: roadmap/waku/updates
\ No newline at end of file diff --git a/showcase.html b/showcase.html new file mode 100644 index 000000000..5bd3b39f2 --- /dev/null +++ b/showcase.html @@ -0,0 +1,77 @@ + +Quartz Showcase \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index bf3f50a86..92e5dfddd 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1,133 +1,133 @@ - - - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - 2023-08-21T00:00:00+00:00 + + https://roadmap.logos.co/authoring-content + 2023-08-22T08:20:28.274Z - https://roadmap.logos.co/roadmap/ - 2023-08-21T00:00:00+00:00 + https://roadmap.logos.co/build + 2023-08-22T08:20:28.274Z - https://roadmap.logos.co/tags/ - 2023-08-21T00:00:00+00:00 + https://roadmap.logos.co/configuration + 2023-08-22T08:20:28.274Z - https://roadmap.logos.co/tags/vac-updates/ - 2023-08-21T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/tags/milestones/ - 2023-08-17T16:15:17-04:00 - - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/tags/nomos-updates/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/tags/waku-updates/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/tags/TEAM-updates/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/tags/codex-updates/ - 2023-08-17T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - 2023-08-09T00:00:00+00:00 - - https://roadmap.logos.co/tags/acid-updates/ - 2023-08-09T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - 2023-08-08T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/tags/milestones-overview/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/milestones-overview/ - 2023-08-07T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/tags/ilab-updates/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - 2023-08-04T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - 
2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - 2023-08-04T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - 2023-08-03T00:00:00+00:00 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - 2023-07-16T00:00:00+00:00 - - https://roadmap.logos.co/categories/ + https://roadmap.logos.co/hosting + 2023-08-22T08:20:28.274Z https://roadmap.logos.co/ - 2023-08-21T11:49:38-04:00 + 2023-08-22T08:20:28.274Z - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - 2023-08-17T16:15:17-04:00 - - + https://roadmap.logos.co/index_default + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/layout + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/migrating-from-Quartz-3 + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/philosophy + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/showcase + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/upgrading + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/tags/component + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/roadmap/acid/milestones-overview + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/codex/milestones-overview + 2023-08-07T00:00:00.000Z + + https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/nomos/milestones-overview + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/ + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/roadmap/vac/milestones-overview + 2023-08-17T20:15:32.290Z + + https://roadmap.logos.co/roadmap/waku/ + 2023-08-22T08:20:28.274Z + + https://roadmap.logos.co/roadmap/waku/milestone-waku-10-users + 2023-08-07T00:00:00.000Z + + https://roadmap.logos.co/roadmap/waku/milestones-overview + 2023-08-07T00:00:00.000Z + + https://roadmap.logos.co/roadmap/acid/updates/2023-08-02 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/acid/updates/2023-08-09 + 2023-08-09T00:00:00.000Z + + https://roadmap.logos.co/roadmap/codex/updates/2023-07-21 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/codex/updates/2023-08-01 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/codex/updates/2023-08-11 + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11 + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07 + 2023-08-07T00:00:00.000Z + + https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14 + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-07-10 + 2023-07-16T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-07-17 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-07-24 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-07-31 + 2023-08-03T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-08-07 + 2023-08-07T00:00:00.000Z + + 
https://roadmap.logos.co/roadmap/vac/updates/2023-08-14 + 2023-08-17T00:00:00.000Z + + https://roadmap.logos.co/roadmap/vac/updates/2023-08-21 + 2023-08-21T00:00:00.000Z + + https://roadmap.logos.co/roadmap/waku/updates/2023-07-24 + 2023-08-04T00:00:00.000Z + + https://roadmap.logos.co/roadmap/waku/updates/2023-07-31 + 2023-08-04T00:00:00.000Z + + https://roadmap.logos.co/roadmap/waku/updates/2023-08-06 + 2023-08-08T00:00:00.000Z + + https://roadmap.logos.co/roadmap/waku/updates/2023-08-14 + 2023-08-17T00:00:00.000Z + \ No newline at end of file diff --git a/static/contentIndex.json b/static/contentIndex.json new file mode 100644 index 000000000..4198b4841 --- /dev/null +++ b/static/contentIndex.json @@ -0,0 +1 @@ +{"authoring-content":{"title":"Authoring Content","links":["","build","callouts","wikilinks","private-pages"],"tags":[],"content":"All of the content in your Quartz should go in the /content folder. The content for the home page of your Quartz lives in content/index.md. If you’ve setup Quartz already, this folder should already be initailized. Any Markdown in this folder will get processed by Quartz.\nIt is recommended that you use Obsidian as a way to edit and maintain your Quartz. It comes with a nice editor and graphical interface to preview, edit, and link your local files and attachments.\nGot everything setup? Let’s build and preview your Quartz locally!\nSyntax §\nAs Quartz uses Markdown files as the main way of writing content, it fully supports Markdown syntax. By default, Quartz also ships with a few syntax extensions like Github Flavored Markdown (footnotes, strikethrough, tables, tasklists) and Obsidian Flavored Markdown (callouts, wikilinks).\nAdditionally, Quartz also allows you to specify additional metadata in your notes called frontmatter.\ncontent/note.md---\ntitle: Example Title\ndraft: false\ntags:\n - example-tag\n---\n \nThe rest of your content lives here. You can use **Markdown** here :)\nSome common frontmatter fields that are natively supported by Quartz:\n\ntitle: Title of the page. If it isn’t provided, Quartz will use the name of the file as the title.\naliases: Other names for this note. This is a list of strings.\ndraft: Whether to publish the page or not. This is one way to make pages private in Quartz.\ndate: A string representing the day the note was published. Normally uses YYYY-MM-DD format.\n\nSyncing your Content §\nWhen you’re Quartz is at a point you’re happy with, you can save your changes to GitHub by doing npx quartz sync.\n\n\n \n Flags and options \n \n \nFor full help options, you can run npx quartz sync --help.\nMost of these have sensible defaults but you can override them if you have a custom setup:\n\n-d or --directory: the content folder. This is normally just content\n-v or --verbose: print out extra logging information\n--commit or --no-commit: whether to make a git commit for your changes\n--push or --no-push: whether to push updates to your GitHub fork of Quartz\n--pull or --no-pull: whether to try and pull in any updates from your GitHub fork (i.e. from other devices) before pushing\n\n"},"build":{"title":"Building your Quartz","links":[""],"tags":[],"content":"Once you’ve initialized Quartz, let’s see what it looks like locally:\nnpx quartz build --serve\nThis will start a local web server to run your Quartz on your computer. 
Open a web browser and visit http://localhost:8080/ to view it.\n\n\n \n Flags and options \n \n \nFor full help options, you can run npx quartz build --help.\nMost of these have sensible defaults but you can override them if you have a custom setup:\n\n-d or --directory: the content folder. This is normally just content\n-v or --verbose: print out extra logging information\n-o or --output: the output folder. This is normally just public\n--serve: run a local hot-reloading server to preview your Quartz\n--port: what port to run the local preview server on\n--concurrency: how many threads to use to parse notes\n\n"},"configuration":{"title":"Configuration","links":["layout","RSS-Feed","SPA-Routing","popover-previews","hosting","private-pages","graph-view","syntax-highlighting","making-plugins","Latex"],"tags":[],"content":"Quartz is meant to be extremely configurable, even if you don’t know any coding. Most of the configuration you should need can be done by just editing quartz.config.ts or changing the layout in quartz.layout.ts.\n\n\n \n Tip \n \n \nIf you edit Quartz configuration using a text-editor that has TypeScript language support like VSCode, it will warn you when you you’ve made an error in your configuration, helping you avoid configuration mistakes!\n\nThe configuration of Quartz can be broken down into two main parts:\nquartz.config.tsconst config: QuartzConfig = {\n configuration: { ... },\n plugins: { ... },\n}\nGeneral Configuration §\nThis part of the configuration concerns anything that can affect the whole site. The following is a list breaking down all the things you can configure:\n\npageTitle: title of the site. This is also used when generating the RSS Feed for your site.\nenableSPA: whether to enable SPA Routing on your site.\nenablePopovers: whether to enable popover previews on your site.\nanalytics: what to use for analytics on your site. Values can be\n\nnull: don’t use analytics;\n{ provider: 'plausible' }: use Plausible, a privacy-friendly alternative to Google Analytics; or\n{ provider: 'google', tagId: <your-google-tag> }: use Google Analytics\n\n\nbaseUrl: this is used for sitemaps and RSS feeds that require an absolute URL to know where the canonical ‘home’ of your site lives. This is normally the deployed URL of your site (e.g. quartz.jzhao.xyz for this site). Do not include the protocol (i.e. https://) or any leading or trailing slashes.\n\nThis should also include the subpath if you are hosting on GitHub pages without a custom domain. For example, if my repository is jackyzha0/quartz, GitHub pages would deploy to https://jackyzha0.github.io/quartz and the baseUrl would be jackyzha0.github.io/quartz\nNote that Quartz 4 will avoid using this as much as possible and use relative URLs whenever it can to make sure your site works no matter where you end up actually deploying it.\n\n\nignorePatterns: a list of glob patterns that Quartz should ignore and not search through when looking for files inside the content folder. See private pages for more details.\ntheme: configure how the site looks.\n\ntypography: what fonts to use. 
Any font available on Google Fonts works here.\n\nheader: Font to use for headers\ncode: Font for inline and block quotes.\nbody: Font for everything\n\n\ncolors: controls the theming of the site.\n\nlight: page background\nlightgray: borders\ngray: graph links, heavier borders\ndarkgray: body text\ndark: header text and icons\nsecondary: link colour, current graph node\ntertiary: hover states and visited graph nodes\nhighlight: internal link background, highlighted text, highlighted lines of code\n\n\n\n\n\nPlugins §\nYou can think of Quartz plugins as a series of transformations over content.\n\nplugins: {\n transformers: [...],\n filters: [...],\n emitters: [...],\n}\n\nTransformers map over content (e.g. parsing frontmatter, generating a description)\nFilters filter content (e.g. filtering out drafts)\nEmitters reduce over content (e.g. creating an RSS feed or pages that list all files with a specific tag)\n\nBy adding, removing, and reordering plugins from the tranformers, filters, and emitters fields, you can customize the behaviour of Quartz.\n\n\n \n Note \n \n \nEach node is modified by every transformer in order. Some transformers are position-sensitive so you may need to take special note of whether it needs come before or after any other particular plugins.\n\nAdditionally, plugins may also have their own configuration settings that you can pass in. For example, the Latex plugin allows you to pass in a field specifying the renderEngine to choose between Katex and MathJax.\ntransformers: [\n Plugin.FrontMatter(), // uses default options\n Plugin.Latex({ renderEngine: "katex" }), // specify some options\n]\nIf you’d like to make your own plugins, read the guide on making plugins for more information."},"hosting":{"title":"Hosting","links":["RSS-Feed","configuration"],"tags":[],"content":"Quartz effectively turns your Markdown files and other resources into a bundle of HTML, JS, and CSS files (a website!).\nHowever, if you’d like to publish your site to the world, you need a way to host it online. This guide will detail how to deploy with either GitHub Pages or Cloudflare pages but any service that allows you to deploy static HTML should work as well (e.g. Netlify, Replit, etc.)\n\n\n \n Tip \n \n \nSome Quartz features (like RSS Feed and sitemap generation) require baseUrl to be configured properly in your configuration to work properly. Make sure you set this before deploying!\n\nCloudflare Pages §\n\nLog in to the Cloudflare dashboard and select your account.\nIn Account Home, select Workers & Pages > Create application > Pages > Connect to Git.\nSelect the new GitHub repository that you created and, in the Set up builds and deployments section, provide the following information:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nConfiguration optionValueProduction branchv4Framework presetNoneBuild commandnpx quartz buildBuild output directorypublic\nPress “Save and deploy” and Cloudflare should have a deployed version of your site in about a minute. 
Then, every time you sync your Quartz changes to GitHub, your site should be updated.\nTo add a custom domain, check out Cloudflare’s documentation.\nGitHub Pages §\nLike Quartz 3, you can deploy the site generated by Quartz 4 via GitHub Pages.\nIn your local Quartz, create a new file quartz/.github/workflows/deploy.yml.\nquartz/.github/workflows/deploy.ymlname: Deploy Quartz site to GitHub Pages\n \non:\n push:\n branches:\n - v4\n \npermissions:\n contents: read\n pages: write\n id-token: write\n \nconcurrency:\n group: "pages"\n cancel-in-progress: false\n \njobs:\n build:\n runs-on: ubuntu-22.04\n steps:\n - uses: actions/checkout@v3\n with:\n fetch-depth: 0 # Fetch all history for git info\n - uses: actions/setup-node@v3\n with:\n node-version: 18.14\n - name: Install Dependencies\n run: npm ci\n - name: Build Quartz\n run: npx quartz build\n - name: Upload artifact\n uses: actions/upload-pages-artifact@v2\n with:\n path: public\n \n deploy:\n needs: build\n environment:\n name: github-pages\n url: ${{ steps.deployment.outputs.page_url }}\n runs-on: ubuntu-latest\n steps:\n - name: Deploy to GitHub Pages\n id: deployment\n uses: actions/deploy-pages@v2\nThen:\n\nHead to “Settings” tab of your forked repository and in the sidebar, click “Pages”. Under “Source”, select “GitHub Actions”.\nCommit these changes by doing npx quartz sync. This should deploy your site to <github-username>.github.io/<repository-name>.\n\n\n\n \n Tip \n \n \nIf you get an error about not being allowed to deploy to github-pages due to environment protection rules, make sure you remove any existing GitHub pages environments.\nYou can do this by going to your Settings page on your GitHub fork and going to the Environments tab and pressing the trash icon. The GitHub action will recreate the environment for you correctly the next time you sync your Quartz.\n\nCustom Domain §\nHere’s how to add a custom domain to your GitHub pages deployment.\n\nHead to the “Settings” tab of your forked repository.\nIn the “Code and automation” section of the sidebar, click “Pages”.\nUnder “Custom Domain”, type your custom domain and click “Save”.\nThis next step depends on whether you are using an apex domain (example.com) or a subdomain (subdomain.example.com).\n\nIf you are using an apex domain, navigate to your DNS provider and create an A record that points your apex domain to GitHub’s name servers which have the following IP addresses:\n\n185.199.108.153\n185.199.109.153\n185.199.110.153\n185.199.111.153\n\n\nIf you are using a subdomain, navigate to your DNS provider and create a CNAME record that points your subdomain to the default domain for your site. For example, if you want to use the subdomain quartz.example.com for your user site, create a CNAME record that points quartz.example.com to <github-username>.github.io.\n\n\n\nThe above shows a screenshot of Google Domains configured for both jzhao.xyz (an apex domain) and quartz.jzhao.xyz (a subdomain).\nSee the GitHub documentation for more detail about how to setup your own custom domain with GitHub Pages.\n\n\n \n Why aren't my changes showing up? \n \n \nThere could be many different reasons why your changes aren’t showing up but the most likely reason is that you forgot to push your changes to GitHub.\nMake sure you save your changes to Git and sync it to GitHub by doing npx quartz sync. 
This will also make sure to pull any updates you may have made from other devices so you have them locally.\n"},"index":{"title":"","links":["roadmap/waku/milestones-overview","tags/waku-updates","roadmap/waku-updates","roadmap/codex/milestones-overview","tags/codex-updates","roadmap/nomos/milestones-overview","tags/nomos-updates","roadmap/vac/milestones-overview","tags/vac-updates","roadmap/innovation_lab/milestones-overview","tags/ilab-updates","roadmap/acid/milestones-overview","tags/acid-updates"],"tags":[],"content":"This site attempts to inform the previous, current, and future work required to fulfill the requirements of the projects under the Logos Collective, a complete tech stack that provides infrastructure for the self-sovereign network state. To learn more about the motivation, please visit the Logos Collective Site.\nNavigation §\nWaku §\n\nMilestones\nweekly updates\ntest\n\nCodex §\n\nMilestones\nweekly updates\n\nNomos §\n\nMilestones\nweekly updates\n\nVac §\n\nMilestones\nweekly updates\n\nInnovation Lab §\n\nMilestones\nweekly updates\n\nComms (Acid Info) §\n\nMilestones\nweekly updates\n"},"index_default":{"title":"Welcome to Quartz 4","links":["showcase","authoring-content","configuration","layout","build","hosting","migrating-from-Quartz-3","Obsidian-compatibility","full-text-search","graph-view","wikilinks","backlinks","Latex","syntax-highlighting","popover-previews","features","creating-components","SPA-Routing","making-plugins","philosophy","architecture","upgrading"],"tags":[],"content":"Quartz is a fast, batteries-included static-site generator that transforms Markdown content into fully functional websites. Thousands of students, developers, and teachers are already using Quartz to publish personal notes, wikis, and digital gardens to the web.\n🪴 Get Started §\nQuartz requires at least Node v18.14 to function correctly. Ensure you have this installed on your machine before continuing.\nThen, in your terminal of choice, enter the following commands line by line:\ngit clone https://github.com/jackyzha0/quartz.git\ncd quartz\nnpm i\nnpx quartz create\nThis will guide you through initializing your Quartz with content. Once you’ve done so, see how to:\n\nAuthor content in Quartz\nConfigure Quartz’s behaviour\nChange Quartz’s layout\nBuild and preview Quartz\nHost Quartz online\n\n\n\n \n Info \n \n \nComing from Quartz 3? See the migration guide for the differences between Quartz 3 and Quartz 4 and how to migrate.\n\n🔧 Features §\n\nObsidian compatibility, full-text search, graph view, wikilinks, backlinks, Latex, syntax highlighting, popover previews, and many more right out of the box\nHot-reload for both configuration and content\nSimple JSX layouts and page components\nRidiculously fast page loads and tiny bundle sizes\nFully-customizable parsing, filtering, and page generation through plugins\n\nFor a comprehensive list of features, visit the features page. You can read more about the why behind these features on the philosophy page and a technical overview on the architecture page.\n🚧 Troubleshooting + Updating §\nHaving trouble with Quartz? Try searching for your issue using the search feature. 
If you haven’t already, upgrade to the newest version of Quartz to see if this fixes your issue.\nIf you’re still having trouble, feel free to submit an issue if you feel you found a bug or ask for help in our Discord Community."},"layout":{"title":"Layout","links":["tags/component","creating-components","configuration"],"tags":[],"content":"Certain emitters may also output HTML files. To enable easy customization, these emitters allow you to fully rearrange the layout of the page. The default page layouts can be found in quartz.layout.ts.\nEach page is composed of multiple different sections which contain QuartzComponents. The following code snippet lists all of the valid sections that you can add components to:\nquartz/cfg.tsexport interface FullPageLayout {\n head: QuartzComponent // single component\n header: QuartzComponent[] // laid out horizontally\n beforeBody: QuartzComponent[] // laid out vertically\n pageBody: QuartzComponent // single component\n left: QuartzComponent[] // vertical on desktop, horizontal on mobile\n right: QuartzComponent[] // vertical on desktop, horizontal on mobile\n footer: QuartzComponent // single component\n}\nThese correspond to following parts of the page:\n\n\n\n \n Note \n \n \nThere are two additional layout fields that are not shown in the above diagram.\n\nhead is a single component that renders the <head> tag in the HTML. This doesn’t appear visually on the page and is only is responsible for metadata about the document like the tab title, scripts, and styles.\nheader is a set of components that are laid out horizontally and appears before the beforeBody section. This enables you to replicate the old Quartz 3 header bar where the title, search bar, and dark mode toggle. By default, Quartz 4 doesn’t place any components in the header.\n\n\nQuartz components, like plugins, can take in additional properties as configuration options. If you’re familiar with React terminology, you can think of them as Higher-order Components.\nSee a list of all the components for all available components along with their configuration options. You can also checkout the guide on creating components if you’re interested in further customizing the behaviour of Quartz.\nStyle §\nMost meaningful style changes like colour scheme and font can be done simply through the general configuration options. However, if you’d like to make more involved style changes, you can do this by writing your own styles. Quartz 4, like Quartz 3, uses Sass for styling.\nYou can see the base style sheet in quartz/styles/base.scss and write your own in quartz/styles/custom.scss.\n\n\n \n Note \n \n \nSome components may provide their own styling as well! For example, quartz/components/Darkmode.tsx imports styles from quartz/components/styles/darkmode.scss. If you’d like to customize styling for a specific component, double check the component definition to see how its styles are defined.\n"},"migrating-from-Quartz-3":{"title":"Migrating from Quartz 3","links":["configuration","hosting","folder-and-tag-listings","creating-components"],"tags":[],"content":"As you already have Quartz locally, you don’t need to fork or clone it again. 
Simply just checkout the alpha branch, install the dependencies, and import your old vault.\ngit fetch\ngit checkout v4\ngit pull upstream v4\nnpm i\nnpx quartz create\nIf you get an error like fatal: 'upstream' does not appear to be a git repository, make sure you add upstream as a remote origin:\ngit remote add upstream https://github.com/jackyzha0/quartz.git\nWhen running npx quartz create, you will be prompted as to how to initialize your content folder. Here, you can choose to import or link your previous content folder and Quartz should work just as you expect it to.\n\n\n \n Note \n \n \nIf the existing content folder you’d like to use is at the same path on a different branch, clone the repo again somewhere at a different path in order to use it.\n\nKey changes §\n\nRemoving Hugo and hugo-obsidian: Hugo worked well for earlier versions of Quartz but it also made it hard for people outside of the Golang and Hugo communities to fully understand what Quartz was doing under the hood and be able to properly customize it to their needs. Quartz 4 now uses a Node-based static-site generation process which should lead to a much more helpful error messages and an overall smoother user experience.\nFull-hot reload: The many rough edges of how hugo-obsidian integrated with Hugo meant that watch mode didn’t re-trigger hugo-obsidian to update the content index. This lead to a lot of weird cases where the watch mode output wasn’t accurate. Quartz 4 now uses a cohesive parse, filter, and emit pipeline which gets run on every change so hot-reloads are always accurate.\nReplacing Go template syntax with JSX: Quartz 3 used Go templates to create layouts for pages. However, the syntax isn’t great for doing any sort of complex rendering (like text processing) and it got very difficult to make any meaningful layout changes to Quartz 3. Quartz 4 uses an extension of JavaScript syntax called JSX which allows you to write layout code that looks like HTML in JavaScript which is significantly easier to understand and maintain.\nA new extensible configuration and plugin system: Quartz 3 was hard to configure without technical knowledge of how Hugo’s partials worked. Extensions were even hard to make. Quartz 4’s configuration and plugin system is designed to be extended by users while making updating to new versions of Quartz easy.\n\nThings to update §\n\nYou will need to update your deploy scripts. See the hosting guide for more details.\nEnsure that your default branch on GitHub is updated from hugo to v4.\nFolder and tag listings have also changed.\n\nFolder descriptions should go under content/<folder-name>/index.md where <folder-name> is the name of the folder.\nTag descriptions should go under content/tags/<tag-name>.md where <tag-name> is the name of the tag.\n\n\nSome HTML layout may not be the same between Quartz 3 and Quartz 4. If you depended on a particular HTML hierarchy or class names, you may need to update your custom CSS to reflect these changes.\nIf you customized the layout of Quartz 3, you may need to translate these changes from Go templates back to JSX as Quartz 4 no longer uses Hugo. For components, check out the guide on creating components for more details on this.\n"},"philosophy":{"title":"Philosophy of Quartz","links":[],"tags":[],"content":"A garden should be a true hypertext §\n\nThe garden is the web as topology. 
Every walk through the garden creates new paths, new meanings, and when we add things to the garden we add them in a way that allows many future, unpredicted relationships.\n(The Garden and the Stream)\n\nThe problem with the file cabinet is that it focuses on efficiency of access and interoperability rather than generativity and creativity. Thinking is not linear, nor is it hierarchical. In fact, not many things are linear or hierarchical at all. Then why is it that most tools and thinking strategies assume a nice chronological or hierarchical order for my thought processes? The ideal tool for thought for me would embrace the messiness of my mind, and organically help insights emerge from chaos instead of forcing an artificial order. A rhizomatic, not arboresecent, form of note taking.\nMy goal with a digital garden is not purely as an organizing system and information store (though it works nicely for that). I want my digital garden to be a playground for new ways ideas can connect together. As a result, existing formal organizing systems like Zettelkasten or the hierarchical folder structures of Notion don’t work well for me. There is way too much upfront friction that by the time I’ve thought about how to organize my thought into folders categories, I’ve lost it.\nQuartz embraces the inherent rhizomatic and web-like nature of our thinking and tries to encourage note-taking in a similar form.\n\nA garden should be shared §\nThe goal of digital gardening should be to tap into your network’s collective intelligence to create constructive feedback loops. If done well, I have a shareable representation of my thoughts that I can send out into the world and people can respond. Even for my most half-baked thoughts, this helps me create a feedback cycle to strengthen and fully flesh out that idea.\nQuartz is designed first and foremost as a tool for publishing digital gardens to the web. To me, digital gardening is not just passive knowledge collection. It’s a form of expression and sharing.\n\n“[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important.”\n— Richard Hamming\n\nThe goal of Quartz is to make sharing your digital garden free and simple. At its core, Quartz is designed to be easy to use enough for non-technical people to get going but also powerful enough that senior developers can tweak it to work how they’d like it to work."},"showcase":{"title":"Quartz Showcase","links":[],"tags":[],"content":"Want to see what Quartz can do? Here are some cool community gardens:\n\nQuartz Documentation (this site!)\nJacky Zhao’s Garden\nBrandon Boswell’s Garden\nScaling Synthesis - A hypertext research notebook\nAWAGMI Intern Notes\nCourse notes for Information Technology Advanced Theory\nData Dictionary 🧠\nsspaeti.com’s Second Brain\noldwinterの数字花园\nAbhijeet’s Math Wiki\nMike’s AI Garden 🤖🪴\nMatt Dunn’s Second Brain\n\nIf you want to see your own on here, submit a Pull Request adding yourself to this file!"},"upgrading":{"title":"Upgrading Quartz","links":["migrating-from-Quartz-3"],"tags":[],"content":"\n\n \n Note \n \n \nThis is specifically a guide for upgrading Quartz 4 version to a more recent update. If you are coming from Quartz 3, check out the migration guide for more info.\n\nTo fetch the latest Quartz updates, simply run\nnpx quartz update\nAs Quartz uses git under the hood for versioning, updating effectively ‘pulls’ in the updates from the official Quartz GitHub repository. 
If you have local changes that might conflict with the updates, you may need to resolve these manually yourself (or, pull manually using git pull origin upstream).\n\n\n \n Tip \n \n \nQuartz will try to cache your content before updating to try and prevent merge conflicts. If you get a conflict mid-merge, you can stop the merge and then run npx quartz restore to restore your content from the cache.\n\nIf you have the GitHub desktop app, this will automatically open to help you resolve the conflicts. Otherwise, you will need to resolve this in a text editor like VSCode. For more help on resolving conflicts manually, check out the GitHub guide on resolving merge conflicts."},"tags/component":{"title":"Components","links":["creating-components"],"tags":[],"content":"Want to create your own custom component? Check out the advanced guide on creating components for more information."},"roadmap/acid/milestones-overview":{"title":"Comms Milestones Overview","links":[],"tags":["milestones"],"content":"\nComms Roadmap\nComms Projects\nComms planner deadlines\n"},"roadmap/codex/milestones-overview":{"title":"Codex Milestones Overview","links":[],"tags":["milestones-overview"],"content":"Milestones §\n\nZenhub Tracker\nMiro Tracker\n"},"roadmap/innovation_lab/milestones-overview":{"title":"Innovation Lab Milestones Overview","links":[],"tags":["milestones"],"content":"iLab Milestones can be found on the Notion Page"},"roadmap/nomos/milestones-overview":{"title":"Nomos Milestones Overview","links":[],"tags":["milestones"],"content":"Milestones Overview Notion Page"},"roadmap/vac/index":{"title":"Vac Roadmap","links":[],"tags":[],"content":"Welcome to the Vac Roadmap Overview"},"roadmap/vac/milestones-overview":{"title":"Vac Milestones Overview","links":[],"tags":["milestones"],"content":"Overview Notion Page - Information copied here for now\nInfo §\nStructure of milestone names: §\nvac:<unit>:<tag>:<for_project>:<title>_<counter>\n\nvac indicates it is a vac milestone\nunit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc\ntag tags a specific area / project / epic within the respective vac unit, e.g. 
nimlibp2p, or zerokit\nfor_project indicates which Logos project the milestone is mainly for nomos, waku, codex, nimbus, status; or vac (meaning it is internal / helping all projects as a base layer)\ntitle the title of the milestone\ncounter an optional counter; 01 is implicit; marked with a 02 onward indicates extensions of previous milestones\n\nVac Unit Roadmaps §\n\nRoadmap: P2P\nRoadmap: Token Economics\nRoadmap: Distributed Systems Testing (DST))\nRoadmap: Applied Cryptography and ZK (ACZ)\nRoadmap: Smart Contracts (SC)\nRoadmap: zkVM\nRoadmap: Deep Research (DR)\nRoadmap: RFC Process\n"},"roadmap/waku/index":{"title":"Waku Roadmap","links":[],"tags":[],"content":"Welcome to the Waku Roadmap Overview"},"roadmap/waku/milestone-waku-10-users":{"title":"Milestone: Waku Network supports 10k Users","links":[],"tags":[],"content":"%%{ \n init: { \n 'theme': 'base', \n 'themeVariables': { \n 'primaryColor': '#BB2528', \n 'primaryTextColor': '#fff', \n 'primaryBorderColor': '#7C0000', \n 'lineColor': '#F8B229', \n 'secondaryColor': '#006100', \n 'tertiaryColor': '#fff' \n } \n } \n}%%\ngantt\n\tdateFormat YYYY-MM-DD \n\tsection Scaling\n\t\t10k Users :done, 2023-01-20, 2023-07-31\n\nCompletion Deliverable §\nTBD\nEpics §\n\nGithub Issue Tracker\n"},"roadmap/waku/milestones-overview":{"title":"Waku Milestones Overview","links":["roadmap/waku/milestone-waku-10-users"],"tags":[],"content":"\n90% - Waku Network support for 10k users\n80% - Waku Network support for 1MM users\n65% - Restricted-run (light node) protocols are production ready\n60% - Peer management strategy for relay and light nodes are defined and implemented\n10% - Quality processes are implemented for nwaku and go-waku\n80% - Define and track network and community metrics for continuous monitoring improvement\n20% - Executed an array of community growth activity (8 hackathons, workshops, and bounties)\n15% - Dogfooding of RLN by platforms has started\n06% - First protocol to incentivize operators has been defined\n"},"roadmap/acid/updates/2023-08-02":{"title":"2023-08-02 Acid weekly","links":[],"tags":["acid-updates"],"content":"Leads roundup - acid §\nAl / Comms\n\nStatus app relaunch comms campaign plan in the works. Approx. date for launch 31.08.\nLogos comms + growth plan post launch is next up TBD.\nWill be waiting for specs for data room, raise etc.\nHires: split the role for content studio to be more realistic in getting top level talent.\n\nMatt / Copy\n\nInitiative updating old documentation like CC guide to reflect broader scope of BUs\nBrand guidelines/ modes of presentation are in process\nWikipedia entry on network states and virtual states is live on\n\nEddy / Digital Comms\n\nLogos Discord will be completed by EOD.\nCodex Discord will be done tomorrow.\nLPE rollout plan, currently working on it, will be ready EOW\nPodcast rollout needs some\nOverarching BU plan will be ready in next couple of weeks as things on top have taken priority.\n\nAmir / Studio\n\nStarted execution of LPE for new requirements, broken down in smaller deliveries. Looking to have it working and live by EOM.\nHires: still looking for 3 positions with main focus on developer side.\n\nJonny / Podcast\n\nPodcast timelines are being set. In production right now. Nick delivered graphics for HiO but we need a full pack.\nFirst HiO episode is in the works. 
Will be ready in 2 weeks to fit in the rollout of the LPE.\n\nLouisa / Events\n\nGlobal strategy paper for wider comms plan.\nTemplate for processes and executions when preparing events.\nDecision made with Carl to move Network State event to November in satellite of other events. Looking into ETH Lisbon / Staking Summit etc.\nSeoul Q4 hackathon is already in the works. Needs bounty planning.\n"},"roadmap/acid/updates/2023-08-09":{"title":"2023-08-09 Acid weekly","links":[],"tags":["acid-updates"],"content":"Top level priorities: §\nLogos Growth Plan\nStatus Relaunch\nLaunch of LPE\nPodcasts (Target: Every week one podcast out)\nHiring: TD studio and DC studio roles\nMovement Building: §\n\nLogos collective comms plan skeleton ready - will be applied for all BUs as next step\nGoal is to have plan + overview to set realistic KPIs and expectations\nDiscord Server update on various views\nStatus relaunch comms plan is ready for input from John et al.\nReach out to BUs for needs and deliverables\n\nTD Studio §\nFull focus on LPE:\n\nOn track, target of end of august\nreview of options, more diverse landscape of content\nEpisodes page proposals\nPlayers in progress\nrefactoring from prev code base\nstructure of content ready in GDrive\n\nCopy §\n\nContent around LPE\nContent for podcast launches\nStatus launch - content requirements to receive\nOrganization of doc sites review\nTBD what type of content and how the generation workflows will look like\n\nPodcast §\n\nGood state in editing and producing the shows\nFirst interview edited end to end with XMTP is ready. 2 weeks with social assets and all included.\nLSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n3 recorded for HIO, motion graphics in progress\nFirst E2E podcast ready in 2 weeks for LPE\nLSP is looking at having 2 months of content ready to launch with the sessions that have been recorded.\n\nDC Studio §\n\nBrand guidelines for HiO are ready and set. Thanks Shmeda!\nLogos State branding assets are being developed\nPresentation templates update\n\nEvents §\n\nNetwork State event probably in Istanbul in November re: Devconnect will confirm shortly.\nProgram elements and speakers are top priority\nHackathon in Seoul in Q1 2024 - late Febuary probably\nJarrad will be speaking at HCPP and EthRome\nGlobal event strategy written and in review\nLou presented social media and event KPIs on Paris event\n\nCRM & Marketing tool §\n\nGet feedback from stakeholders and users\nPM implementation to be planned (+- 3 month time TBD) with working group\nLPE KPI: Collecting email addresses of relevant people\nCareful on how we manage and use data, important for BizDev\nCareful on which segments of the project to manage using the CRM as it can be very off brand\n"},"roadmap/codex/updates/2023-07-21":{"title":"2023-07-21 Codex weekly","links":["tags/479","tags/166"],"tags":["codex-updates","479","166"],"content":"Codex update 07/12/2023 to 07/21/2023 §\nOverall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc…\nOur main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. A lot of grunt work is being done to make that possible. 
Progress is steady, but there are lots of stabilization and testing & infra related work going on.\nWe’re also onboarding several new members to the team (4 to be precise), this will ultimately accelerate our progress, but it requires some upfront investment from some of the more experienced team members.\nDevOps/Infrastructure: §\n\nAdopted nim-codex Docker builds for Dist Tests.\nOrdered Dedicated node on Hetzner.\nConfigured Hetzner StorageBox for local backup on Dedicated server.\nConfigured new Logs shipper and Grafana in Dist-Tests cluster.\nCreated Geth and Prometheus Docker images for Dist-Tests.\nCreated a separate codex-contracts-eth Docker image for Dist-Tests.\nSet up Ingress Controller in Dist-Tests cluster.\n\nTesting: §\n\nSet up deployer to gather metrics.\nDebugging and identifying potential deadlock in the Codex client.\nAdded metrics, built image, and ran tests.\nUpdated dist-test log for Kibana compatibility.\nRan dist-tests on a new master image.\nDebugging continuous tests.\n\nDevelopment: §\n\nWorked on codex-dht nimble updates and fixing key format issue.\nUpdated CI and split Windows CI tests to run on two CI machines.\nContinued updating dependencies in codex-dht.\nFixed decoding large manifests (PR #479).\nExplored the existing implementation of NAT Traversal techniques in nim-libp2p.\n\nResearch §\n\nExploring additional directions for remote verification techniques and the interplay of different encoding approaches and cryptographic primitives\n\n1500.pdf\npcs-multiproofs.html\n1544.pdf\n\n\nOnboarding Balázs as our ZK researcher/engineer\nContinued research in DAS related topics\n\nRunning simulation on newly setup infrastructure\n\n\nDevised a new direction to reduce metadata overhead and enable remote verification metadata-overhead.md\nLooked into NAT Traversal (issue #166).\n\nCross-functional (Combination of DevOps/Testing/Development): §\n\nFixed discovery related issues.\nPlanned Codex Demo update for the Logos event and prepared environment for the demo.\nDescribed requirements for Dist Tests logs format.\nConfigured new Logs shipper and Grafana in Dist-Tests cluster.\nDist Tests logs adoption requirements - Updated log format for Kibana compatibility.\nHetzner Dedicated server was configured.\nSet up Hetzner StorageBox for local backup on Dedicated server.\nConfigured new Logs shipper in Dist-Tests cluster.\nSetup Grafana in Dist-Tests cluster.\nCreated a separate codex-contracts-eth Docker image for Dist-Tests.\nSetup Ingress Controller in Dist-Tests cluster.\n\n\nConversations §\n\nzk_id — 07/24/2023 11:59 AM\n\n\nWe’ve explored VDI for rollups ourselves in the last week, curious to know your thoughts\n\n\ndryajov — 07/25/2023 1:28 PM\n\n\nIt depends on what you mean, from a high level (A)VID is probably the closest thing to DAS in academic research, in fact DAS is probably either a subset or a superset of VID, so it’s definitely worth digging into. But I’m not sure what exactly you’re interested in, in the context of rollups…\n\n\n\nzk_id — 07/25/2023 3:28 PM\nThe part of the rollups seems to be the base for choosing proofs that scale linearly with the amount of nodes (which makes it impractical for large numbers of nodes). The protocol is very simple, and would only need to instead provide constant proofs with the Kate commitments (at the cost of large computational resources is my understanding). 
This was at least the rationale that I get from reading the paper and the conversation with Bunz, one of the founders of the Espresso shared sequencer (which is where I found the first reference to this paper). I guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal. You still would need the sampling for the light clients though, of course.\n\n\ndryajov — 07/25/2023 8:31 PM\n\nI guess my main open question is why would you do the sampling if you can do VID in the context of blockchains as well. With the proofs of dispersal on-chain, you wouldn’t need to do that for the agreement of the dispersal.\n\nYeah, great question. What follows is strictly IMO, as I haven’t seen this formally contrasted anywhere, so my reasoning can be wrong in subtle ways.\n\n(A)VID - dispersing and storing data in a verifiable manner\nSampling - verifying already dispersed data\n\ntl;dr Sampling allows light nodes to protect against dishonest majority attacks. In other words, a light node cannot be tricked to follow an incorrect chain by a dishonest validator majority that withholds data. More details are here - data-availability-checks.html ------------- First, DAS implies (A)VID, as there is an initial phase where data is distributed to some subset of nodes. Moreover, these nodes, usually the validators, attest that they received the data and that it is correct. If a majority of validators accepts, then the block is considered correct, otherwise it is rejected. This is the verifiable dispersal part. But what if the majority of validators are dishonest? Can you prevent them from tricking the rest of the network from following the chain?\nDankrad Feist\nData availability checks\nPrimer on data availability checks\n\n\n[8:31 PM]\nDealing with dishonest majorities §\nThis is easy if all the data is downloaded by all nodes all the time, but we’re trying to avoid just that. But lets assume, for the sake of the argument, that there are full nodes in the network that download all the data and are able to construct fraud proofs for missing data, can this mitigate the problem? It turns out that it can’t, because proving data (un)availability isn’t a directly attributable fault - in other words, you can observe/detect it but there is no way you can prove it to the rest of the network reliably. More details here A-note-on-data-availability-and-erasure-coding So, if there isn’t much that can be done by detecting that a block isn’t available, what good is it for? Well nodes can still avoid following the unavailable chain and thus be tricked by a dishonest majority. However, simply attesting that data has been publishing is not enough to prevent a dishonest majority from attacking the network. 
(edited)\n\n\ndryajov — 07/25/2023 9:06 PM\nTo complement, the relevant quote from A-note-on-data-availability-and-erasure-coding, is:\n\nHere, fraud proofs are not a solution, because not publishing data is not a uniquely attributable fault - in any scheme where a node (“fisherman”) has the ability to “raise the alarm” about some piece of data not being available, if the publisher then publishes the remaining data, all nodes who were not paying attention to that specific piece of data at that exact time cannot determine whether it was the publisher that was maliciously withholding data or whether it was the fisherman that was maliciously making a false alarm.\n\nThe relevant quote from from data-availability-checks.html, is:\n\nThere is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a consensus supermajority has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? Honest full nodes, obviously, will not follow this chain, as they can’t download the data. But light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.\n\nBoth articles are a bit old, but the intuitions still hold.\n\n\nJuly 26, 2023\n\n\nzk_id — 07/26/2023 10:42 AM\nThanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\n\n\n[10:45 AM]\nThe dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\n\n\nzk_id\nThanks a ton @dryajov ! We are on the same page. TBH it took me a while to get to this point, as it’s not an intuitive problem at first. The relationship between the VID and the DAS, and what each is for is crucial for us, btw. Your writing here and your references give us the confidence that we understand the problem and are equipped to evaluate the different solutions. Deeply appreciate that you took the time to write this, and is very valuable.\ndryajov — 07/26/2023 4:42 PM §\nGreat! Glad to help anytime\n\n\nzk_id\nThe dishonest majority is critical scenario for Nomos (essential part of the whole sovereignty narrative), and generally not considered by most blockchain designs\ndryajov — 07/26/2023 4:43 PM\nYes, I’d argue it is crucial in a network with distributed validation, where all nodes are either fully light or partially light nodes.\n\n\n[4:46 PM]\nBtw, there is probably more we can share/compare notes on in this problem space, we’re looking at similar things, perhaps from a slightly different perspective in Codex’s case, but the work done on DAS with the EF directly is probably very relevant for you as well\n\n\nJuly 27, 2023\n\n\nzk_id — 07/27/2023 3:05 AM\nI would love to. 
Do you have those notes somewhere?\n\n\nzk_id — 07/27/2023 4:01 AM\nall the links you have, anything, would be useful\n\n\nzk_id\nI would love to. Do you have those notes somewhere?\ndryajov — 07/27/2023 4:50 PM\nA bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\n\n\nJuly 28, 2023\n\n\nzk_id — 07/28/2023 5:47 AM\nWould love to see anything that is possible\n\n\n[5:47 AM]\nOur setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\n\n\nzk_id\nOur setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\ndryajov — 07/28/2023 4:07 PM\nYes, we’re also working in this direction as this is crucial for us as well. There should be some result coming soon(tm), now that @bkomuves is helping us with this part.\n\n\nzk_id\nOur setting is much simpler, but any progress that you make (specifically in the computational cost of the polynomial commitments or alternative proofs) would be really useful for us\nbkomuves — 07/28/2023 4:44 PM\nmy current view (it’s changing pretty often :) is that there is tension between:\n\ncommitment cost\nproof cost\nand verification cost\n\nthe holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions)\n\n\nJuly 29, 2023\n\n\nbkomuves\nmy current view (it’s changing pretty often :) is that there is tension between: \n\ncommitment cost\nproof cost\nand verification cost\n\n the holy grail which is the best for all of them doesn’t seem to exist. Hence, you have to make tradeoffs, and it depends on your specific use case what you should optimize for, or what balance you aim for. we plan to find some points in this 3 dimensional space which are hopefully close to the optimal surface, and in parallel to that figure out what balance to aim for, and then choose a solution based on that (and also based on what’s possible, there are external restrictions)\nzk_id — 07/29/2023 4:23 AM\nI agree. That’s also my understanding (although surely much more superficial).\n\n\n[4:24 AM]\nThere is also the dimension of computation vs size cost\n\n\n[4:25 AM]\nie the VID scheme (of the paper that kickstarted this conversation) has all the properties we need, but it scales n^2 in message complexity which makes it lose the properties we are looking for after 1k nodes. We need to scale confortably to 10k nodes.\n\n\n[4:29 AM]\nSo we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is:\n\nOur rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. 
The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\nIf we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option.\n\n\n\nAugust 1, 2023\n\n\ndryajov\nA bit scattered all over the place, mainly from @Leobago and @cskiraly @cskiraly has a draft paper somewhere\nLeobago — 08/01/2023 1:13 PM\nNot many public write-ups yet. You can find some content here:\n\n\ndata-availability-sampling\n\n\ndas-research\n\n\n\n\nWe also have a few Jupyter notebooks but they are not public yet. As soon as that content is out we can let you know \nCodex Storage Blog\nData Availability Sampling\nThe Codex team is busy building a new web3 decentralized storage platform with the latest advances in erasure coding and verification systems. Part of the challenge of deploying decentralized storage infrastructure is to guarantee that the data that has been stored and is available for retrieval from the beginning until\nGitHub\ndas-research: This repository hosts all the …\nThis repository hosts all the research on DAS for the collaboration between Codex and the EF. - GitHub - codex-storage/das-research: This repository hosts all the research on DAS for the collabora…\n\n\n\n\nzk_id\nSo we are at the moment most likely to use KZG commitments with a 2d RS polynomial. Basically just copy Ethereum. Reason is: \n\nOur rollups/EZ leader will generate this, and those are beefier machines than the Base Layer. The base layer nodes just need to verify and sign the EC fragments and return them to complete the VID protocol (and then run consensus on the aggregated signed proofs).\nIf we ever decide to change the design for the VID dispersal to be done by Base Layer leaders (in a multileader fashion), it can be distributed (rows/columns can be reconstructed and proven separately). I don’t think we will pursue this, but we will have to if this scheme doesn’t scale with the first option.\n\ndryajov — 08/01/2023 1:55 PM\nThis might interest you as well - combining-kzg-and-erasure-coding-fc903dc78f1a\nMedium\nCombining KZG and erasure coding\nThe Hitchhiker’s Guide to Subspace  — Episode II\n\n\n\n\n[1:56 PM]\nThis is a great analysis of the current state of the art in structure of data + commitment and the interplay. I would also recommend reading the first article of the series, which it also links to\n\n\nzk_id — 08/01/2023 3:04 PM\nThanks @dryajov @Leobago ! Much appreciated!\n\n\n[3:05 PM]\nVery glad that we can discuss these things with you. Maybe I have some specific questions once I finish reading a huge pile of pending docs that I’m tackling starting today…\n\n\nzk_id — 08/01/2023 6:34 PM\n@Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. Is there a way to visualize the results?\n\n\nzk_id\n@Leobago @dryajov I was playing with the DAS simulator. It seems the results are a bunch of XML. 
Is there a way to visualize the results?\nLeobago — 08/01/2023 6:36 PM\nYes, check out the visual branch and make sure to enable plotting in the config file, it should produce a bunch of figures \n\n\n[6:37 PM]\nYou might also find some bugs here and there on that branch \n\n\nzk_id — 08/01/2023 7:44 PM\nThanks!\n\n"},"roadmap/codex/updates/2023-08-01":{"title":"2023-08-01 Codex weekly","links":[],"tags":["codex-updates"],"content":"Codex update Aug 1st §\nClient §\nMilestone: Merkelizing block data §\n\nInitial design writeup metadata-overhead.md\n\nWork breakdown and review for Ben and Tomasz (epic coming up)\nThis is required to integrate the proving system\n\n\n\nMilestone: Block discovery and retrieval §\n\nSome initial work breakdown and milestones here - edit\n\nInitial analysis of block discovery - 1067876\nInitial block discovery simulator - block-discovery-sim\n\n\n\nMilestone: Distributed Client Testing §\n\nLots of work around log collection/analysis and monitoring\n\nDetails here 41\n\n\n\nMarketplace §\nMilestone: L2 §\n\nTaiko L2 integration\n\nThis is a first try of running against an L2\nMostly done, waiting on related fixes to land before merge - 483\n\n\n\nMilestone: Reservations and slot management §\n\nLots of work around slot reservation and queuing 455\n\nRemote auditing §\nMilestone: Implement Poseidon2 §\n\nFirst pass at an implementation by Balazs\n\nprivate repo, but can give access if anyone is interested\n\n\n\nMilestone: Refine proving system §\n\nLots of thinking around storage proofs and proving systems\n\nprivate repo, but can give access if anyone is interested\n\n\n\nDAS §\nMilestone: DHT simulations §\n\nImplementing a DHT in Python for the DAS simulator.\nImplemented logical error-rates and delays to interactions between DHT clients.\n"},"roadmap/codex/updates/2023-08-11":{"title":"2023-08-11 Codex weekly","links":[],"tags":["codex-updates"],"content":"Codex update August 11 §\n\nClient §\nMilestone: Merkelizing block data §\n\nInitial Merkle Tree implementation - 504\nWork on persisting/serializing Merkle Tree is underway, PR upcoming\n\nMilestone: Block discovery and retrieval §\n\nContinued analysis of block discovery and retrieval - _KOAm8kNQamMx-lkQvw-Iw?both=#fn5\n\nReviewing papers on peer sampling and related topics\n\nWormhole Peer Sampling paper\nSmoothcache\n\n\n\n\nStarting work on simulations based on the above work\n\nMilestone: Distributed Client Testing §\n\nContinuing work on log collection/analysis and monitoring\n\nDetails here 41\nMore related issues/PRs:\n\n20\n20\n\n\n\n\nTesting and debugging Codex in continuous testing environment\n\nDebugging continuous tests 44\npod labeling 39\n\n\n\n\nInfra §\nMilestone: Kubernetes Configuration and Management §\n\nMove Dist-Tests cluster to OVH and define naming conventions\nConfigure Ingress Controller for Kibana/Grafana\nCreate documentation for Kubernetes management\nConfigure Dist/Continuous-Tests Pods logs shipping\n\nMilestone: Continuous Testing and Labeling §\n\nWatch the Continuous tests demo\nImplement and configure Dist-Tests labeling\nSet up logs shipping based on labels\nImprove Docker workflows and add ‘latest’ tag\n\nMilestone: CI/CD and Synchronization §\n\nSet up synchronization by codex-storage\nConfigure Codex Storage and Demo CI/CD environments\n\n\nMarketplace §\nMilestone: L2 §\n\nTaiko L2 integration\n\nDone but merge is blocked by a few issues - 483\n\n\n\nMilestone: Marketplace Sales §\n\nLots of cleanup and refactoring\n\nFinished refactoring state machine 
PR link\nAdded support for loading node’s slots during Sale’s module start link\n\n\n\n\nDAS §\nMilestone: DHT simulations §\n\nImplementing a DHT in Python for the DAS simulator - py-dht.\n\nNOTE: Several people are/where out during the last few weeks, so some milestones are paused until they are back"},"roadmap/innovation_lab/updates/2023-07-12":{"title":"2023-07-12 Innovation Lab Weekly","links":[],"tags":["ilab-updates"],"content":"Logos Lab 12th of July\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\nMilestone: deliver the first transactional Waku Object called Payggy (attached some design screenshots).\nIt is now possible to make transactions on the blockchain and the objects send notifications over the messaging layer (e.g. Waku) to the other participants. What is left is the proper transaction status management and some polishing.\nThere is also work being done on supporting external objects, this enables creating the objects with any web technology. This work will guide the separation of the interfaces between the app and the objects and lead us to release it as an SDK.\nNext milestone: group chat support\nThe design is already done for the group chat functionality. There is ongoing design work for a new Waku Object that would showcase what can be done in a group chat context.\nDeployed version of the main branch:\nwaku-objects-playground.vercel.app\nLink to Payggy design files:\n64ae9e965652632169060c7d\nMain development repo:\nwaku-objects-playground\nContact:\nYou can find us at 1118949151225413872 or join our discord at UtVHf2EU\n\nConversation §\n\n\npetty — 07/15/2023 5:49 AM\nthe waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?\n\n\npetty\nthe waku-objects repo is empty. Where is the code storing that part vs the playground that is using them?\n\n\nattila🍀 — 07/15/2023 6:18 AM\nat the moment most of the code is in the waku-objects-playground repo later we may split it to several repos here is the link: waku-objects-playground\n\n"},"roadmap/innovation_lab/updates/2023-08-02":{"title":"2023-08-02 Innovation Lab weekly","links":[],"tags":["ilab-updates"],"content":"Logos Lab 2nd of August\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\nThe last few weeks were a bit slower than usual because there were vacations, one team member got married, there was EthCC and a team offsite.\nStill, a lot of progress were made and the team released the first version of a color system in the form of an npm package, which lets the users to choose any color they like to customize their app. It is based on grayscale design and uses luminance, hence the name of the library. Try it in the Playground app or check the links below.\nMilestone: group chat support\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\nNext milestone: Splitter Waku Object supporting group chats and smart contracts\nThis will be the first Waku Object that is meaningful in a group chat context. 
Also this will demonstrate how to use smart contracts and multiparty transactions.\nDeployed version of the main branch:\nwaku-objects-playground.vercel.app\nMain development repo:\nwaku-objects-playground\nGrayscale design:\ngrayscale.design\nLuminance package on npm:\nluminance\nContact:\nYou can find us at 1118949151225413872 or join our discord at ZMU4yyWG\n\nConversation §\n\n\nfryorcraken — Yesterday at 10:58 PM\n\nThere is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation.\n\nWhile status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n\n\nAugust 3, 2023\n\n\nfryorcraken\n\n > There is a draft PR for group chat support for private groups and it is expected to be finished this week. At the end we decided to roll our own toy group chat protocol implementation because we did not find anything ready to use. It would have been great if we could have just used an existing implementation. While status-js does implement chat features, I do not know how nice the API is. Waku is actively hiring a chat sdk lead and golang eng. We will probably also hire a JS engineer (not yet confirmed) to provide nice libraries to enable such use case (1:1 chat, group chat, community chat).\n\n\n\nattila🍀 — Today at 4:21 AM\nThis is great news and I think it will help with adoption. I did not find a JS API for status (maybe I was looking at the wrong places), the closest was the status-js-api project but that still uses whisper and the repo recommends to use js-waku instead status-js-api Also I also found the 56/STATUS-COMMUNITIES spec: 56 It seems to be quite a complete solution for community management with all the bells and whistles. However our use case is a private group chat for your existing contacts, so it seems to be a bit overkill for that.\n\n\nfryorcraken — Today at 5:32 AM\nThe repo is status-im/status-web\n\n\n[5:33 AM]\nSpec is 55\n\n\nfryorcraken\nThe repo is status-im/status-web\n\n\nattila🍀 — Today at 6:05 AM\nAs constructive feedback I can tell you that it is not trivial to find it and use it in other projects It is presented as a React component without documentation and by looking at the code it seems to provide you the whole chat UI of the desktop app, which is not necessarily what you need if you want to embed it in your app It seems to be using this package: js Which also does not have documentation I assume that package is built from this: status-js This looks promising, but again there is no documentation. Of course you can use the code to figure out things, but at least I would be interested in what are the requirements and high level architecture (does it require an ethereum RPC endpoint, where does it store data, etc.) so that I can evaluate if this is the right approach for me. 
So maybe a lesson here is to put effort in the documentation and the presentation as well and if you have the budget then have someone on the team whose main responsibility is that (like a devrel or dev evangelist role)\n\n"},"roadmap/innovation_lab/updates/2023-08-11":{"title":"2023-08-17 weekly","links":[],"tags":["team-updates"],"content":"Logos Lab 11th of August §\nCurrently working on the Waku Objects prototype, which is a modular system for transactional chat objects.\nWe merged the group chat but it surfaced plenty of issues that were not a problem with 1on1 chats, both with our Waku integration and from product perspective as well. Spent the bigger part of the week with fixing these. We also registered a new domain, wakuplay.im where the latest version is deployed. It uses the Gnosis chain for transactions and currently the xDai and Gno tokens are supported, but it is easy to add other ERC-20 tokens now.\nNext milestone: Splitter Waku Object supporting group chats and smart contracts\nThis will be the first Waku Object that is meaningful in a group chat context. Also this will demonstrate how to use smart contracts and multiparty transactions. The design is ready and the implementaton has started.\nNext milestone: Basic Waku Objects website\nWork started toward having a structure for a website and the content is shaping up nicely. The implementation has been started on it as well.\nDeployed version of the main branch:\nwww.wakuplay.im\nMain development repo:\nwaku-objects-playground\nContact:\nYou can find us at 1118949151225413872 or join our discord at eaYVgSUG"},"roadmap/nomos/updates/2023-07-24":{"title":"2023-07-24 Nomos weekly","links":[],"tags":["nomos-updates"],"content":"Research\n\nMilestone 1: Understanding Data Availability (DA) Problem\nHigh-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris.\nExplored the necessity and key challenges associated with DA.\nIn-depth study of Verifiable Information Dispersal (VID) as it relates to data availability.\nBlocker: The experimental tests for our specific EC scheme are pending, which is blocking progress to make final decision on KZG + commitments for our architecture.\nMilestone 2: Privacy for Proof of Stake (PoS)\nAnalyzed the capabilities and limitations of mixnets, specifically within the context of timing attacks in private PoS.\nInvested time in understanding timing attacks and how Nym mixnet caters to these challenges.\nReviewed the Crypsinous paper to understand its privacy vulnerabilities, notably the issue with probabilistic leader election and the vulnerability of anonymous broadcast channels to timing attacks.\n\nDevelopment\n\nMilestone 1: Mixnet and Networking\nInitiated integration of libp2p to be used as the full node’s backend, planning to complete in the next phase.\nBegun planning for the next steps for mixnet integration, with a focus on understanding the components of the Nym mixnet, its problem-solving mechanisms, and the potential for integrating some of its components into our codebase.\nMilestone 2: Simulation Application\nCompleted pseudocode for Carnot Simulator, created a test pseudocode, and provided a detailed description of the simulation. 
The relevant resources can be found at the following links:\n\nCarnot Simulator pseudocode (carnot_simulation_psuedocode.py)\nTest pseudocode (test_carnot_simulation.py)\nDescription of the simulation (Carnot-Simulation-c025dbab6b374c139004aae45831cf78)\n\n\nImplemented simulation network fixes and warding improvements, and increased the run duration of integration tests. The corresponding pull requests can be accessed here:\n\nSimulation network fix (262)\nVote tally fix (268)\nIncreased run duration of integration tests (263)\nWarding improvements (269)\n\n\n"},"roadmap/nomos/updates/2023-07-31":{"title":"2023-07-31 Nomos weekly","links":[],"tags":["nomos-updates"],"content":"Nomos 31st July\n[Network implementation and Mixnet]:\nResearch\n\nInitial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder.\nConsidered the use of a new NetworkInterface in the simulation to mimic the mixnet, but currently, no significant benefits from doing so have been identified.\nDevelopment\nFixes were made on the Overlay interface.\nNear completion of the libp2p integration with all tests passing so far, a PR is expected to be opened soon.\nLink to libp2p PRs: 278, 279, 280, 281\nStarted working on the foundation of the libp2p-mixnet transport.\n\n[Private PoS]:\nResearch\n\nDiscussions were held on the Privacy PoS (PPoS) proposal, aligning a general direction of team members.\nReviews on the PPoS proposal were done.\nA proposal to merge the PPoS proposal with the efficient one was made, in order to have both privacy and efficiency.\nDiscussions on merging Efficient PoS (EPoS) with PPoS are in progress.\n\n[Carnot]:\nResearch\n\nAnalyzing Bribery attack scenarios, which seem to make Carnot more vulnerable than expected.\n\nDevelopment\n\nImproved simulation application to meet test scale requirements (274).\nCreated a strategy to solve the large message sending issue in the simulation application.\n\n[Data Availability Sampling (or VID)]:\nResearch\n\nConducted an analysis of stored data “degradation” problem for data availability, modeling fractions of nodes which leave the system at regular time intervals\nContinued literature reading on Verifiable Information Dispersal (VID) for DA problem, as well as encoding/commitment schemes.\n"},"roadmap/nomos/updates/2023-08-07":{"title":"2023-08-07 Nomos weekly","links":[],"tags":["nomos-updates"],"content":"Nomos weekly report §\nNetwork implementation and Mixnet: §\nResearch §\n\nResearched the Nym mixnet architecture in depth in order to design our prototype architecture.\n(Link: 273#issuecomment-1661386628)\nDiscussions about how to manage the mixnet topology.\n(Link: 273#issuecomment-1665101243)\n\nDevelopment §\n\nImplemented a prototype for building a Sphinx packet, mixing packets at the first hop of gossipsub with 3 mixnodes (+ encryption + delay), raw TCP connections between mixnodes, and the static entire mixnode topology.\n(Link: 288)\nAdded support for libp2p in tests.\n(Link: 287)\nAdded support for libp2p in nomos node.\n(Link: 285)\n\nPrivate PoS: §\nResearch §\n\nWorked on PPoS design and addressed potential metadata leakage due to staking and rewarding.\nFocus on potential bribery attacks and privacy reasoning, but not much progress yet.\nStopped work on Accountability mechanism and PPoS efficiency due to prioritizing bribery attacks.\n\nCarnot: §\nResearch §\n\nAddressed two solutions for the bribery attack. 
Proposals pending.\nWork on accountability against attacks in Carnot including Slashing mechanism for attackers is paused at the moment.\nModeled data decimation using a specific set of parameters and derived equations related to it.\nProposed solutions to address bribery attacks without compromising the protocol’s scalability.\n\nData Availability Sampling (VID): §\nResearch §\n\nAnalyzed data decimation in data availability problem.\n(Link: gzqvbbmfnxyp)\nDA benchmarks and analysis for data commitments and encoding. This confirms that (for now), we are on the right path.\nExplored the idea of node sharding: 1907.03331 (taken from Celestia), but discarded it because it doesn’t fit our architecture.\n\nTesting and Node development: §\n\nFixes and enhancements made to nomos-node.\n(Link: 282)\n(Link: 289)\n(Link: 293)\n(Link: 295)\nRan simulations with 10K nodes.\nUpdated integration tests in CI to use waku or libp2p network.\n(Link: 290)\nFix for the node throughput during simulations.\n(Link: 295)\n"},"roadmap/nomos/updates/2023-08-14":{"title":"2023-08-17 Nomos weekly","links":[],"tags":["nomos-updates"],"content":"Nomos weekly report 14th August §\n\nNetwork Privacy and Mixnet §\nResearch §\n\nMixnet architecture discussions. Potential agreement on architecture not very different from PoC\nMixnet preliminary design [Mixnet-Architecture-613f53cf11a245098c50af6b191d31d2]\n\nDevelopment §\n\nMixnet PoC implementation starting [302]\nImplementation of mixnode: a core module for implementing a mixnode binary\nImplementation of mixnet-client: a client library for mixnet users, such as nomos-node\n\nPrivate PoS §\n\nNo progress this week.\n\n\nData Availability §\nResearch §\n\nContinued analysis of node decay in data availability problem\nImproved upper bound on the probability of the event that data is no longer available given by the (K,N) erasure ECC scheme [gzqvbbmfnxyp]\n\nDevelopment §\n\nLibrary survey: Library used for the benchmarks is not yet ready for requirements, looking for alternatives\nRS & KZG benchmarking for our use case 2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450\nStudy documentation on Danksharding and set of questions for Leonardo [2D-Reed-Solomon-Encoding-KZG-Commitments-benchmarking-b8340382ecc741c4a16b8a0c4a114450]\n\n\nTesting, CI and Simulation App §\nDevelopment §\n\nSim fixes/improvements [299], [298], [295]\nSimulation app and instructions shared [300], [291], [294]\nCI: Updated and merged [290]\nParallel node init for improved simulation run times [300]\nImplemented branch overlay for simulating 100K+ nodes [291]\nSequential builds for nomos node features updated in CI [290]\n"},"roadmap/vac/updates/2023-07-10":{"title":"2023-07-10 Vac Weekly","links":[],"tags":["vac-updates"],"content":"\nvc::Deep Research\n\nrefined deep research roadmaps 190, 192\nworking on comprehensive current/related work study on Validator Privacy\nworking on PoC of Tor push in Nimbus\nworking towards comprehensive current/related work study on gossipsub scaling\n\n\nvsu::P2P\n\nPrepared Paris talks\nImplemented perf protocol to compare the performances with other libp2ps 925\n\n\nvsu::Tokenomics\n\nFixing bugs on the SNT staking contract;\nDefinition of the first formal verification tests for the SNT staking contract;\nSlides for the Paris off-site\n\n\nvsu::Distributed Systems Testing\n\nReplicated message rate issue (still on it)\nFirst mockup of offline data\nNomos consensus test working\n\n\nvip::zkVM\n\nhiring\nonboarding new 
researcher\npresentation on ECC during Logos Research Call (incl. preparation)\nmore research on nova, considering additional options\nIdentified 3 research questions to be taken into consideration for the ZKVM and the publication\nResearched Poseidon implementation for Nova, Nova-Scotia, Circom\n\n\nvip::RLNP2P\n\nfinished rln contract for waku product - rln-contract\nfixed homebrew issue that prevented zerokit from building - 8a365f0c9e5c4a744f70c5dd4904ce8d8f926c34\nrln-relay: verify proofs based upon bandwidth usage - 3fe4522a7e9e48a3196c10973975d924269d872a\nRLN contract audit cont’ B195lgIth\n\n\n"},"roadmap/vac/updates/2023-07-17":{"title":"2023-07-17 Vac weekly","links":[],"tags":["vac-updates"],"content":"Last week\n\nvc\n\nVac day in Paris (13th)\n\n\nvc::Deep Research\n\nworking on comprehensive current/related work study on Validator Privacy\nworking on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node\nworking towards comprehensive current/related work study on gossipsub scaling\n\n\nvsu::P2P\n\nParis offsite Paris (all CCs)\n\n\nvsu::Tokenomics\n\nBugs found and solved in the SNT staking contract\nattend events in Paris\n\n\nvsu::Distributed Systems Testing\n\nEvents in Paris\nQoS on all four infras\nContinue work on theoretical gossipsub analysis (varying regular graph sizes)\nPeer extraction using WLS (almost finished)\nDiscv5 testing\nWakurtosis CI improvements\nProvide offline data\n\n\nvip::zkVM\n\nonboarding new researcher\nPrepared and presented ZKVM work during VAC offsite\nDeep research on Nova vs Stark in terms of performance and related open questions\nresearching Sangria\nWorked on NEscience document (Nescience-WIP-0645c738eb7a40869d5650ae1d5a4f4e)\nzerokit:\n\nworked on PR for arc-circom\n\n\n\n\nvip::RLNP2P\n\noffsite Paris\n\n\n\nThis week\n\nvc\nvc::Deep Research\n\nworking on comprehensive current/related work study on Validator Privacy\nworking on PoC of Tor push in Nimbus\nworking towards comprehensive current/related work study on gossipsub scaling\n\n\nvsu::P2P\n\nEthCC & Logos event Paris (all CCs)\n\n\nvsu::Tokenomics\n\nAttend EthCC and side events in Paris\nIntegrate staking contracts with radCAD model\nWork on a new approach for Codex collateral problem\n\n\nvsu::Distributed Systems Testing\n\nEvents in Paris\nFinish peer extraction, plot the peer connections; script/runs for the analysis, and add data to the Tech Report\nRestructure the Analysis script and start modelling Status control messages\nSplit Wakurtosis analysis module into separate repository (delayed)\nDeliver simulation results (incl fixing discv5 error with new Kurtosis version)\nSecond iteration Nomos CI\n\n\nvip::zkVM\n\nContinue researching on Nova open questions and Sangria\nDraft the benchmark document (by the end of the week)\nresearch hardware for benchmarks\nresearch Halo2 cont’\nzerokit:\n\nmerge a PR for deployment of arc-circom\ndeal with arc-circom master fail\n\n\n\n\nvip::RLNP2P\n\noffsite paris\n\n\nblockers\n\nvip::zkVM:zerokit: ark-circom deployment to crates io; contact to ark-circom team\n\n\n"},"roadmap/vac/updates/2023-07-24":{"title":"2023-08-03 Vac weekly","links":["tags/139"],"tags":["vac-updates","139"],"content":"NOTE: This is a first experimental version moving towards the new reporting structure:\nLast week\n\nvc\nvc::Deep Research\n\nmilestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n\nrelated work section\n\n\nmilestone (15%, 2023/08/31) Nimbus Tor-push PoC\n\nbasic torpush encode/decode ( 1 )\n\n\nmilestone (15%, 
2023/11/30) paper on Tor push validator privacy\n\n(focus on Tor-push PoC)\n\n\n\n\nvsu::P2P\n\nadmin/misc\n\nEthCC (all CCs)\n\n\n\n\nvsu::Tokenomics\n\nadmin/misc\n\nAttended EthCC and side events in Paris\n\n\nmilestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\n\nKicked off a new approach for Codex collateral problem\n\n\nmilestone (50%, 2023/08/30) SNT staking smart contract\n\nIntegrated SNT staking contracts with Python\n\n\nmilestone (50%, 2023/07/14) SNT litepaper\n\n(delayed)\n\n\nmilestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n\n\nvsu::Distributed Systems Testing\n\nmilestone (95%, 2023/07/31) Wakurtosis Waku Report\n\nAdd timeout to injection async call in WLS to avoid further issues (PR #139 139)\nPlotting & analysing 100 msg/s offline Prometheus data\n\n\nmilestone (90%, 2023/07/31) Nomos CI testing\n\nfixed errors in Nomos consensus simulation\n\n\nmilestone (30%, …) gossipsub model analysis\n\nadd config options to script, allowing to load configs that can be directly compared to Wakurtosis results\nadded support for small world networks\n\n\nadmin/misc\n\nInterviews & reports for SE and STA positions\nEthCC (1 CC)\n\n\n\n\nvip::zkVM\n\nmilestone (50%, 2023/08/31) background/research on existing proof systems (nova, sangria…)\n\n(write-ups will be available here: zkVM-cd358fe429b14fa2ab38ca42835a8451)\nSolved the open questions on Nova and completed the document (will update the page)\nReviewed Nescience and working on a document\nReviewed partly the write up on FHE\nwriteup for Nova and Sangria; research on super nova\nreading a new paper revisiting Nova (969)\n\n\nmilestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\nzkvm\n\nResearching Nova to understand the folding technique for ZKVM adaptation\n\n\nzerokit\n\nRostyslav became circom-compat maintainer\n\n\n\n\nvip::RLNP2P\n\nmilestone (100%, 2023/07/31) rln-relay testnet 3 completed and retro\n\ncompleted\n\n\nmilestone (95%, 2023/07/31) RLN-Relay Waku production readiness\nadmin/misc\n\nEthCC + offsite\n\n\n\n\n\nThis week\n\nvc\nvc::Deep Research\n\nmilestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission\n\nworking on contributions section, based on X1DoBHtYTtuGqYg0qK4zJw\n\n\nmilestone (15%, 2023/08/31) Nimbus Tor-push PoC\n\nworking on establishing a connection via nim-libp2p tor-transport\nsetting up goerli test node (cont’)\n\n\nmilestone (15%, 2023/11/30) paper on Tor push validator privacy\n\ncontinue working on paper\n\n\n\n\nvsu::P2P\n\nmilestone (…)\n\nImplement ChokeMessage for GossipSub\nContinue “limited flood publishing” (911)\n\n\n\n\nvsu::Tokenomics\n\nadmin/misc:\n\n(3 CC days off)\nCatch up with EthCC talks that we couldn’t attend (schedule conflicts)\n\n\nmilestone (50%, 2023/07/14) SNT litepaper\n\nStart building the SNT agent-based simulation\n\n\n\n\nvsu::Distributed Systems Testing\n\nmilestone (100%, 2023/07/31) Wakurtosis Waku Report\n\nfinalize simulations\nfinalize report\n\n\nmilestone (100%, 2023/07/31) Nomos CI testing\n\nfinalize milestone\n\n\nmilestone (30%, …) gossipsub model analysis\n\nIncorporate Status control messages\n\n\nadmin/misc\n\nInterviews & reports for SE and STA positions\nEthCC (1 CC)\n\n\n\n\nvip::zkVM\n\nmilestone (50%, 2023/08/31) background/research on existing proof systems (nova, sangria…)\n\nRefine the Nescience WIP and FHE documents\nresearch HyperNova\n\n\nmilestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n\nContinue 
exploring Nova and other ZKPs and start technical writing on Nova benchmarks\n\n\nzkvm\nzerokit\n\ncircom: reach an agreement with other maintainers on master branch situation\n\n\n\n\nvip::RLNP2P\n\nmaintenance\n\ninvestigate why docker builds of nwaku are failing [zerokit dependency related]\ndocumentation on how to use rln for projects interested (console)\n\n\nmilestone (95%, 2023/07/31) RLN-Relay Waku production readiness\n\nrevert rln bandwidth reduction based on offsite discussion, move to different validator\n\n\n\n\nblockers\n"},"roadmap/vac/updates/2023-07-31":{"title":"2023-07-31 Vac weekly","links":[],"tags":["vac-updates"],"content":"\nvc::Deep Research\n\nmilestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission\n\nproposed solution section\n\n\nmilestone (15%, 2023/08/31) Nimbus Tor-push PoC\n\nestablishing torswitch and testing code\n\n\nmilestone (15%, 2023/11/30) paper on Tor push validator privacy\naddressed feedback on current version of paper\n\n\nvsu::P2P\n\nnim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH’s EIP-4844\n\nMerged IDontWant (934) & Limit flood publishing (911) 𝕏\nThis wraps up the “mandatory” optimizations for 4844. We will continue working on stagger sending and other optimizations\n\n\nnim-libp2p: (70%, 2023/07/31) WebRTC transport\n\n\nvsu::Tokenomics\n\nadmin/misc\n\n2 CCs off for the week\n\n\nmilestone (30%, 2023/09/30) Codex economic analysis, Codex token utility, Codex collateral management\nmilestone (50%, 2023/08/30) SNT staking smart contract\nmilestone (50%, 2023/07/14) SNT litepaper\nmilestone (30%, 2023/09/29) Nomos Token: requirements and constraints\n\n\nvsu::Distributed Systems Testing\n\nadmin/misc\n\nAnalysis module extracted from wakurtosis repo (142, DST-Analysis)\nhiring\n\n\nmilestone (99%, 2023/07/31) Wakurtosis Waku Report\n\nRe-run simulations\nmerge Discv5 PR (129).\nfinalize Wakurtosis Tech Report v2\n\n\nmilestone (100%, 2023/07/31) Nomos CI testing\n\ndelivered first version of Nomos CI integration (141)\n\n\nmilestone (30%, 2023/08/31) gossipsub model: Status control messages\n\nWaku model is updated to model topics/content-topics\n\n\n\n\nvip::zkVM\n\nmilestone (50%, 2023/08/31) background/research on existing proof systems (nova, sangria…)\n\nachievement :: Nova questions answered (see document in Project: zkVM-cd358fe429b14fa2ab38ca42835a8451)\nNescience WIP done (to be delivered next week, priority)\nFHE review (lower prio)\n\n\nmilestone (50%, 2023/08/31) new fair benchmarks + recursive implementations\n\nWorking on discoveries about other benchmarks done on plonky2, starky, and halo2\n\n\nzkvm\nzerokit\n\nfixed ark-circom master\nachievement :: publish ark-circom ark-circom\nachievement :: publish zerokit_utils zerokit_utils\nachievement :: publish rln rln (𝕏 jointly with RLNP2P)\n\n\n\n\nvip::RLNP2P\n\nmilestone (100%, 2023/07/31) RLN-Relay Waku production readiness\n\nUpdated rln-contract to be more modular - and downstreamed to waku fork of rln-contract - rln-contract and waku-rln-contract\nDeployed to sepolia\nFixed rln enabled docker image building in nwaku - 1853\n\n\nzerokit:\n\nachievement :: zerokit v0.3.0 release done - v0.3.0 (𝕏 jointly with zkVM)\n\n\n\n\n"},"roadmap/vac/updates/2023-08-07":{"title":"2023-08-07 Vac weekly","links":[],"tags":["vac-updates"],"content":"More info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this 
week):\nVac-Roadmap-907df7eeac464143b00c6f49a20bb632\nVac week 32 August 7th\n\nvsu::P2P\n\nvac:p2p:nim-libp2p:vac:maintenance\n\nImprove gossipsub DDoS resistance 920\n\n\nvac:p2p:nim-chronos:vac:maintenance\n\nRemove hard-coded ports from test 429\nInvestigate flaky test using REUSE_PORT\n\n\n\n\nvsu::Tokenomics\n\n(…)\n\n\nvsu::Distributed Systems Testing\n\nvac:dst:wakurtosis:waku:techreport\n\ndelivered: Wakurtosis Tech Report v2 (edit?usp=sharing)\n\n\nvac:dst:wakurtosis:vac:rlog\n\nworking on research log post on Waku Wakurtosis simulations\n\n\nvac:dst:gsub-model:status:control-messages\n\ndelivered: the analytical model can now handle Status messages; status analysis now has a separate cli and config; handles top 5 message types (by expected bandwidth consumption)\n\n\nvac:dst:gsub-model:vac:refactoring\n\nRefactoring and bug fixes\nintroduced and tested 2 new analytical models\n\n\nvac:dst:wakurtosis:waku:topology-analysis\n\ndelivered: extracted into separate module, independent of wls message\n\n\nvac:dst:wakurtosis:nomos:ci-integration_02\n\nplanning\n\n\nvac:dst:10ksim:vac:10ksim-bandwidth-test\n\nplanning; check usage of new codex simulator tool (cs-codex-dist-tests)\n\n\n\n\nvip::zkVM\n\nvac:zkvm::vac:research-existing-proof-systems\n\n90% Nescience WIP done – to be reviewed carefully since no other follow up documents were giiven to me\n50% FHE review - needs to be refined and summarized\nfinished SuperNova writeup ( SuperNova-research-document-8deab397f8fe413fa3a1ef3aa5669f37 )\nresearched starky\n80% Halo2 notes ( halo2-fb8d7d0b857f43af9eb9f01c44e76fb9 )\n\n\nvac:zkvm::vac:proof-system-benchmarks\n\nMore discoveries on benchmarks done on ZK-snarks and ZK-starks but all are high level\nViewed some circuits on Nova and Poseidon\nRead through Halo2 code (and Poseidon code) from Axiom\n\n\n\n\nvip::RLNP2P\n\nvac:acz:rlnp2p:waku:production-readiness\n\nWaku rln contract registry - 3\nmark duplicated messages as spam - 1867\nuse waku-org/waku-rln-contract as a submodule in nwaku - 1884\n\n\nvac:acz:zerokit:vac:maintenance\n\nFixed atomic_operation ffi edge case error - 195\ndocs cleanup - 196\nfixed version tags - 194\nreleased zerokit v0.3.1 - 198\nmarked all functions as virtual in rln-contract for inheritors - a092b934a6293203abbd4b9e3412db23ff59877e\nmake nwaku use zerokit v0.3.1 - 1886\nrlnp2p implementers draft - rln-impl-w-waku\n\n\nvac:acz:zerokit:vac:zerokit-v0.4\n\nzerokit v0.4.0 release planning - 197\n\n\n\n\nvc::Deep Research\n\nvac:dr:valpriv:vac:tor-push-poc\n\nredesigned the torpush integration in nimbus 2\n\n\nvac:dr:valpriv:vac:tor-push-relwork\n\nAddressed further comments in paper, improved intro, added source level variation approach\n\n\nvac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report\n\ncont’ work on the document\n\n\n\n\n"},"roadmap/vac/updates/2023-08-14":{"title":"2023-08-17 Vac weekly","links":[],"tags":["vac-updates"],"content":"Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\nVac week 33 August 14th §\n\nvsu::P2P §\nvac:p2p:nim-libp2p:vac:maintenance §\n\nImprove gossipsub DDoS resistance 920\ndelivered: Perf protocol 925\ndelivered: Test-plans for the perf protocol perf-nim\nBandwidth estimate as a parameter (waiting for final review) 941\n\nvac:p2p:nim-chronos:vac:maintenance §\n\ndelivered: Remove hard-coded ports from test 429\ndelivered: fixed flaky test using REUSE_PORT 438\n\n\nvsu::Tokenomics §\n\nadmin/misc:\n\n(5 CC days off)\n\n\n\nvac:tke::codex:economic-analysis §\n\nFilecoin economic structure and Codex token 
requirements\n\nvac:tke::status:SNT-staking §\n\ntests with the contracts\n\nvac:tke::nomos:economic-analysis §\n\nresume discussions with Nomos team\n\n\nvsu::Distributed Systems Testing (DST) §\nvac:dst:wakurtosis:waku:techreport §\n\n1st Draft of Wakurtosis Research Blog (123)\nData Process / Analysis of Non-Discv5 K13 Simulations (Wakurtosis Tech Report v2.5)\n\nvac:dst:shadow:vac:basic-shadow-simulation §\n\nBasic Shadow Simulation of a gossipsub node (Setup, 5 nodes)\n\nvac:dst:10ksim:vac:10ksim-bandwidth-test §\n\nTry and plan on how to refactor/generalize testing tool from Codex.\nLearn more about Kubernetes\n\nvac:dst:wakurtosis:nomos:ci-integration_02 §\n\nEnable subnetworks\nPlan how to use wakurtosis with fixed version\n\nvac:dst:eng:vac:bundle-simulation-data §\n\nRun requested simulations\n\n\nvsu:Smart Contracts (SC) §\nvac:sc::vac:secureum-upskilling §\n\nLearned about\n\ncold vs warm storage reads and their gas implications\nUTXO vs account models\nDELEGATECALL vs CALLCODE opcodes, CREATE vs CREATE2 opcodes; Yul Assembly\nUnstructured proxies eip-1967\nC3 Linearization 2694) (Diamond inheritance and resolution)\n\n\nUniswap deep dive\nFinished Secureum slot 2 and 3\n\nvac:sc::vac:maintainance/misc §\n\nIntroduced Vac’s own foundry-template for smart contract projects\n\nGoal is to have the same project structure across projects\nGitHub repository: foundry-template\n\n\n\n\nvsu:Applied Cryptography & ZK (ACZ) §\n\nvac:acz:zerokit:vac:maintenance\n\nPR reviews 200, 201\n\n\n\n\nvip::zkVM §\nvac:zkvm::vac:research-existing-proof-systems §\n\ndelivered Nescience WIP doc\ndelivered FHE review\ndelivered Nova vs Sangria done - Some discussions during the meeting\nstarted HyperNova writeup\nstarted writing a trimmed version of FHE writeup\nresearched CCS (for HyperNova)\nResearch Protogalaxy 1106 and Protostar 620.\n\nvac:zkvm::vac:proof-system-benchmarks §\n\nMore work on benchmarks is ongoing\nPutting down a document that explains the differences\n\n\nvc::Deep Research §\nvac:dr:valpriv:vac:tor-push-poc §\n\nrevised the code for PR\n\nvac:dr:valpriv:vac:tor-push-relwork §\n\nadded section for mixnet, non-Tor/non-onion routing-based anonymity network\n\nvac:dr:gsub-scaling:vac:gossipsub-simulation §\n\nUsed shadow simulator to run first GossipSub simulation\n\nvac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report §\n\nFinalized 1st draft of the GossipSub scaling article\n"},"roadmap/vac/updates/2023-08-21":{"title":"2023-08-21 Vac weekly","links":[],"tags":["vac-updates"],"content":"Vac Milestones: Vac-Roadmap-907df7eeac464143b00c6f49a20bb632\nVac Github Repos: Vac-Repositories-75f7feb3861048f897f0fe95ead08b06\nVac week 34 August 21st §\nvsu::P2P §\n\nvac:p2p:nim-libp2p:vac:maintenance\n\nTest-plans for the perf protocol (99%: need to find why the executable doesn’t work) 262\nWebRTC: Merge all protocols (60%: slowed down by some complications and bad planning with Mbed-TLS) 3\nWebRTC: DataChannel (25%)\n\n\n\nvsu::Tokenomics §\n\nadmin/misc:\n\n(3 CC days off)\n\n\nvac:tke::codex:economic-analysis\n\nCall w/ Codex on token incentives, business analysis of Filecoin\n\n\nvac:tke::status:SNT-staking\n\nBug fixes for tests for the contracts\n\n\nvac:tke::nomos:economic-analysis\n\nNarrowed focus to: 1) quantifying bribery attacks, 2) assessing how to min risks and max privacy of delegated staking\n\n\nvac:tke::waku:economic-analysis\n\nCaught up w/ Waku team on RLN, adopting a proactive effort to pitch them solutions\n\n\n\nvsu::Distributed Systems Testing (DST) 
§\n\nvac:dst:wakurtosis:vac:rlog\n\nPushed second draft and figures (DST-Wakurtosis)\n\n\nvac:dst:shadow:vac:basic-shadow-simulation\n\nRun 10K simulation of basic gossipsub node\n\n\nvac:dst:gsub-model:status:control-messages\n\nGot access to status superset\n\n\nvac:dst:analysis:nomos:nomos-simulation-analysis\n\nBasic CLI done, json to csv, can handle 10k nodes\n\n\nvac:dst:wakurtosis:waku:topology-analysis\n\nCollection + analysis: now supports all waku protocols, along with relay\nCannot get gossip-sub peerage from waku or prometheus (working on getting info from gossipsub layer)\n\n\nvac:dst:wakurtosis:waku:techreport_02\n\nMerged 4 pending PRs; master now supports regular graphs\n\n\nvac:dst:eng:vac:bundle-simulation-data\n\nRun 1 and 10 rate simulations. 100 still being run\n\n\nvac:dst:10ksim:vac:10ksim-bandwidth-test\n\nWorking on splitting the structure of the codex tool; working on diagrams also\n\n\n\nvsu:Smart Contracts (SC) §\n\nvac:sc::status:community-contracts-ERC721\n\ndelivered (will need maintenance and adding features as requested in the future)\n\n\nvac:sc::status:community-contracts-ERC20\n\nstarted working on ERC20 contracts\n\n\nvac:sc::vac:secureum-upskilling\n\nSecureum: Finished Epoch 0, Slot 4 and 5\nDeep dive on First Depositor/Inflation attacks\nLearned about Minimal Proxy Contract pattern\nMore Uniswap V2 protocol reading\n\n\nvac:sc::vac:maintainance/misc\n\nWorked on moving community dapp contracts to new foundry-template\n\n\n\nvsu:Applied Cryptography & ZK (ACZ) §\n\nvac:acz:rlnp2p:waku:rln-relay-enhancments\n\nrpc handler for waku rln relay - 1852\nfixed ganache’s change in method to manage subprocesses, fixed timeouts related to it - 1913\nshould error out on rln-relay mount failure - 1904\nfixed invalid start index being used in rln-relay - 1915\nconstrain the values that can be used as idCommitments in the rln-contract - 26\nassist with waku-simulator testing\nremove registration capabilities from nwaku, it should be done out of band - 1916\nadd deployedBlockNumber to the rln-contract for ease of fetching events from the client - 27\n\n\nvac:acz:zerokit:vac:maintenance\n\nexposed seq_atomic_operation ffi api to allow users to make use of the current index without making multiple ffi calls - 206\nuse pmtree instead of vacp2p_pmtree now that changes have been upstreamed - 203\nPrepared a PR to fix a stopgap introduced by PR 201 - 207\nPR review 202, 206\n\n\nvac:acz:zerokit:vac:zerokit-v0.4\n\nsubstitute id_commitments for rate_commitments and update tests in rln-v2 - 205\nrln-v2 working branch - 204\nmisc research while ooo:\nstealth commitment scheme inspired by erc-5564 - erc-5564-bn254, associated circuit - circom-rln-erc5564 (very heavy on the constraints)\n\n\n\nvip::zkVM §\n\nvac:zkvm::vac:research-existing-proof-systems\n\nUpdated the Nova questions document (zkVM-cd358fe429b14fa2ab38ca42835a8451 -> Projects -> Nova_Research_Answers.pdf)\nResearched ProtoStar and Nova alternatives\n\n\nvac:zkvm::vac:proof-system-benchmarks\n\nDrafted the Nova Benchmarks document (zkVM-cd358fe429b14fa2ab38ca42835a8451 -> Projects -> Benchmarks.pdf)\nResearched hash functions\nResearched benchmarks\n\n\n\nvc::Deep Research §\n\nvac:dr:valpriv:vac:tor-push-poc\n\nReimplemented torpush without any gossip sharing\nAdded discovering peers for torpush in every epoch/10 minutes\ntorswitch directly pushes messages to separately identified peers\n\n\nvac:dr:valpriv:vac:tor-push-relwork\n\nadded quantified measures related to privacy in the paper 
section\n\n\nvac:dr:gsub-scaling:vac:gossipsub-improvements-tech-report\n\nExplored different unstructured p2p application architectures\nStudied literature on better bandwidth utilization in unstructured p2p networks.\n\n\nvac:dr:gsub-scaling:vac:gossipsub-simulation\n\nWorked on GossipSub simulation in shadow simulator. Tried understanding different libp2p functions\nCreated short awk scripts for analyzing results.\n\n\nvac:dr:consensus:nomos:carnot-bribery-article\n\nContinue work on the article on bribery attacks, PoS and Carnot\nCompleted presentation about the bribery attacks and Carnot\n\n\nvac:dr:consensus:nomos:carnot-paper\n\nDiscussed Carnot tests and results with Nomos team. Some adjustments to the parameters needed to be made to get accurate results.\n\n\n"},"roadmap/waku/updates/2023-07-24":{"title":"2023-07-24 Waku weekly","links":[],"tags":["waku-updates"],"content":"Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones.\n\nDocs §\nMilestone: Foundation for Waku docs (done) §\nachieved: §\n\noverall layout\nconcept docs\ncommunity/showcase pages\n\nMilestone: Foundation for node operator docs (done) §\nachieved: §\n\nnodes overview page\nguide for running nwaku (binaries, source, docker)\npeer discovery config guide\nreference docs for config methods and options\n\nMilestone: Foundation for js-waku docs §\nachieved: §\n\njs-waku overview + installation guide\nlightpush + filter guide\nstore guide\n@waku/create-app guide\n\nnext: §\n\nimprove @waku/react guide\n\nblocker: §\n\npolyfills issue with js-waku\n\nMilestone: Docs general improvement/incorporating feedback (continuous) §\nMilestone: Running nwaku in the cloud §\nMilestone: Add Waku guide to learnweb3.io §\nMilestone: Encryption docs for js-waku §\nMilestone: Advanced node operator doc (postgres, WSS, monitoring, common config) §\nMilestone: Foundation for go-waku docs §\nMilestone: Foundation for rust-waku-bindings docs §\nMilestone: Waku architecture docs §\nMilestone: Waku detailed roadmap and milestones §\nMilestone: Explain RLN §\n\nEco Dev (WIP) §\nMilestone: EthCC Logos side event organisation (done) §\nMilestone: Community Growth §\nachieved: §\n\nWrote several bounties, improved template; set up onboarding flow in Discord.\n\nnext: §\n\nReview template, publish on GitHub\n\nMilestone: Business Development (continuous) §\nachieved: §\n\nDiscussions with various leads in EthCC\n\nnext: §\n\nBooking calls with said leads\n\nMilestone: Setting Up Content Strategy for Waku §\nachieved: §\n\nDiscussions with Comms Hubs re Waku Blog\nexpressed needs and intent around future blog post and needed amplification\ndiscuss strategies to onboard/involve non-dev and potential CTAs.\n\nMilestone: Web3Conf (dates) §\nMilestone: DeCompute conf §\n\nResearch (WIP) §\nMilestone: Autosharding v1 §\nachieved: §\n\nrendezvous hashing\nweighting function\nupdated LIGHTPUSH to handle autosharding\n\nnext: §\n\nupdate FILTER & STORE for autosharding\n\n\nnwaku (WIP) §\nMilestone: Postgres integration. 
§\nachieved: §\n\nnwaku can store messages in a Postgres database\nwe started to perform stress tests\n\nnext: §\n\nAnalyse why some messages are not stored during stress tests happened in both sqlite and Postgres, so maybe the issue isn’t directly related to store.\n\nMilestone: nwaku as a library (C-bindings) §\nachieved: §\n\nThe integration is in progress through N-API framework\n\nnext: §\n\nMake the nodejs to properly work by running the nwaku node in a separate thread.\n\n\ngo-waku (WIP) §\n\njs-waku (WIP) §\nMilestone: Peer management §\n_achieved: §\n\nspec test for connection manager\n\nMilestone: Peer Exchange §\nMilestone: Static Sharding §\nnext: §\n\nstart implementation of static sharding in js-waku\n\nMilestone: Developer Experience §\nachieved: §\n\njs-lip2p upgrade to remove usage of polyfills (draft PR)\n\nnext: §\n\nmerge and release js-libp2p upgrade\n\nMilestone: Waku Relay in the Browser §\n"},"roadmap/waku/updates/2023-07-31":{"title":"2023-07-31 Waku weekly","links":[],"tags":["waku-updates"],"content":"Docs §\nMilestone: Docs general improvement/incorporating feedback (continuous) §\nnext: §\n\nrewrite docs in British English\n\nMilestone: Running nwaku in the cloud §\nnext: §\n\npublish guides for Digital Ocean, Oracle, Fly.io\n\n\nEco Dev (WIP) §\n\nResearch §\nMilestone: Detailed network requirements and task breakdown §\nachieved: §\n\ngathering rough network requirements\n\nnext: §\n\ndetailed task breakdown per milestone and effort allocation\n\nMilestone: Autosharding v1 §\nachieved: §\n\nupdate FILTER & STORE for autosharding\n\nnext: §\n\nRFC review & updates\ncode review & updates\n\n\nnwaku §\nMilestone: nwaku release process automation §\nnext: §\n\nsetup automation to test/simulate current master to prevent/limit regressions\nexpand target architectures and platforms for release artifacts (e.g. arm64, Win…)\n\nMilestone: HTTP Rest API for protocols §\nnext: §\n\nFilter API added\ntests to complete.\n\n\ngo-waku §\nMilestone: Increase Maintability Score. Refer to CodeClimate report §\nnext: §\n\ndefine scope on which issues reported by CodeClimate should be fixed. Initially it should be limited to reduce code complexity and duplication.\n\nMilestone: RLN updates, refer issue. §\nachieved:\n\nexpose set_tree, key_gen, seeded_key_gen, extended_seeded_keygen, recover_id_secret, set_leaf, init_tree_with_leaves, set_metadata, get_metadata and get_leaf\ncreated an example on how to use RLN with go-waku\nservice node can pass in index to keystore credentials and can verify proofs based on bandwidth usage\n\nnext: §\n\nmerkle tree batch operations (in progress)\nusage of persisted merkle tree db\n\nMilestone: Improve test coverage for functional tests of all protocols. 
Refer to [CodeClimate report] §\nnext: §\n\ndefine scope on which code sections should be covered by tests\n\nMilestone: C-Bindings §\nnext: §\n\nupdate API to match nwaku’s (by using callbacks instead of strings that require freeing)\n\n\njs-waku §\nMilestone: Peer management §\nachieved: §\n\nextend ConnectionManager with EventEmitter and dispatch peers tagged with their discovery + make it public on the Waku interface\n\nnext: §\n\nfallback improvement for peer connect rejection\n\nMilestone: Peer Exchange §\nnext: §\n\nmore robust support around peer-exchange for examples\n\nMilestone: Static Sharding §\nachieved: §\n\nWIP implementation of static sharding in js-waku\n\nnext: §\n\ninvestigation around gauging connection loss;\n\nMilestone: Developer Experience §\nachieved: §\n\nimprove & update @waku/react\nmerge and release js-libp2p upgrade\n\nnext: §\n\nupdate examples to latest release + make sure no old/unused packages there\n\nMilestone: Maintenance §\nachieved: §\n\nupdate to libp2p@0.46.0\n\nnext: §\n\nsuite of optional tests in pipeline\n\n"},"roadmap/waku/updates/2023-08-06":{"title":"2023-08-06 Waku weekly","links":[],"tags":["waku-updates"],"content":"Milestones for current work are created and used. Next steps are:\n\nRefine scope of research work for rest of the year and create matching milestones for research and waku clients\nReview work not coming from research and setting dates\nNote that format matches the Notion page but can be changed easily as it’s scripted\n\nnwaku §\nRelease Process Improvements {E:2023-qa}\n\nachieved: fixed a bug in release CI workflow, enhanced the CI workflow to build and push a docker image on each PR to make simulations per PR more feasible\nnext: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\nblocker:\n\nPostgreSQL {E:2023-10k-users}\n\nachieved: Docker compose with nwaku + postgres + prometheus + grafana + postgres_exporter 3\nnext: Carry on with stress testing\n\nAutosharding v1 {E:2023-1mil-users}\n\nachieved: feedback/update cycles for FILTER & LIGHTPUSH\nnext: New fleet, updating ENR from live subscriptions and merging\nblocker: Architecturally it seems difficult to send the info to Discv5 from JSONRPC for the Waku app.\n\nMove Waku v1 and Waku-Bridge to new repos {E:2023-qa}\n\nachieved: Removed v1 and wakubridge code from nwaku repo\nnext: Remove references to v2 from nwaku directory structure and documents\n\nnwaku c-bindings {E:2023-many-platforms}\n\nachieved:\n\nMoved the Waku execution into a secondary working thread. Essential for NodeJs.\nAdapted the NodeJs example to use the libwaku with the working-thread approach. The example had been receiving relay messages during a weekend. The memory was stable without crashing.\n\n\nnext: start applying the thread-safety recommendations 1878\n\nHTTP REST API: Store, Filter, Lightpush, Admin and Private APIs {E:2023-many-platforms}\n\nachieved: Legacy Filter - v1 - interface Rest Api support added.\nnext: Extend Rest Api interface for new v2 filter. 
Get v2 filter service supported from node.\n\n\njs-waku §\nPeer Exchange is supported and used by default {E:2023-light-protocols}\n\nachieved: robustness around peer-exchange, and highlight discovery vs connections for PX on the web-chat example\nnext: saving successfully connected PX peers to local storage for easier connections on reload\n\nWaku Relay scalability in the Browser {NO EPIC}\n\nachieved: draft of direct browser-browser RTC example 260\nnext: improve the example (connection re-usage), work on contentTopic based RTC example\n\n\ngo-waku §\nC-Bindings Improvement: Callbacks and Duplications {E:2023-many-platforms}\n\nachieved: updated c-bindings to use callbacks\nnext: refactor v1 encoding functions and update RFC\n\nImprove Test Coverage {E:2023-qa}\n\nachieved: Enabled -race flag and ran all unit tests to identify data races.\nnext: Fix issues reported by the data race detector tool\n\nRLN: Post-Testnet3 Improvements {E:2023-rln}\n\nachieved: use zerokit batch insert/delete for members, exposed function to retrieve data from merkle tree, modified zerokit and go-zerokit-rln to pass merkle tree persistance configuration settings\nnext: resume onchain sync from persisted tree db\n\nIntroduce Peer Management {E:2023-peer-mgmt}\n\nachieved: Basic peer management to ensure standard in/out ratio for relay peers.\nnext: add service slots to peer manager\n\n\nEco Dev §\nAug 2023 {E:2023-eco-growth}\n\nachieved: production of swags and marketing collaterals for web3conf completed\nnext: web3conf talk and side event production. various calls with commshub for preparing marketing collaterals.\n\n\nDocs §\nAdvanced docs for js-waku {E:2023-eco-growth}\n\nnext: create guide on @waku/react and debugging js-waku web apps\n\nincorporating feedback (2023) {E:2023-eco-growth}\n\nachieved: rewrote the docs in UK English\nnext: update docs terms, announce js-waku docs\n\nFoundation of js-waku docs {E:2023-eco-growth}\nachieved: added guide on js-waku bootstrapping\n\nResearch §\n1.1 Network requirements and task breakdown {E:2023-1mil-users}\n\nachieved: Setup project management tools; determined number of shards to 8; some conversations on RLN memberships\nnext: Breakdown and assign tasks under each milestone for the 1 million users/public Waku Network epic.\n\n"},"roadmap/waku/updates/2023-08-14":{"title":"2023-08-14 Waku weekly","links":[],"tags":["waku-updates"],"content":"2023-08-14 Waku weekly §\n\nEpics §\nWaku Network Can Support 10K Users {E:2023-10k-users}\nAll software has been delivered. 
Pending items are:\n\nRunning stress testing on PostgreSQL to confirm performance gain 1894\nSetting up a staging fleet for Status to try static sharding\nRunning simulations for Store protocol: commitment and probably move this to 1mil epic\n\n\nEco Dev §\nAug 2023 {E:2023-eco-growth}\n\nachieved: web3conf talk, swags, 2 side events, twitter promotions, requested for marketing collateral to commshub\nnext: complete waku metrics, coordinate events with Lou, ethsafari planning, muchangmai planning\nblocker: was blocked on infra for hosting nextjs app for waku metrics but migrating to SSR and hosting on vercel\n\n\nDocs §\nAdvanced docs for js-waku\n\nnext: document notes/recommendations for NodeJS, begin docs on js-waku encryption\n\n\nnwaku §\nRelease Process Improvements {E:2023-qa}\n\nachieved: minor CI fixes and improvements\nnext: document how to run PR built images in waku-simulator, adding Linux arm64 binaries and images\n\nPostgreSQL {E:2023-10k-users}\n\nachieved: Learned that the insertion rate is constrained by the relay protocol. i.e. the maximum insert rate is limited by relay so I couldn’t push the “insert” operation to a limit from a Postgres point of view. For example, if 25 clients publish messages concurrently, and each client publishes 300 msgs, all the messages are correctly stored. If repeating the same operation but with 50 clients, then many messages are lost because the relay protocol doesn’t process all of them.\nnext: Carry on with stress testing. Analyze the performance differences between Postgres and SQLite regarding the read operations.\n\nAutosharding v1 {E:2023-1mil-users}\n\nachieved: many feedback/update cycles for FILTER, LIGHTPUSH, STORE & RFC\nnext: updating ENR for live subscriptions\n\nHTTP REST API: Store, Filter, Lightpush, Admin and Private APIs {E:2023-many-platforms}\n\nachieved: Legacy Filter - v1 - interface Rest Api support added.\nnext: Extend Rest Api interface for new v2 filter. Get v2 filter service supported from node. 
Add more tests.\n\n\njs-waku §\nMaintenance {E:2023-qa}\n\nachieved: upgrade libp2p & chainsafe deps to libp2p 0.46.3 while removing deprecated libp2p standalone interface packages (new breaking change libp2p w/ other deps), add tsdoc for referenced types, setting up/fixing prettier/eslint conflict\n\nDeveloper Experience (2023) {E:2023-eco-growth}\n\nachieved: non blocking pipeline step (1411)\n\nPeer Exchange is supported and used by default {E:2023-light-protocols}\n\nachieved: close the “fallback mechanism for peer rejections”, refactor peer-exchange compliance test\nnext: peer-exchange to be included with default discovery, action peer-exchange browser feedback\n\n\ngo-waku §\nMaintenance {E:2023-qa}\n\nachieved: improved keep alive logic for identifying if machine is waking up; added vacuum feature to sqlite and postgresql; made migrations optional; refactored db and migration code, extracted code to generate node key to its own separate subcommand\n\nC-Bindings Improvement: Callbacks and Duplications {E:2023-many-platforms}\n\nachieved: PR for updating the RFC to use callbacks, and refactored the encoding functions\n\nImprove Test Coverage {E:2023-qa}\n\nachieved: Fixed issues reported by the data race detector tool.\nnext: identify areas where test coverage needs improvement.\n\nRLN: Post-Testnet3 Improvements {E:2023-rln}\n\nachieved: exposed merkle tree configuration, removed embedded resources from go-zerokit-rln, fixed nwaku / go-waku rlnKeystore compatibility, added merkle tree persistence and modified zerokit to print to stderr any error obtained while executing functions via FFI.\nnext: interop with nwaku\n\nIntroduce Peer Management {E:2023-peer-mgmt}\n\nachieved: add service slots to peer manager.\nnext: implement relay connectivity loop, integrate gossipsub scoring for peer disconnections\n\n"}} \ No newline at end of file diff --git a/static/icon.png b/static/icon.png new file mode 100644 index 000000000..b6656a7a8 Binary files /dev/null and b/static/icon.png differ diff --git a/static/og-image.png b/static/og-image.png new file mode 100644 index 000000000..f1321455b Binary files /dev/null and b/static/og-image.png differ diff --git a/styles.7fdbd93987bfba941d84b8a4050caaba.min.css b/styles.7fdbd93987bfba941d84b8a4050caaba.min.css deleted file mode 100644 index 64b17bd54..000000000 --- a/styles.7fdbd93987bfba941d84b8a4050caaba.min.css +++ /dev/null @@ -1 +0,0 @@ -@import "https://fonts.googleapis.com/css2?family=Fira+Code:wght@400;700&family=Inter:wght@400;600;700&family=Source+Sans+Pro:wght@400;600&display=swap";:root{--font-body:"Source Sans Pro";--font-header:"Inter";--font-mono:"Fira Code"}html{scroll-behavior:smooth}html:lang(ar) p,html:lang(ar) h1,html:lang(ar) h2,html:lang(ar) h3,html:lang(ar) article{direction:rtl;text-align:right}.singlePage{padding:4em 30vw}@media all and (max-width:1200px){.singlePage{padding:25px 5vw}}body{margin:0;height:100vh;width:100vw;max-width:100%;box-sizing:border-box;background-color:var(--light)}h1,h2,h3,h4,h5,h6,thead{font-family:var(--font-header);color:var(--dark);font-weight:revert;margin:2rem 0 0;padding:2rem auto 1rem}h1:hover>.hanchor,h2:hover>.hanchor,h3:hover>.hanchor,h4:hover>.hanchor,h5:hover>.hanchor,h6:hover>.hanchor,thead:hover>.hanchor{color:var(--secondary)}.hanchor{font-family:var(--font-header);opacity:.8;transition:color .3s 
ease;color:var(--dark)}p,ul,text,a,tr,td,li,ol,ul{font-family:var(--font-body);color:var(--gray);fill:var(--gray);font-weight:revert;margin:revert;padding:revert}tbody,li,p{line-height:1.5em}.mainTOC{border-radius:5px;padding:.75em 0}.mainTOC details summary{cursor:zoom-in;font-family:var(--font-header);color:var(--dark);font-weight:700}.mainTOC details[open] summary{cursor:zoom-out}#TableOfContents>ol{counter-reset:section;margin-left:0;padding-left:1.5em}#TableOfContents>ol>li{counter-increment:section}#TableOfContents>ol>li>ol{counter-reset:subsection}#TableOfContents>ol>li>ol>li{counter-increment:subsection}#TableOfContents>ol>li>ol>li::marker{content:counter(section)"." counter(subsection)" "}#TableOfContents>ol>li::marker{content:counter(section)" "}#TableOfContents>ol>li::marker,#TableOfContents>ol>li>ol>li::marker{font-family:var(--font-body);font-weight:700}table{border:1px solid var(--outlinegray);width:100%;padding:1.5em;border-collapse:collapse}td,th{padding:.2em 1em;border:1px solid var(--outlinegray)}img{max-width:100%;border-radius:3px;margin:1em 0}p>img+em{display:block;transform:translateY(-1em)}sup{line-height:0}blockquote{margin-left:0;border-left:3px solid var(--secondary);padding-left:1em;transition:border-color .2s ease}.footnotes p{margin:.5em 0}.pagination{list-style:none;padding-left:0;display:flex;margin-top:2em;gap:1.5em;justify-content:center}.pagination .disabled{opacity:.2}.pagination>li{text-align:center;display:inline-block}.pagination>li a{background-color:transparent!important}.pagination>li a[href$="#"],.pagination>li.active a{opacity:.2}article>h1{margin-top:2em;font-size:2em}article>.meta{margin:0 0 1em;opacity:.7}article a{font-weight:600}article a.internal-link{text-decoration:none;background-color:rgba(143,159,169,.15);padding:0 .1em;margin:auto -.1em;border-radius:3px}article a.internal-link.broken{opacity:.5;background-color:transparent}article p{overflow-wrap:anywhere}.tags{list-style:none;padding-left:0}.tags .meta{margin:1.5em 0}.tags .meta>h1{margin:0}.tags .meta>p{margin:0}.tags>li{display:inline-block;margin:.4em 0}.tags>li>a{border-radius:8px;border:var(--outlinegray)1px solid;padding:.2em .5em}.tags>li>a::before{content:"#";margin-right:.3em;color:var(--outlinegray)}.backlinks a{font-weight:600;font-size:.9rem}sup>a{text-decoration:none;padding:0 .1em 0 .2em}#page-title{margin:0}#page-title>a{font-family:var(--font-header)}a{font-size:1em;font-weight:700;text-decoration:none;transition:all .2s ease;color:var(--secondary)}a:hover{color:var(--tertiary)!important}pre{font-family:var(--font-mono);padding:.75em;border-radius:3px;overflow-x:scroll}code{font-family:var(--font-mono);font-size:.85em;padding:.15em .3em;border-radius:5px;background:var(--lightgray)}@keyframes fadeIn{0%{opacity:0}100%{opacity:1}}footer{margin-top:4em;text-align:center}footer ul{padding-left:0}hr{width:25%;margin:4em auto;height:2px;border-radius:1px;border-width:0;color:var(--dark);background-color:var(--dark)}.page-end{display:flex;flex-direction:row;gap:2em}@media all and (max-width:780px){.page-end{flex-direction:column}}.page-end>*{flex:1 0}.page-end>.backlinks-container>ul{list-style:none;padding:0;margin:0}.page-end>.backlinks-container>ul>li{margin:.5em 0;padding:.25em 1em;border:var(--outlinegray)1px solid;border-radius:5px}.page-end #graph-container{border:var(--outlinegray)1px solid;border-radius:5px;box-sizing:border-box;min-height:250px;margin:.5em 0}.page-end 
#graph-container>svg{margin-bottom:-5px}.centered{margin-top:30vh}.spacer{flex:auto}header{display:flex;flex-direction:row;align-items:center;margin:1em 0 2em}header>h1{font-size:2em}@media all and (max-width:600px){header>nav{display:none}}header #search-icon{background-color:var(--lightgray);border-radius:4px;height:2em;display:flex;align-items:center;cursor:pointer}header #search-icon>p{display:inline;padding:0 1.5em 0 2em}header svg{cursor:pointer;width:18px;min-width:18px;margin:0 .5em}header svg:hover .search-path{stroke:var(--tertiary)}header svg .search-path{stroke:var(--gray);stroke-width:2px;transition:stroke .5s ease}#search-container{position:fixed;z-index:9999;left:0;top:0;width:100vw;height:100%;overflow:scroll;display:none;backdrop-filter:blur(4px);-webkit-backdrop-filter:blur(4px)}#search-container>div{width:50%;margin-top:15vh;margin-left:auto;margin-right:auto}@media all and (max-width:1200px){#search-container>div{width:90%}}#search-container>div>*{width:100%;border-radius:4px;background:var(--light);box-shadow:0 14px 50px rgba(27,33,48,.12),0 10px 30px rgba(27,33,48,.16);margin-bottom:2em}#search-container>div>input{box-sizing:border-box;padding:.5em 1em;font-family:var(--font-body);color:var(--dark);font-size:1.1em;border:1px solid var(--outlinegray)}#search-container>div>input:focus{outline:none}#search-container>div>#results-container .result-card{padding:1em;cursor:pointer;transition:background .2s ease;border:1px solid var(--outlinegray);border-bottom:none;width:100%;font-family:inherit;font-size:100%;line-height:1.15;margin:0;overflow:visible;text-transform:none;text-align:left;background:var(--light);outline:none}#search-container>div>#results-container .result-card:hover,#search-container>div>#results-container .result-card:focus{background:rgba(180,180,180,.15)}#search-container>div>#results-container .result-card:first-of-type{border-top-left-radius:5px;border-top-right-radius:5px}#search-container>div>#results-container .result-card:last-of-type{border-bottom-left-radius:5px;border-bottom-right-radius:5px;border-bottom:1px solid var(--outlinegray)}#search-container>div>#results-container .result-card>h3,#search-container>div>#results-container .result-card>p{margin:0}.search-highlight{background-color:#afbfc966;padding:.05em .2em;border-radius:3px}.section-ul{list-style:none;margin-top:2em;padding-left:0}.section-li{margin-bottom:1em}.section-li>.section{display:flex;align-items:center}@media all and (max-width:600px){.section-li>.section .tags{display:none}}.section-li>.section h3>a{font-weight:700;margin:0}.section-li>.section p{margin:0;padding-right:1em;flex-basis:6em}.section-li h3{opacity:1;font-weight:700;margin:0}.section-li .meta{opacity:.6}@keyframes dropin{0%{display:none;opacity:0;visibility:hidden}1%{display:inline-block;opacity:0}100%{opacity:1;visibility:visible}}.popover{z-index:999;position:absolute;width:20rem;display:none;background-color:var(--light);padding:1rem;margin:1rem;border:1px solid var(--outlinegray);border-radius:5px;pointer-events:none;transition:opacity .2s ease,transform .2s ease;user-select:none;overflow-wrap:anywhere;box-shadow:6px 6px 36px rgba(0,0,0,.25)}@media all and (max-width:600px){.popover{display:none!important}}.popover.visible{opacity:1;visibility:visible;display:inline-block;animation:dropin .2s ease}.popover>h3{font-size:1rem;margin:.25rem 0}.popover>.meta{margin-top:.25rem;opacity:.5;font-family:var(--font-mono);font-size:.8rem}.popover>p{margin:0;padding:.5rem 
0}.popover>p,.popover>a{font-size:1rem;font-weight:400;user-select:none}#contact_buttons ul{list-style-type:none}#contact_buttons ul li{display:inline-block}#contact_buttons ul li a{padding:0 1em}.clipboard-button{position:absolute;display:flex;float:right;right:0;padding:.69em;margin:.5em;color:var(--outlinegray);border-color:var(--dark);background-color:var(--lightgray);filter:contrast(1.1);border:2px solid;border-radius:6px;font-size:.8em;z-index:1;opacity:0;transition:.12s}.clipboard-button>svg{fill:var(--light);filter:contrast(.3)}.clipboard-button:hover{cursor:pointer;border-color:var(--primary)}.clipboard-button:hover>svg{fill:var(--primary)}.clipboard-button:focus{outline:0}.highlight{position:relative}.highlight:hover>.clipboard-button{opacity:1;transition:.2s}.code-title{color:var(--primary);font-family:var(--font-mono);width:max-content;overflow-x:auto;display:inline-block;vertical-align:middle;font-weight:400;line-height:1em;position:relative;padding:.5em .6em .6em;max-width:calc(100% - 1.2em);margin-bottom:-.2em;z-index:-1;border-top-left-radius:.3em;border-top-right-radius:.3em;font-size:.9em;background-color:var(--lightgray);filter:hue-rotate(-30deg)contrast(1)opacity(.8)}:root{--light:#faf8f8;--dark:#141021;--secondary:#284b63;--tertiary:#84a59d;--visited:#afbfc9;--primary:#f28482;--gray:#4e4e4e;--lightgray:#f0f0f0;--outlinegray:#dadada;--million-progress-bar-color:var(--secondary)}[saved-theme=dark]{--light:#000000 !important;--dark:#fbfffe !important;--secondary:#6b879a !important;--visited:#4a575e !important;--tertiary:#84a59d !important;--primary:#f58382 !important;--gray:#d4d4d4 !important;--lightgray:#292633 !important;--outlinegray:#343434 !important}.darkmode{float:right;padding:1em;min-width:30px;position:relative}@media all and (max-width:450px){.darkmode{padding:1em}}.darkmode>.toggle{display:none;box-sizing:border-box}.darkmode svg{opacity:0;position:absolute;width:20px;height:20px;top:calc(50% - 10px);margin:0 7px;fill:var(--gray);transition:opacity .1s ease}.toggle:checked~label>#dayIcon{opacity:0}.toggle:checked~label>#nightIcon{opacity:1}.toggle:not(:checked)~label>#dayIcon{opacity:1}.toggle:not(:checked)~label>#nightIcon{opacity:0}.chroma{overflow:hidden!important;background-color:var(--lightgray)!important}.chroma .lntable{width:auto!important;overflow:auto!important;display:block!important}.chroma .hl{display:block!important;width:100%!important}.chroma .lnt{margin-right:0!important;padding:0 0!important}.chroma .ln{margin-right:0!important;padding:0 0!important}.chroma .gd{color:#8b080b!important}.chroma .gi{font-weight:700!important}.lntd:first-of-type>.chroma{padding-right:0!important}.chroma code{font-family:var(--font-mono)!important;font-size:.85em!important;line-height:2em!important;background:0 
0!important;padding:0!important}.chroma{border-radius:3px!important;margin:0!important}pre.chroma{-moz-tab-size:4;-o-tab-size:4;tab-size:4}:root{--callout-summary:#00b0ff;--callout-summary-accent:#7fd7ff;--callout-bug:#f50057;--callout-bug-accent:#ff7aa9;--callout-danger:#ff1744;--callout-danger-accent:#ff8aa1;--callout-example:#7c4dff;--callout-example-accent:#bda5ff;--callout-fail:#ff5252;--callout-fail-accent:#ffa8a8;--callout-info:#00b8d4;--callout-info-accent:#69ebff;--callout-note:#448aff;--callout-note-accent:#a1c4ff;--callout-question:#64dd17;--callout-question-accent:#b0f286;--callout-quote:#9e9e9e;--callout-quote-accent:#cecece;--callout-done:#00c853;--callout-done-accent:#63ffa4;--callout-important:#00bfa5;--callout-important-accent:#5fffe9;--callout-warning:#ff9100;--callout-warning-accent:#ffc87f}[saved-theme=dark]{--callout-summary:#00b0ff !important;--callout-summary-accent:#00587f !important;--callout-bug:#f50057 !important;--callout-bug-accent:#7a002b !important;--callout-danger:#ff1744 !important;--callout-danger-accent:#8b001a !important;--callout-example:#7c4dff !important;--callout-example-accent:#2b00a6 !important;--callout-fail:#ff5252 !important;--callout-fail-accent:#a80000 !important;--callout-info:#00b8d4 !important;--callout-info-accent:#005c6a !important;--callout-note:#448aff !important;--callout-note-accent:#003ca1 !important;--callout-question:#64dd17 !important;--callout-question-accent:#006429 !important;--callout-quote:#9e9e9e !important;--callout-quote-accent:#4f4f4f !important;--callout-done:#00c853 !important;--callout-done-accent:#006429 !important;--callout-important:#00bfa5 !important;--callout-important-accent:#005f52 !important;--callout-warning:#ff9100 !important;--callout-warning-accent:#7f4800 !important}blockquote.callout-collapsible{cursor:pointer}blockquote.callout-collapsible.callout-collapsible::after{content:'-';right:6px;font-weight:bolder;font-family:Courier New,Courier,monospace}blockquote.callout-collapsed{padding-bottom:0!important}blockquote.callout-collapsed>p{border-bottom-right-radius:5px!important}blockquote.callout-collapsed::after{content:'+'!important}blockquote.callout-collapsed>*:not(:first-child){display:none!important}blockquote[class*=-callout]{margin-right:0;border-radius:5px;position:relative;padding-left:0!important;padding-bottom:.25em;color:var(--dark);background-color:var(--lightgray);border-left:6px solid var(--primary)!important}blockquote[class*=-callout]>p{border-top-right-radius:5px;padding:.5em 1em;margin:0;color:var(--gray)}blockquote[class*=-callout]>p:first-child{font-weight:600;color:var(--dark);padding:.4em 30px}blockquote[class*=-callout]>p:first-child::after,blockquote.callout-collapsible::after{display:inline-block;height:18px;width:18px;position:absolute;top:.4em;margin:.2em .4em}blockquote[class*=-callout]>p:first-child{font-weight:700;padding:.4em 35px}blockquote[class*=-callout]>p:first-child::after{left:0}blockquote.summary-callout{border-left:6px solid var(--callout-summary)!important}blockquote.summary-callout>p:first-child{background-color:var(--callout-summary-accent)!important}blockquote.summary-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 
24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-summary)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.abstract-callout{border-left:6px solid var(--callout-summary)!important}blockquote.abstract-callout>p:first-child{background-color:var(--callout-summary-accent)!important}blockquote.abstract-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-summary)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.tldr-callout{border-left:6px solid var(--callout-summary)!important}blockquote.tldr-callout>p:first-child{background-color:var(--callout-summary-accent)!important}blockquote.tldr-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' 
viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='book' class='svg-inline--callout-fa fa-book fa-w-14' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 448 512'%3E%3Cpath fill='currentColor' d='M448 360V24c0-13.3-10.7-24-24-24H96C43 0 0 43 0 96v320c0 53 43 96 96 96h328c13.3 0 24-10.7 24-24v-16c0-7.5-3.5-14.3-8.9-18.7-4.2-15.4-4.2-59.3 0-74.7 5.4-4.3 8.9-11.1 8.9-18.6zM128 134c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm0 64c0-3.3 2.7-6 6-6h212c3.3 0 6 2.7 6 6v20c0 3.3-2.7 6-6 6H134c-3.3 0-6-2.7-6-6v-20zm253.4 250H96c-17.7 0-32-14.3-32-32 0-17.6 14.4-32 32-32h285.4c-1.9 17.1-1.9 46.9 0 64z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-summary)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.bug-callout{border-left:6px solid var(--callout-bug)!important}blockquote.bug-callout>p:first-child{background-color:var(--callout-bug-accent)!important}blockquote.bug-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bug' class='svg-inline--callout-fa fa-bug fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M511.988 288.9c-.478 17.43-15.217 31.1-32.653 31.1H424v16c0 21.864-4.882 42.584-13.6 61.145l60.228 60.228c12.496 12.497 12.496 32.758 0 45.255-12.498 12.497-32.759 12.496-45.256 0l-54.736-54.736C345.886 467.965 314.351 480 280 480V236c0-6.627-5.373-12-12-12h-24c-6.627 0-12 5.373-12 12v244c-34.351 0-65.886-12.035-90.636-32.108l-54.736 54.736c-12.498 12.497-32.759 12.496-45.256 0-12.496-12.497-12.496-32.758 0-45.255l60.228-60.228C92.882 378.584 88 357.864 88 336v-16H32.666C15.23 320 .491 306.33.013 288.9-.484 270.816 14.028 256 32 256h56v-58.745l-46.628-46.628c-12.496-12.497-12.496-32.758 0-45.255 12.498-12.497 32.758-12.497 45.256 0L141.255 160h229.489l54.627-54.627c12.498-12.497 32.758-12.497 45.256 0 12.496 12.497 12.496 32.758 0 45.255L424 197.255V256h56c17.972 0 32.484 14.816 31.988 32.9zM257 0c-61.856 0-112 50.144-112 112h224C369 50.144 318.856 0 257 0z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bug' class='svg-inline--callout-fa fa-bug fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M511.988 288.9c-.478 17.43-15.217 31.1-32.653 31.1H424v16c0 21.864-4.882 42.584-13.6 61.145l60.228 60.228c12.496 12.497 12.496 32.758 0 45.255-12.498 12.497-32.759 12.496-45.256 0l-54.736-54.736C345.886 467.965 314.351 480 280 480V236c0-6.627-5.373-12-12-12h-24c-6.627 0-12 5.373-12 12v244c-34.351 0-65.886-12.035-90.636-32.108l-54.736 54.736c-12.498 12.497-32.759 12.496-45.256 0-12.496-12.497-12.496-32.758 0-45.255l60.228-60.228C92.882 378.584 88 357.864 88 336v-16H32.666C15.23 320 .491 306.33.013 288.9-.484 270.816 14.028 256 32 
256h56v-58.745l-46.628-46.628c-12.496-12.497-12.496-32.758 0-45.255 12.498-12.497 32.758-12.497 45.256 0L141.255 160h229.489l54.627-54.627c12.498-12.497 32.758-12.497 45.256 0 12.496 12.497 12.496 32.758 0 45.255L424 197.255V256h56c17.972 0 32.484 14.816 31.988 32.9zM257 0c-61.856 0-112 50.144-112 112h224C369 50.144 318.856 0 257 0z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-bug)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.danger-callout{border-left:6px solid var(--callout-danger)!important}blockquote.danger-callout>p:first-child{background-color:var(--callout-danger-accent)!important}blockquote.danger-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bolt' class='svg-inline--callout-fa fa-bolt fa-w-10' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 320 512'%3E%3Cpath fill='currentColor' d='M296 160H180.6l42.6-129.8C227.2 15 215.7 0 200 0H56C44 0 33.8 8.9 32.2 20.8l-32 240C-1.7 275.2 9.5 288 24 288h118.7L96.6 482.5c-3.6 15.2 8 29.5 23.3 29.5 8.4 0 16.4-4.4 20.8-12l176-304c9.3-15.9-2.2-36-20.7-36z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bolt' class='svg-inline--callout-fa fa-bolt fa-w-10' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 320 512'%3E%3Cpath fill='currentColor' d='M296 160H180.6l42.6-129.8C227.2 15 215.7 0 200 0H56C44 0 33.8 8.9 32.2 20.8l-32 240C-1.7 275.2 9.5 288 24 288h118.7L96.6 482.5c-3.6 15.2 8 29.5 23.3 29.5 8.4 0 16.4-4.4 20.8-12l176-304c9.3-15.9-2.2-36-20.7-36z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-danger)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.error-callout{border-left:6px solid var(--callout-danger)!important}blockquote.error-callout>p:first-child{background-color:var(--callout-danger-accent)!important}blockquote.error-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bolt' class='svg-inline--callout-fa fa-bolt fa-w-10' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 320 512'%3E%3Cpath fill='currentColor' d='M296 160H180.6l42.6-129.8C227.2 15 215.7 0 200 0H56C44 0 33.8 8.9 32.2 20.8l-32 240C-1.7 275.2 9.5 288 24 288h118.7L96.6 482.5c-3.6 15.2 8 29.5 23.3 29.5 8.4 0 16.4-4.4 20.8-12l176-304c9.3-15.9-2.2-36-20.7-36z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='bolt' class='svg-inline--callout-fa fa-bolt fa-w-10' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 320 512'%3E%3Cpath fill='currentColor' d='M296 160H180.6l42.6-129.8C227.2 15 215.7 0 200 0H56C44 0 33.8 8.9 32.2 20.8l-32 240C-1.7 275.2 9.5 288 24 288h118.7L96.6 482.5c-3.6 15.2 8 29.5 23.3 29.5 8.4 0 16.4-4.4 20.8-12l176-304c9.3-15.9-2.2-36-20.7-36z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-danger)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.example-callout{border-left:6px solid 
var(--callout-example)!important}blockquote.example-callout>p:first-child{background-color:var(--callout-example-accent)!important}blockquote.example-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='list-ol' class='svg-inline--callout-fa fa-list-ol fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M61.77 401l17.5-20.15a19.92 19.92 0 0 0 5.07-14.19v-3.31C84.34 356 80.5 352 73 352H16a8 8 0 0 0-8 8v16a8 8 0 0 0 8 8h22.83a157.41 157.41 0 0 0-11 12.31l-5.61 7c-4 5.07-5.25 10.13-2.8 14.88l1.05 1.93c3 5.76 6.29 7.88 12.25 7.88h4.73c10.33 0 15.94 2.44 15.94 9.09 0 4.72-4.2 8.22-14.36 8.22a41.54 41.54 0 0 1-15.47-3.12c-6.49-3.88-11.74-3.5-15.6 3.12l-5.59 9.31c-3.72 6.13-3.19 11.72 2.63 15.94 7.71 4.69 20.38 9.44 37 9.44 34.16 0 48.5-22.75 48.5-44.12-.03-14.38-9.12-29.76-28.73-34.88zM496 224H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16v-32a16 16 0 0 0-16-16zm0-160H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16V80a16 16 0 0 0-16-16zm0 320H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16v-32a16 16 0 0 0-16-16zM16 160h64a8 8 0 0 0 8-8v-16a8 8 0 0 0-8-8H64V40a8 8 0 0 0-8-8H32a8 8 0 0 0-7.14 4.42l-8 16A8 8 0 0 0 24 64h8v64H16a8 8 0 0 0-8 8v16a8 8 0 0 0 8 8zm-3.91 160H80a8 8 0 0 0 8-8v-16a8 8 0 0 0-8-8H41.32c3.29-10.29 48.34-18.68 48.34-56.44 0-29.06-25-39.56-44.47-39.56-21.36 0-33.8 10-40.46 18.75-4.37 5.59-3 10.84 2.8 15.37l8.58 6.88c5.61 4.56 11 2.47 16.12-2.44a13.44 13.44 0 0 1 9.46-3.84c3.33 0 9.28 1.56 9.28 8.75C51 248.19 0 257.31 0 304.59v4C0 316 5.08 320 12.09 320z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='list-ol' class='svg-inline--callout-fa fa-list-ol fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M61.77 401l17.5-20.15a19.92 19.92 0 0 0 5.07-14.19v-3.31C84.34 356 80.5 352 73 352H16a8 8 0 0 0-8 8v16a8 8 0 0 0 8 8h22.83a157.41 157.41 0 0 0-11 12.31l-5.61 7c-4 5.07-5.25 10.13-2.8 14.88l1.05 1.93c3 5.76 6.29 7.88 12.25 7.88h4.73c10.33 0 15.94 2.44 15.94 9.09 0 4.72-4.2 8.22-14.36 8.22a41.54 41.54 0 0 1-15.47-3.12c-6.49-3.88-11.74-3.5-15.6 3.12l-5.59 9.31c-3.72 6.13-3.19 11.72 2.63 15.94 7.71 4.69 20.38 9.44 37 9.44 34.16 0 48.5-22.75 48.5-44.12-.03-14.38-9.12-29.76-28.73-34.88zM496 224H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16v-32a16 16 0 0 0-16-16zm0-160H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16V80a16 16 0 0 0-16-16zm0 320H176a16 16 0 0 0-16 16v32a16 16 0 0 0 16 16h320a16 16 0 0 0 16-16v-32a16 16 0 0 0-16-16zM16 160h64a8 8 0 0 0 8-8v-16a8 8 0 0 0-8-8H64V40a8 8 0 0 0-8-8H32a8 8 0 0 0-7.14 4.42l-8 16A8 8 0 0 0 24 64h8v64H16a8 8 0 0 0-8 8v16a8 8 0 0 0 8 8zm-3.91 160H80a8 8 0 0 0 8-8v-16a8 8 0 0 0-8-8H41.32c3.29-10.29 48.34-18.68 48.34-56.44 0-29.06-25-39.56-44.47-39.56-21.36 0-33.8 10-40.46 18.75-4.37 5.59-3 10.84 2.8 15.37l8.58 6.88c5.61 4.56 11 2.47 16.12-2.44a13.44 13.44 0 0 1 9.46-3.84c3.33 0 9.28 1.56 9.28 8.75C51 248.19 0 257.31 0 304.59v4C0 316 5.08 320 12.09 320z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-example)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.fail-callout{border-left:6px solid 
var(--callout-fail)!important}blockquote.fail-callout>p:first-child{background-color:var(--callout-fail-accent)!important}blockquote.fail-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-fail)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.failure-callout{border-left:6px solid var(--callout-fail)!important}blockquote.failure-callout>p:first-child{background-color:var(--callout-fail-accent)!important}blockquote.failure-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-fail)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.missing-callout{border-left:6px solid 
var(--callout-fail)!important}blockquote.missing-callout>p:first-child{background-color:var(--callout-fail-accent)!important}blockquote.missing-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='times-circle' class='svg-inline--callout-fa fa-times-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119 8 8 119 8 256s111 248 248 248 248-111 248-248S393 8 256 8zm121.6 313.1c4.7 4.7 4.7 12.3 0 17L338 377.6c-4.7 4.7-12.3 4.7-17 0L256 312l-65.1 65.6c-4.7 4.7-12.3 4.7-17 0L134.4 338c-4.7-4.7-4.7-12.3 0-17l65.6-65-65.6-65.1c-4.7-4.7-4.7-12.3 0-17l39.6-39.6c4.7-4.7 12.3-4.7 17 0l65 65.7 65.1-65.6c4.7-4.7 12.3-4.7 17 0l39.6 39.6c4.7 4.7 4.7 12.3 0 17L312 256l65.6 65.1z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-fail)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.info-callout{border-left:6px solid var(--callout-info)!important}blockquote.info-callout>p:first-child{background-color:var(--callout-info-accent)!important}blockquote.info-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='info-circle' class='svg-inline--callout-fa fa-info-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196 0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627 0 12 5.373 12 12v100h12c6.627 0 12 5.373 12 12v24z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='info-circle' class='svg-inline--callout-fa fa-info-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196 0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627 0 12 5.373 12 12v100h12c6.627 0 12 5.373 12 12v24z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-info)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.todo-callout{border-left:6px solid 
var(--callout-info)!important}blockquote.todo-callout>p:first-child{background-color:var(--callout-info-accent)!important}blockquote.todo-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='info-circle' class='svg-inline--callout-fa fa-info-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196 0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627 0 12 5.373 12 12v100h12c6.627 0 12 5.373 12 12v24z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='info-circle' class='svg-inline--callout-fa fa-info-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196 0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627 0 12 5.373 12 12v100h12c6.627 0 12 5.373 12 12v24z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-info)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.note-callout{border-left:6px solid var(--callout-note)!important}blockquote.note-callout>p:first-child{background-color:var(--callout-note-accent)!important}blockquote.note-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='pencil-alt' class='svg-inline--callout-fa fa-pencil-alt fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M497.9 142.1l-46.1 46.1c-4.7 4.7-12.3 4.7-17 0l-111-111c-4.7-4.7-4.7-12.3 0-17l46.1-46.1c18.7-18.7 49.1-18.7 67.9 0l60.1 60.1c18.8 18.7 18.8 49.1 0 67.9zM284.2 99.8L21.6 362.4.4 483.9c-2.9 16.4 11.4 30.6 27.8 27.8l121.5-21.3 262.6-262.6c4.7-4.7 4.7-12.3 0-17l-111-111c-4.8-4.7-12.4-4.7-17.1 0zM124.1 339.9c-5.5-5.5-5.5-14.3 0-19.8l154-154c5.5-5.5 14.3-5.5 19.8 0s5.5 14.3 0 19.8l-154 154c-5.5 5.5-14.3 5.5-19.8 0zM88 424h48v36.3l-64.5 11.3-31.1-31.1L51.7 376H88v48z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='pencil-alt' class='svg-inline--callout-fa fa-pencil-alt fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M497.9 142.1l-46.1 46.1c-4.7 4.7-12.3 4.7-17 0l-111-111c-4.7-4.7-4.7-12.3 0-17l46.1-46.1c18.7-18.7 49.1-18.7 67.9 0l60.1 60.1c18.8 18.7 18.8 49.1 0 67.9zM284.2 99.8L21.6 362.4.4 483.9c-2.9 16.4 11.4 30.6 27.8 27.8l121.5-21.3 262.6-262.6c4.7-4.7 4.7-12.3 0-17l-111-111c-4.8-4.7-12.4-4.7-17.1 0zM124.1 339.9c-5.5-5.5-5.5-14.3 0-19.8l154-154c5.5-5.5 14.3-5.5 19.8 0s5.5 14.3 0 19.8l-154 154c-5.5 5.5-14.3 5.5-19.8 0zM88 424h48v36.3l-64.5 11.3-31.1-31.1L51.7 
376H88v48z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-note)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.question-callout{border-left:6px solid var(--callout-question)!important}blockquote.question-callout>p:first-child{background-color:var(--callout-question-accent)!important}blockquote.question-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-question)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.help-callout{border-left:6px solid var(--callout-question)!important}blockquote.help-callout>p:first-child{background-color:var(--callout-question-accent)!important}blockquote.help-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 
46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-question)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.faq-callout{border-left:6px solid var(--callout-question)!important}blockquote.faq-callout>p:first-child{background-color:var(--callout-question-accent)!important}blockquote.faq-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='question-circle' class='svg-inline--callout-fa fa-question-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-question)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.quote-callout{border-left:6px solid 
var(--callout-quote)!important}blockquote.quote-callout>p:first-child{background-color:var(--callout-quote-accent)!important}blockquote.quote-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='quote-right' class='svg-inline--callout-fa fa-quote-right fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M464 32H336c-26.5 0-48 21.5-48 48v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48zm-288 0H48C21.5 32 0 53.5 0 80v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='quote-right' class='svg-inline--callout-fa fa-quote-right fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M464 32H336c-26.5 0-48 21.5-48 48v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48zm-288 0H48C21.5 32 0 53.5 0 80v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-quote)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.cite-callout{border-left:6px solid var(--callout-quote)!important}blockquote.cite-callout>p:first-child{background-color:var(--callout-quote-accent)!important}blockquote.cite-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='quote-right' class='svg-inline--callout-fa fa-quote-right fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M464 32H336c-26.5 0-48 21.5-48 48v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48zm-288 0H48C21.5 32 0 53.5 0 80v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='quote-right' class='svg-inline--callout-fa fa-quote-right fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M464 32H336c-26.5 0-48 21.5-48 48v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48zm-288 0H48C21.5 32 0 53.5 0 80v128c0 26.5 21.5 48 48 48h80v64c0 35.3-28.7 64-64 64h-8c-13.3 0-24 10.7-24 24v48c0 13.3 10.7 24 24 24h8c88.4 0 160-71.6 160-160V80c0-26.5-21.5-48-48-48z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-quote)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.done-callout{border-left:6px solid 
var(--callout-done)!important}blockquote.done-callout>p:first-child{background-color:var(--callout-done-accent)!important}blockquote.done-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-done)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.success-callout{border-left:6px solid var(--callout-done)!important}blockquote.success-callout>p:first-child{background-color:var(--callout-done-accent)!important}blockquote.success-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-done)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.check-callout{border-left:6px solid 
var(--callout-done)!important}blockquote.check-callout>p:first-child{background-color:var(--callout-done-accent)!important}blockquote.check-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='check-circle' class='svg-inline--callout-fa fa-check-circle fa-w-16' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 512 512'%3E%3Cpath fill='currentColor' d='M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-done)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.important-callout{border-left:6px solid var(--callout-important)!important}blockquote.important-callout>p:first-child{background-color:var(--callout-important-accent)!important}blockquote.important-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-important)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.tip-callout{border-left:6px solid var(--callout-important)!important}blockquote.tip-callout>p:first-child{background-color:var(--callout-important-accent)!important}blockquote.tip-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire 
fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-important)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.hint-callout{border-left:6px solid var(--callout-important)!important}blockquote.hint-callout>p:first-child{background-color:var(--callout-important-accent)!important}blockquote.hint-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='fire' class='svg-inline--callout-fa fa-fire fa-w-12' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 384 512'%3E%3Cpath fill='currentColor' d='M216 23.86c0-23.8-30.65-32.77-44.15-13.04C48 191.85 224 200 224 288c0 35.63-29.11 64.46-64.85 63.99-35.17-.45-63.15-29.77-63.15-64.94v-85.51c0-21.7-26.47-32.23-41.43-16.5C27.8 213.16 0 261.33 0 320c0 105.87 86.13 192 192 192s192-86.13 192-192c0-170.29-168-193-168-296.14z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-important)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.warning-callout{border-left:6px solid var(--callout-warning)!important}blockquote.warning-callout>p:first-child{background-color:var(--callout-warning-accent)!important}blockquote.warning-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 
11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-warning)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.caution-callout{border-left:6px solid var(--callout-warning)!important}blockquote.caution-callout>p:first-child{background-color:var(--callout-warning-accent)!important}blockquote.caution-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-warning)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center}blockquote.attention-callout{border-left:6px solid var(--callout-warning)!important}blockquote.attention-callout>p:first-child{background-color:var(--callout-warning-accent)!important}blockquote.attention-callout>p:first-child::after{content:'';-webkit-mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 
440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");mask:url("data:image/svg+xml,%3Csvg aria-hidden='true' focusable='false' data-icon='exclamation-triangle' class='svg-inline--callout-fa fa-exclamation-triangle fa-w-18' role='img' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 576 512'%3E%3Cpath fill='currentColor' d='M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z'%3E%3C/path%3E%3C/svg%3E");background-color:var(--callout-warning)!important;-webkit-mask-size:contain;mask-size:contain;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-position:center;mask-position:center} \ No newline at end of file diff --git a/styles/_dark_syntax.bec558461529f0dd343a0b008c343934.min.css b/styles/_dark_syntax.bec558461529f0dd343a0b008c343934.min.css deleted file mode 100644 index 8f6bb004b..000000000 --- a/styles/_dark_syntax.bec558461529f0dd343a0b008c343934.min.css +++ /dev/null @@ -1 +0,0 @@ -.bg{color:#f8f8f2;background-color:#282a36}.chroma{color:#f8f8f2;background-color:#282a36}.chroma .lntd{vertical-align:top;padding:0;margin:0;border:0}.chroma .lntable{border-spacing:0;padding:0;margin:0;border:0}.chroma .hl{background-color:#ffc}.chroma .lnt{white-space:pre;user-select:none;margin-right:.4em;padding:0 .4em;color:#7f7f7f}.chroma .ln{white-space:pre;user-select:none;margin-right:.4em;padding:0 .4em;color:#7f7f7f}.chroma .line{display:flex}.chroma .k{color:#ff79c6}.chroma .kc{color:#ff79c6}.chroma .kd{color:#8be9fd;font-style:italic}.chroma .kn{color:#ff79c6}.chroma .kp{color:#ff79c6}.chroma .kr{color:#ff79c6}.chroma .kt{color:#8be9fd}.chroma .na{color:#50fa7b}.chroma .nb{color:#8be9fd;font-style:italic}.chroma .nc{color:#50fa7b}.chroma .nf{color:#50fa7b}.chroma .nl{color:#8be9fd;font-style:italic}.chroma .nt{color:#ff79c6}.chroma .nv{color:#8be9fd;font-style:italic}.chroma .vc{color:#8be9fd;font-style:italic}.chroma .vg{color:#8be9fd;font-style:italic}.chroma .vi{color:#8be9fd;font-style:italic}.chroma .s{color:#f1fa8c}.chroma .sa{color:#f1fa8c}.chroma .sb{color:#f1fa8c}.chroma .sc{color:#f1fa8c}.chroma .dl{color:#f1fa8c}.chroma .sd{color:#f1fa8c}.chroma .s2{color:#f1fa8c}.chroma .se{color:#f1fa8c}.chroma .sh{color:#f1fa8c}.chroma .si{color:#f1fa8c}.chroma .sx{color:#f1fa8c}.chroma .sr{color:#f1fa8c}.chroma .s1{color:#f1fa8c}.chroma .ss{color:#f1fa8c}.chroma .m{color:#bd93f9}.chroma .mb{color:#bd93f9}.chroma .mf{color:#bd93f9}.chroma .mh{color:#bd93f9}.chroma .mi{color:#bd93f9}.chroma .il{color:#bd93f9}.chroma .mo{color:#bd93f9}.chroma .o{color:#ff79c6}.chroma .ow{color:#ff79c6}.chroma .c{color:#6272a4}.chroma .ch{color:#6272a4}.chroma .cm{color:#6272a4}.chroma .c1{color:#6272a4}.chroma .cs{color:#6272a4}.chroma .cp{color:#ff79c6}.chroma .cpf{color:#ff79c6}.chroma 
.gd{color:#f55}.chroma .ge{text-decoration:underline}.chroma .gh{font-weight:700}.chroma .gi{color:#50fa7b;font-weight:700}.chroma .go{color:#44475a}.chroma .gu{font-weight:700}.chroma .gl{text-decoration:underline} \ No newline at end of file diff --git a/styles/_light_syntax.86a48a52faebeaaf42158b72922b1c90.min.css b/styles/_light_syntax.86a48a52faebeaaf42158b72922b1c90.min.css deleted file mode 100644 index 80c22fed1..000000000 --- a/styles/_light_syntax.86a48a52faebeaaf42158b72922b1c90.min.css +++ /dev/null @@ -1 +0,0 @@ -.bg{color:#272822;background-color:#fafafa}.chroma{color:#272822;background-color:#fafafa}.chroma .lntd{vertical-align:top;padding:0;margin:0;border:0}.chroma .lntable{border-spacing:0;padding:0;margin:0;border:0}.chroma .hl{background-color:#ffc}.chroma .lnt{white-space:pre;user-select:none;margin-right:.4em;padding:0 .4em;color:#7f7f7f}.chroma .ln{white-space:pre;user-select:none;margin-right:.4em;padding:0 .4em;color:#7f7f7f}.chroma .line{display:flex}.chroma .k{color:#00a8c8}.chroma .kc{color:#00a8c8}.chroma .kd{color:#00a8c8}.chroma .kn{color:#f92672}.chroma .kp{color:#00a8c8}.chroma .kr{color:#00a8c8}.chroma .kt{color:#00a8c8}.chroma .n{color:#111}.chroma .na{color:#75af00}.chroma .nb{color:#111}.chroma .bp{color:#111}.chroma .nc{color:#75af00}.chroma .no{color:#00a8c8}.chroma .nd{color:#75af00}.chroma .ni{color:#111}.chroma .ne{color:#75af00}.chroma .nf{color:#75af00}.chroma .fm{color:#111}.chroma .nl{color:#111}.chroma .nn{color:#111}.chroma .nx{color:#75af00}.chroma .py{color:#111}.chroma .nt{color:#f92672}.chroma .nv{color:#111}.chroma .vc{color:#111}.chroma .vg{color:#111}.chroma .vi{color:#111}.chroma .vm{color:#111}.chroma .l{color:#ae81ff}.chroma .ld{color:#d88200}.chroma .s{color:#d88200}.chroma .sa{color:#d88200}.chroma .sb{color:#d88200}.chroma .sc{color:#d88200}.chroma .dl{color:#d88200}.chroma .sd{color:#d88200}.chroma .s2{color:#d88200}.chroma .se{color:#8045ff}.chroma .sh{color:#d88200}.chroma .si{color:#d88200}.chroma .sx{color:#d88200}.chroma .sr{color:#d88200}.chroma .s1{color:#d88200}.chroma .ss{color:#d88200}.chroma .m{color:#ae81ff}.chroma .mb{color:#ae81ff}.chroma .mf{color:#ae81ff}.chroma .mh{color:#ae81ff}.chroma .mi{color:#ae81ff}.chroma .il{color:#ae81ff}.chroma .mo{color:#ae81ff}.chroma .o{color:#f92672}.chroma .ow{color:#f92672}.chroma .p{color:#111}.chroma .c{color:#75715e}.chroma .ch{color:#75715e}.chroma .cm{color:#75715e}.chroma .c1{color:#75715e}.chroma .cs{color:#75715e}.chroma .cp{color:#75715e}.chroma .cpf{color:#75715e}.chroma .ge{font-style:italic}.chroma .gs{font-weight:700} \ No newline at end of file diff --git a/tags/139.html b/tags/139.html new file mode 100644 index 000000000..003c684f2 --- /dev/null +++ b/tags/139.html @@ -0,0 +1,52 @@ + +Tag: #139

1 item with this tag.

\ No newline at end of file diff --git a/tags/166.html b/tags/166.html new file mode 100644 index 000000000..a08792f5d --- /dev/null +++ b/tags/166.html @@ -0,0 +1,52 @@ + +Tag: #166

1 item with this tag.

\ No newline at end of file diff --git a/tags/479.html b/tags/479.html new file mode 100644 index 000000000..b62c5d4de --- /dev/null +++ b/tags/479.html @@ -0,0 +1,52 @@ + +Tag: #479

1 item with this tag.

\ No newline at end of file diff --git a/tags/TEAM-updates/index.html b/tags/TEAM-updates/index.html deleted file mode 100644 index 9fa967605..000000000 --- a/tags/TEAM-updates/index.html +++ /dev/null @@ -1,274 +0,0 @@ - - - - - - - - <TEAM>-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/TEAM-updates/index.xml b/tags/TEAM-updates/index.xml deleted file mode 100644 index adba3c59c..000000000 --- a/tags/TEAM-updates/index.xml +++ /dev/null @@ -1,20 +0,0 @@ - - - - <TEAM>-updates on - https://roadmap.logos.co/tags/TEAM-updates/ - Recent content in <TEAM>-updates on - Hugo -- gohugo.io - en-us - Fri, 11 Aug 2023 00:00:00 +0000 - - 2023-08-17 <TEAM> weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-11/ - Logos Lab 11th of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - diff --git a/tags/TEAM-updates/page/1/index.html b/tags/TEAM-updates/page/1/index.html deleted file mode 100644 index 23ab1d2e2..000000000 --- a/tags/TEAM-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/TEAM-updates/ - - - - - - diff --git a/tags/acid-updates.html b/tags/acid-updates.html new file mode 100644 index 000000000..e37808854 --- /dev/null +++ b/tags/acid-updates.html @@ -0,0 +1,52 @@ + +Tag: #acid-updates
\ No newline at end of file diff --git a/tags/acid-updates/index.html b/tags/acid-updates/index.html deleted file mode 100644 index e0fbe0f9a..000000000 --- a/tags/acid-updates/index.html +++ /dev/null @@ -1,292 +0,0 @@ - - - - - - - - acid-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/acid-updates/index.xml b/tags/acid-updates/index.xml deleted file mode 100644 index 534d99f78..000000000 --- a/tags/acid-updates/index.xml +++ /dev/null @@ -1,30 +0,0 @@ - - - - acid-updates on - https://roadmap.logos.co/tags/acid-updates/ - Recent content in acid-updates on - Hugo -- gohugo.io - en-us - Wed, 09 Aug 2023 00:00:00 +0000 - - 2023-08-09 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Wed, 09 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-09/ - Top level priorities: Logos Growth Plan Status Relaunch Launch of LPE Podcasts (Target: Every week one podcast out) Hiring: TD studio and DC studio roles - - - - 2023-08-02 Acid weekly - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/updates/2023-08-02/ - Leads roundup - acid Al / Comms -Status app relaunch comms campaign plan in the works. Approx. date for launch 31. - - - - diff --git a/tags/acid-updates/page/1/index.html b/tags/acid-updates/page/1/index.html deleted file mode 100644 index 1775f4b22..000000000 --- a/tags/acid-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/acid-updates/ - - - - - - diff --git a/tags/codex-updates.html b/tags/codex-updates.html new file mode 100644 index 000000000..5bd76f528 --- /dev/null +++ b/tags/codex-updates.html @@ -0,0 +1,52 @@ + +Tag: #codex-updates
\ No newline at end of file diff --git a/tags/codex-updates/index.html b/tags/codex-updates/index.html deleted file mode 100644 index 299f60571..000000000 --- a/tags/codex-updates/index.html +++ /dev/null @@ -1,310 +0,0 @@ - - - - - - - - codex-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/codex-updates/index.xml b/tags/codex-updates/index.xml deleted file mode 100644 index 023183b64..000000000 --- a/tags/codex-updates/index.xml +++ /dev/null @@ -1,39 +0,0 @@ - - - - codex-updates on - https://roadmap.logos.co/tags/codex-updates/ - Recent content in codex-updates on - Hugo -- gohugo.io - en-us - Fri, 11 Aug 2023 00:00:00 +0000 - - 2023-08-11 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-11/ - Codex update August 11 Client Milestone: Merkelizing block data Initial Merkle Tree implementation - https://github.com/codex-storage/nim-codex/pull/504 Work on persisting/serializing Merkle Tree is underway, PR upcoming Milestone: Block discovery and retrieval Continued analysis of block discovery and retrieval - https://hackmd. - - - - 2023-08-01 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Tue, 01 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-08-01/ - Codex update Aug 1st Client Milestone: Merkelizing block data Initial design writeup https://github.com/codex-storage/codex-research/blob/master/design/metadata-overhead.md Work break down and review for Ben and Tomasz (epic coming up) This is required to integrate the proving system Milestone: Block discovery and retrieval Some initial work break down and milestones here - https://docs. - - - - 2023-07-21 Codex weekly - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Fri, 21 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/updates/2023-07-21/ - Codex update 07/12/2023 to 07/21/2023 Overall we continue working in various directions, distributed testing, marketplace, p2p client, research, etc&hellip; -Our main milestone is to have a fully functional testnet with the marketplace and durability guarantees deployed by end of year. - - - - diff --git a/tags/codex-updates/page/1/index.html b/tags/codex-updates/page/1/index.html deleted file mode 100644 index acb915ac0..000000000 --- a/tags/codex-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/codex-updates/ - - - - - - diff --git a/tags/component.html b/tags/component.html new file mode 100644 index 000000000..28a0f9757 --- /dev/null +++ b/tags/component.html @@ -0,0 +1,62 @@ + +Components

Want to create your own custom component? Check out the advanced guide on creating components for more information.

\ No newline at end of file diff --git a/tags/ilab-updates.html b/tags/ilab-updates.html new file mode 100644 index 000000000..6f22df42c --- /dev/null +++ b/tags/ilab-updates.html @@ -0,0 +1,52 @@ + +Tag: #ilab-updates
\ No newline at end of file diff --git a/tags/ilab-updates/index.html b/tags/ilab-updates/index.html deleted file mode 100644 index d495a09f9..000000000 --- a/tags/ilab-updates/index.html +++ /dev/null @@ -1,292 +0,0 @@ - - - - - - - - ilab-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/ilab-updates/index.xml b/tags/ilab-updates/index.xml deleted file mode 100644 index b1204f57a..000000000 --- a/tags/ilab-updates/index.xml +++ /dev/null @@ -1,29 +0,0 @@ - - - - ilab-updates on - https://roadmap.logos.co/tags/ilab-updates/ - Recent content in ilab-updates on - Hugo -- gohugo.io - en-us - Wed, 02 Aug 2023 00:00:00 +0000 - - 2023-08-02 Innovation Lab weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Wed, 02 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-08-02/ - Logos Lab 2nd of August Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - 2023-07-12 Innovation Lab Weekly - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Wed, 12 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/updates/2023-07-12/ - Logos Lab 12th of July Currently working on the Waku Objects prototype, which is a modular system for transactional chat objects. - - - - diff --git a/tags/ilab-updates/page/1/index.html b/tags/ilab-updates/page/1/index.html deleted file mode 100644 index a77982c23..000000000 --- a/tags/ilab-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/ilab-updates/ - - - - - - diff --git a/tags/index.html b/tags/index.html index 3f2fa4e9b..46504b163 100644 --- a/tags/index.html +++ b/tags/index.html @@ -1,820 +1,52 @@ - - - - - - - Tags - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Found 12 total tags.

#479

1 item with this tag.

#166

1 item with this tag.

#team-updates

1 item with this tag.

#139

1 item with this tag.

- - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

All Tags

- -
- -
-

Vac updates

-

7 notes with this tag

-
- - - - - -
-

Milestones

-

4 notes with this tag

-
- - - - - -
-

Nomos updates

-

4 notes with this tag

-
- - - - - -
-

Waku updates

-

4 notes with this tag

-
- - - - - -
-

Codex updates

-

3 notes with this tag

-
- - - - - -
-

Acid updates

-

2 notes with this tag

-
- - - - - -
-

Ilab updates

-

2 notes with this tag

-
- - - - - -
-

Milestones overview

-

1 note with this tag

-
- - - - - -
-

Team updates

-

1 note with this tag

-
- - -
-
- -
- -
- -
- - - + const collapsed2 = parent.classList.contains(`is-collapsed`); + const height2 = collapsed2 ? parent.scrollHeight : parent.scrollHeight + current.scrollHeight; + parent.style.maxHeight = height2 + `px`; + current = parent; + parent = parent.parentElement; + } +} +function setupCallout() { + const collapsible = document.getElementsByClassName( + `callout is-collapsible` + ); + for (const div of collapsible) { + const title = div.firstElementChild; + if (title) { + title.removeEventListener(`click`, toggleCallout); + title.addEventListener(`click`, toggleCallout); + const collapsed = div.classList.contains(`is-collapsed`); + const height = collapsed ? title.scrollHeight : div.scrollHeight; + div.style.maxHeight = height + `px`; + } + } +} +document.addEventListener(`nav`, setupCallout); +window.addEventListener(`resize`, setupCallout); + \ No newline at end of file diff --git a/tags/index.xml b/tags/index.xml deleted file mode 100644 index ed8f8a51f..000000000 --- a/tags/index.xml +++ /dev/null @@ -1,92 +0,0 @@ - - - - Tags on - https://roadmap.logos.co/tags/ - Recent content in Tags on - Hugo -- gohugo.io - en-us - Mon, 21 Aug 2023 00:00:00 +0000 - - vac-updates - https://roadmap.logos.co/tags/vac-updates/ - Mon, 21 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/vac-updates/ - - - - - milestones - https://roadmap.logos.co/tags/milestones/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/milestones/ - - - - - nomos-updates - https://roadmap.logos.co/tags/nomos-updates/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/nomos-updates/ - - - - - waku-updates - https://roadmap.logos.co/tags/waku-updates/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/waku-updates/ - - - - - <TEAM>-updates - https://roadmap.logos.co/tags/TEAM-updates/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/TEAM-updates/ - - - - - codex-updates - https://roadmap.logos.co/tags/codex-updates/ - Fri, 11 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/codex-updates/ - - - - - acid-updates - https://roadmap.logos.co/tags/acid-updates/ - Wed, 09 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/acid-updates/ - - - - - milestones-overview - https://roadmap.logos.co/tags/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/milestones-overview/ - - - - - ilab-updates - https://roadmap.logos.co/tags/ilab-updates/ - Wed, 02 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/tags/ilab-updates/ - - - - - diff --git a/tags/milestones-overview.html b/tags/milestones-overview.html new file mode 100644 index 000000000..6c0205054 --- /dev/null +++ b/tags/milestones-overview.html @@ -0,0 +1,52 @@ + +Tag: #milestones-overview

1 item with this tag.

\ No newline at end of file diff --git a/tags/milestones-overview/index.html b/tags/milestones-overview/index.html deleted file mode 100644 index 13c074f0c..000000000 --- a/tags/milestones-overview/index.html +++ /dev/null @@ -1,274 +0,0 @@ - - - - - - - - milestones-overview - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/milestones-overview/index.xml b/tags/milestones-overview/index.xml deleted file mode 100644 index f3f17fd2b..000000000 --- a/tags/milestones-overview/index.xml +++ /dev/null @@ -1,20 +0,0 @@ - - - - milestones-overview on - https://roadmap.logos.co/tags/milestones-overview/ - Recent content in milestones-overview on - Hugo -- gohugo.io - en-us - Mon, 07 Aug 2023 00:00:00 +0000 - - Codex Milestones Overview - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/codex/milestones-overview/ - Milestones Zenhub Tracker Miro Tracker - - - - diff --git a/tags/milestones-overview/page/1/index.html b/tags/milestones-overview/page/1/index.html deleted file mode 100644 index ee809a3e6..000000000 --- a/tags/milestones-overview/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/milestones-overview/ - - - - - - diff --git a/tags/milestones.html b/tags/milestones.html new file mode 100644 index 000000000..c0b7f1c3d --- /dev/null +++ b/tags/milestones.html @@ -0,0 +1,52 @@ + +Tag: #milestones
\ No newline at end of file diff --git a/tags/milestones/index.html b/tags/milestones/index.html deleted file mode 100644 index 2996c7735..000000000 --- a/tags/milestones/index.html +++ /dev/null @@ -1,328 +0,0 @@ - - - - - - - - milestones - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/milestones/index.xml b/tags/milestones/index.xml deleted file mode 100644 index 7369b6087..000000000 --- a/tags/milestones/index.xml +++ /dev/null @@ -1,49 +0,0 @@ - - - - milestones on - https://roadmap.logos.co/tags/milestones/ - Recent content in milestones on - Hugo -- gohugo.io - en-us - Thu, 17 Aug 2023 00:00:00 +0000 - - Comms Milestones Overview - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/acid/milestones-overview/ - Comms Roadmap Comms Projects Comms planner deadlines - - - - Innovation Lab Milestones Overview - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/innovation_lab/milestones-overview/ - iLab Milestones can be found on the Notion Page - - - - Nomos Milestones Overview - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Thu, 17 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/milestones-overview/ - Milestones Overview Notion Page - - - - Vac Milestones Overview - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Mon, 01 Jan 0001 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/milestones-overview/ - Overview Notion Page - Information copied here for now -Info Structure of milestone names: vac:&lt;unit&gt;:&lt;tag&gt;:&lt;for_project&gt;:&lt;title&gt;_&lt;counter&gt; -vac indicates it is a vac milestone unit indicates the vac unit p2p, dst, tke, acz, sc, zkvm, dr, rfc tag tags a specific area / project / epic within the respective vac unit, e. - - - - diff --git a/tags/milestones/page/1/index.html b/tags/milestones/page/1/index.html deleted file mode 100644 index 92e4d44be..000000000 --- a/tags/milestones/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/milestones/ - - - - - - diff --git a/tags/nomos-updates.html b/tags/nomos-updates.html new file mode 100644 index 000000000..4615e5da1 --- /dev/null +++ b/tags/nomos-updates.html @@ -0,0 +1,52 @@ + +Tag: #nomos-updates
\ No newline at end of file diff --git a/tags/nomos-updates/index.html b/tags/nomos-updates/index.html deleted file mode 100644 index 88b1fe09e..000000000 --- a/tags/nomos-updates/index.html +++ /dev/null @@ -1,328 +0,0 @@ - - - - - - - - nomos-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/nomos-updates/index.xml b/tags/nomos-updates/index.xml deleted file mode 100644 index d2571242e..000000000 --- a/tags/nomos-updates/index.xml +++ /dev/null @@ -1,51 +0,0 @@ - - - - nomos-updates on - https://roadmap.logos.co/tags/nomos-updates/ - Recent content in nomos-updates on - Hugo -- gohugo.io - en-us - Mon, 14 Aug 2023 00:00:00 +0000 - - 2023-08-17 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-14/ - Nomos weekly report 14th August Network Privacy and Mixnet Research Mixnet architecture discussions. Potential agreement on architecture not very different from PoC Mixnet preliminary design [https://www. - - - - 2023-08-07 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-08-07/ - Nomos weekly report Network implementation and Mixnet: Research Researched the Nym mixnet architecture in depth in order to design our prototype architecture. - - - - 2023-07-31 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-31/ - Nomos 31st July -[Network implementation and Mixnet]: -Research -Initial analysis on the mixnet Proof of Concept (PoC) was performed, assessing components like Sphinx for packets and delay-forwarder. - - - - 2023-07-24 Nomos weekly - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/nomos/updates/2023-07-24/ - Research -Milestone 1: Understanding Data Availability (DA) Problem High-level exploration and discussion on data availability problems in a collaborative offsite meeting in Paris. - - - - diff --git a/tags/nomos-updates/page/1/index.html b/tags/nomos-updates/page/1/index.html deleted file mode 100644 index ea7e8ab25..000000000 --- a/tags/nomos-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/nomos-updates/ - - - - - - diff --git a/tags/team-updates.html b/tags/team-updates.html new file mode 100644 index 000000000..d92a202c6 --- /dev/null +++ b/tags/team-updates.html @@ -0,0 +1,52 @@ + +Tag: #team-updates

1 item with this tag.

\ No newline at end of file diff --git a/tags/vac-updates.html b/tags/vac-updates.html new file mode 100644 index 000000000..2a413f722 --- /dev/null +++ b/tags/vac-updates.html @@ -0,0 +1,52 @@ + +Tag: #vac-updates \ No newline at end of file diff --git a/tags/vac-updates/index.html b/tags/vac-updates/index.html deleted file mode 100644 index 8809444c2..000000000 --- a/tags/vac-updates/index.html +++ /dev/null @@ -1,382 +0,0 @@ - - - - - - - - vac-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - -
-

Tag: vac-updates

- - - - -
- -
- -
- -
- - - diff --git a/tags/vac-updates/index.xml b/tags/vac-updates/index.xml deleted file mode 100644 index 144e0bc8b..000000000 --- a/tags/vac-updates/index.xml +++ /dev/null @@ -1,79 +0,0 @@ - - - - vac-updates on - https://roadmap.logos.co/tags/vac-updates/ - Recent content in vac-updates on - Hugo -- gohugo.io - en-us - Mon, 21 Aug 2023 00:00:00 +0000 - - 2023-08-21 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Mon, 21 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-21/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 Vac Github Repos: https://www.notion.so/Vac-Repositories-75f7feb3861048f897f0fe95ead08b06 -Vac week 34 August 21th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Test-plans for the perf protocol (99%: need to find why the executable doesn&rsquo;t work) https://github. - - - - 2023-08-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-14/ - Vac Milestones: https://www.notion.so/Vac-Roadmap-907df7eeac464143b00c6f49a20bb632 -Vac week 33 August 14th vsu::P2P vac:p2p:nim-libp2p:vac:maintenance Improve gossipsub DDoS resistance https://github.com/status-im/nim-libp2p/pull/920 delivered: Perf protocol https://github.com/status-im/nim-libp2p/pull/925 delivered: Test-plans for the perf protocol https://github. - - - - 2023-08-07 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - Mon, 07 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-08-07/ - More info on Vac Milestones, including due date and progress (currently working on this, some milestones do not have the new format yet, first version planned for this week): https://www. - - - - 2023-08-03 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - Thu, 03 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-24/ - NOTE: This is a first experimental version moving towards the new reporting structure: -Last week -vc vc::Deep Research milestone (15%, 2023/11/30) paper on gossipsub improvements ready for submission related work section milestone (15%, 2023/08/31) Nimbus Tor-push PoC basic torpush encode/decode ( https://github. - - - - 2023-07-31 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-31/ - vc::Deep Research milestone (20%, 2023/11/30) paper on gossipsub improvements ready for submission proposed solution section milestone (15%, 2023/08/31) Nimbus Tor-push PoC establishing torswitch and testing code milestone (15%, 2023/11/30) paper on Tor push validator privacy addressed feedback on current version of paper vsu::P2P nim-libp2p: (100%, 2023/07/31) GossipSub optimizations for ETH&rsquo;s EIP-4844 Merged IDontWant ( https://github. 
- - - - 2023-07-17 Vac weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Mon, 17 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-17/ - Last week -vc Vac day in Paris (13th) vc::Deep Research working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus: setting up goerli nim-eth2 node working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Paris offsite Paris (all CCs) vsu::Tokenomics Bugs found and solved in the SNT staking contract attend events in Paris vsu::Distributed Systems Testing Events in Paris QoS on all four infras Continue work on theoretical gossipsub analysis (varying regular graph sizes) Peer extraction using WLS (almost finished) Discv5 testing Wakurtosis CI improvements Provide offline data vip::zkVM onboarding new researcher Prepared and presented ZKVM work during VAC offsite Deep research on Nova vs Stark in terms of performance and related open questions researching Sangria Worked on NEscience document ( https://www. - - - - 2023-07-10 Vac Weekly - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - Mon, 10 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/vac/updates/2023-07-10/ - vc::Deep Research refined deep research roadmaps https://github.com/vacp2p/research/issues/190, https://github.com/vacp2p/research/issues/192 working on comprehensive current/related work study on Validator Privacy working on PoC of Tor push in Nimbus working towards comprehensive current/related work study on gossipsub scaling vsu::P2P Prepared Paris talks Implemented perf protocol to compare the performances with other libp2ps https://github. - - - - diff --git a/tags/vac-updates/page/1/index.html b/tags/vac-updates/page/1/index.html deleted file mode 100644 index 5f1cb942b..000000000 --- a/tags/vac-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/vac-updates/ - - - - - - diff --git a/tags/waku-updates.html b/tags/waku-updates.html new file mode 100644 index 000000000..badb0eedb --- /dev/null +++ b/tags/waku-updates.html @@ -0,0 +1,52 @@ + +Tag: #waku-updates
\ No newline at end of file diff --git a/tags/waku-updates/index.html b/tags/waku-updates/index.html deleted file mode 100644 index 6098ebd63..000000000 --- a/tags/waku-updates/index.html +++ /dev/null @@ -1,328 +0,0 @@ - - - - - - - - waku-updates - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- -
-
-
-
- - - - - - -
- -
-

Logos Collective Technical Roadmap and Activity

-
-
-

Search

- Search IconIcon to open search -
-
- - - -
-
- - - - -
- -
- -
- - - diff --git a/tags/waku-updates/index.xml b/tags/waku-updates/index.xml deleted file mode 100644 index 79e8ac1ec..000000000 --- a/tags/waku-updates/index.xml +++ /dev/null @@ -1,50 +0,0 @@ - - - - waku-updates on - https://roadmap.logos.co/tags/waku-updates/ - Recent content in waku-updates on - Hugo -- gohugo.io - en-us - Mon, 14 Aug 2023 00:00:00 +0000 - - 2023-08-14 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - Mon, 14 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-14/ - 2023-08-14 Waku weekly Epics Waku Network Can Support 10K Users {E:2023-10k-users} -All software has been delivered. Pending items are: -Running stress testing on PostgreSQL to confirm performance gain https://github. - - - - 2023-08-06 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Tue, 08 Aug 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-08-06/ - Milestones for current works are created and used. Next steps are: -Refine scope of research work for rest of the year and create matching milestones for research and waku clients Review work not coming from research and setting dates Note that format matches the Notion page but can be changed easily as it&rsquo;s scripted nwaku Release Process Improvements {E:2023-qa} - - - - 2023-07-31 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Mon, 31 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-31/ - Docs Milestone: Docs general improvement/incorporating feedback (continuous) next: rewrite docs in British English Milestone: Running nwaku in the cloud next: publish guides for Digital Ocean, Oracle, Fly. - - - - 2023-07-24 Waku weekly - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Mon, 24 Jul 2023 00:00:00 +0000 - - https://roadmap.logos.co/roadmap/waku/updates/2023-07-24/ - Disclaimer: First attempt playing with the format. Incomplete as not everyone is back and we are still adjusting the milestones. - - - - diff --git a/tags/waku-updates/page/1/index.html b/tags/waku-updates/page/1/index.html deleted file mode 100644 index 780c1d084..000000000 --- a/tags/waku-updates/page/1/index.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - https://roadmap.logos.co/tags/waku-updates/ - - - - - - diff --git a/upgrading.html b/upgrading.html new file mode 100644 index 000000000..31890371e --- /dev/null +++ b/upgrading.html @@ -0,0 +1,81 @@ + +Upgrading Quartz
+
+
+

Note

+ +
+

This guide is specifically for upgrading an existing Quartz 4 installation to a more recent Quartz 4 version. If you are coming from Quartz 3, check out the migration guide for more info.

+
+

To fetch the latest Quartz updates, simply run

+
npx quartz update
+

As Quartz uses git under the hood for versioning, updating effectively ‘pulls’ in the latest changes from the official Quartz GitHub repository. If you have local changes that conflict with those updates, you may need to resolve them manually (or pull manually using git pull origin upstream).
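For illustration, a minimal sketch of that manual path, assuming the official Quartz repository is configured as a git remote named upstream and that you are tracking its v4 branch (both names are assumptions, not taken from this page):

# fetch the latest Quartz changes from the assumed 'upstream' remote
git fetch upstream
# merge them into your local branch; git will report any conflicts for you to resolve
git merge upstream/v4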

+
+
+
+

Tip

+ +
+

Quartz caches your content before updating to help prevent merge conflicts. If you do hit a conflict mid-merge, you can stop the merge and then run npx quartz restore to restore your content from the cache.
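As a rough sketch of that recovery flow (the git command is standard; the exact sequence is an assumption rather than something documented on this page):

# stop the in-progress merge that produced the conflicts
git merge --abort
# restore your content from the cache Quartz made before updating
npx quartz restore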

+
+

If you have the GitHub Desktop app, it will open automatically to help you resolve the conflicts. Otherwise, you will need to resolve them in a text editor like VSCode. For more help with resolving merge conflicts manually, check out the GitHub guide on resolving merge conflicts.
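If you do resolve the conflicts by hand, the usual git loop looks roughly like this (a generic sketch, not specific to Quartz; the file path is a placeholder):

# list the files that still contain conflict markers
git status
# edit each conflicted file, then mark it as resolved
git add path/to/conflicted-file
# conclude the interrupted merge once everything is resolved
git commit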

\ No newline at end of file