add Korean translation (#69)

* add Korean translations to cursor with sonnet 3.5

* enable Korean translation and restore English files

---------

Co-authored-by: CryptoMarina <marina@status.im>
Co-authored-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
This commit is contained in:
Marina Petrichenko 2025-03-21 16:56:20 +01:00 committed by Slava
parent 2615fb9793
commit 23dbfddc12
GPG Key ID: 351E7AA9BD0DFEB8
22 changed files with 3246 additions and 87 deletions


@@ -160,92 +160,92 @@ export default withMermaid({
lang: 'en'
},
// Korean
ko: {
  label: '한국어',
  lang: 'ko-KR',
  link: '/ko',
  themeConfig: {
    nav: [
      { text: '백서', link: '/ko/learn/whitepaper' },
      { text: 'Tokenomics Litepaper', link: '/ko/learn/tokenomics-litepaper' },
      {
        text: 'Codex',
        items: [
          { text: '소개', link: '/ko/codex/about' },
          { text: '보안', link: '/ko/codex/security' },
          { text: '개인정보 처리방침', link: '/ko/codex/privacy-policy' },
          { text: '이용 약관', link: '/ko/codex/terms-of-use' }
        ]
      }
    ],
    editLink: {
      pattern: 'https://github.com/codex-storage/codex-docs/edit/master/:path',
      text: 'Edit this page on GitHub',
    },
    siteTitle: 'Codex • 문서',
    logoLink: '/ko/learn/what-is-codex',
    sidebar: [
      {
        text: 'Introduction',
        collapsed: false,
        items: [
          { text: 'Codex란 무엇인가?', link: '/ko/learn/what-is-codex' },
          { text: '아키텍처', link: '/ko/learn/architecture' },
          { text: '백서', link: '/ko/learn/whitepaper' },
          { text: 'Tokenomics Litepaper', link: '/ko/learn/tokenomics-litepaper' }
        ]
      },
      {
        text: 'Setup Codex with Installer',
        collapsed: false,
        items: [
          { text: '면책 조항', link: '/ko/codex/installer-disclaimer' },
          { text: 'Requirements', link: '/ko/learn/installer/requirements' },
          { text: 'Install and Run Codex', link: '/ko/learn/installer/install-and-run' },
          { text: 'Upload/Download', link: '/ko/learn/installer/upload-and-download' },
        ]
      },
      {
        text: 'Setup Codex Manually',
        collapsed: false,
        items: [
          { text: '면책 조항', link: '/ko/codex/disclaimer' },
          { text: '빠른 시작', link: '/ko/learn/quick-start' },
          { text: 'Build Codex', link: '/ko/learn/build' },
          { text: 'Run Codex', link: '/ko/learn/run' },
          { text: '사용하기', link: '/ko/learn/using' },
          { text: 'Local Two Client Test', link: '/ko/learn/local-two-client-test' },
          { text: 'Local Marketplace', link: '/ko/learn/local-marketplace' },
          { text: 'Download Flow', link: '/ko/learn/download-flow' },
          { text: '문제 해결', link: '/ko/learn/troubleshoot' }
        ]
      },
      {
        text: 'Codex networks',
        collapsed: false,
        items: [
          { text: '테스트넷', link: '/ko/networks/testnet' }
        ]
      },
      {
        text: 'Developers',
        collapsed: false,
        items: [
          { text: 'API', link: '/developers/api' }
        ]
      },
      {
        text: 'Codex',
        collapsed: false,
        items: [
          { text: '소개', link: '/ko/codex/about' },
          { text: '보안', link: '/ko/codex/security' },
          { text: '개인정보 처리방침', link: '/ko/codex/privacy-policy' },
          { text: '이용 약관', link: '/ko/codex/terms-of-use' }
        ]
      }
    ],
  }
}
}
})
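The diff above shows the pattern in full: enabling a new language amounts to adding an entry under the `locales` map in the VitePress config. A minimal sketch of the shape (field names per VitePress's locales API; the values here are placeholders mirroring the `ko` entry this commit enables, and `defineConfig` stands in for the `withMermaid` wrapper the real config uses):

```typescript
import { defineConfig } from 'vitepress'

export default defineConfig({
  locales: {
    root: { label: 'English', lang: 'en' },
    ko: {
      label: '한국어',
      lang: 'ko-KR',      // BCP 47 tag for Korean (South Korea)
      link: '/ko',        // translated pages live under this path prefix
      themeConfig: { /* locale-specific nav, sidebar, editLink, etc. */ },
    },
  },
})
```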

ko/codex/about.md Normal file

@@ -0,0 +1,3 @@
# Codex 소개
작업 진행 중 :construction:

ko/codex/disclaimer.md Normal file

@@ -0,0 +1,3 @@
# 면책 조항
Codex의 실행은 귀하의 책임 하에 이루어집니다. 따라서 당사는 귀하의 하드웨어, 소프트웨어, 데이터 또는 네트워크에 발생할 수 있는 어떠한 손상이나, 제공된 코드와 지침의 사용과 관련하여 발생하는 손실, 청구, 어떠한 성격의 손해 또는 기타 책임에 대해 책임지지 않으며 의무를 지지 않습니다.


@@ -0,0 +1,72 @@
---
lastUpdated: false
---
# 개인정보 처리방침
최종 업데이트: 2024년 2월 9일
본 개인정보 처리방침은 이 웹사이트("**웹사이트**")와 관련하여 사용자에게 당사의 개인정보 보호 접근 방식을 알리기 위한 것입니다. 이와 관련하여, 귀하가 당사의 웹사이트를 방문하는 경우 본 개인정보 처리방침이 적용됩니다.
### 1) Who we are
For the purposes of this Privacy Policy and the collection and processing of personal data as a controller, the relevant entity is the Logos Collective Association, which has its registered office in Zug and its legal domicile address at
```
Logos Collective Association
c/o PST Consulting GmbH
Baarerstrasse 10
6300 Zug
Switzerland
```
Whenever we refer to "Logos", "we" or other similar references, we are referring to the Logos Collective Association.
### 2) We limit the collection and processing of personal data from your use of the Website
We aim to limit the collection and processing of personal data from users of the Website. We only collect and process certain personal data for specific purposes and where we have the legal basis to do so under applicable privacy legislation. We will not collect or process any personal data that we don't need and where we do store any personal data, we will only store it for the least amount of time needed for the indicated purpose.
In this regard, we collect and process the following personal data from your use of the Website:
* **IP address**: As part of such use of the Website, we briefly process your IP address but we have no way of identifying you. We however have a legitimate interest in processing such IP addresses to ensure the technical functionality and enhance the security measures of the Website. This IP address is not stored by us over time.
### 3) Third party processing of personal data
In addition to our limited collection and processing of personal data, third parties may collect or process personal data as a result of the Website making use of certain features or to provide certain content. To the extent you interact with such third party content or features, their respective privacy policies will apply.
### 4) Security measures we take in respect of the Website
As a general approach, we take data security seriously and we have implemented a variety of security measures on the Website to maintain the safety of your personal data when you submit such information to us.
### 5) Exporting data outside the European Union and Switzerland
We are obliged to protect the privacy of personal data that you may have submitted in the unlikely event that we export your personal data to places outside the European Union or Switzerland. This means that personal data will only be processed in countries or by parties that provide an adequate level of protection as deemed by Switzerland or the European Commission. Otherwise, we will use other forms of protections, such as specific forms of contractual clauses to ensure such personal data is provided the same protection as required in Switzerland or Europe. In any event, the transmission of personal data outside the European Union and Switzerland will always occur in conformity with applicable privacy legislation.
### 6) Your choices and rights
As explained in this Privacy Policy, we limit our collection and processing of your personal data wherever possible. Nonetheless, you still have certain choices and rights in respect of the personal data which we do collect and process. As laid out in relevant privacy legislation, you have the right to:
* Ask us to correct or update your personal data (where reasonably possible);
* Ask us to remove your personal data from our systems;
* Ask us for a copy of your personal data, which may also be transferred to another data controller at your request;
* Withdraw your consent to process your personal data (only if consent was asked for a processing activity), which only affects processing activities that are based on your consent and doesn't affect the validity of such processing activities before you have withdrawn your consent;
* Object to the processing of your personal data; and
* File a complaint with the Federal Data Protection and Information Commissioner (FDPIC), if you believe that your personal data has been processed unlawfully.
### 7) Third party links
On this Website, you may come across links to third party websites. These third party sites have separate and independent privacy policies. We therefore have no responsibility or liability for the content and activities of these third party websites.
### 8) This Privacy Policy might change
We may modify or replace any part of this Privacy Policy at any time and without notice. Please check the Website periodically for any changes. The new Privacy Policy will be effective immediately upon its posting on our Website.
### 9) Contact information
To the extent that you have any questions about the Privacy Policy, please contact us at <a href="mailto:legal@free.technology">legal@free.technology</a>.
This document is licensed under CC-BY-SA.

ko/codex/security.md Normal file

@@ -0,0 +1,7 @@
# 보안
Codex와 <a href="https://free.technology/" target="_blank">자유 기술 연구소</a> 및 관련 기관에서는 보안을 매우 중요하게 생각합니다.
보안 관련 사고는 <a href="mailto:security@free.technology">security@free.technology</a>를 통해 신고해 주시기 바랍니다.
우리의 프로토콜과 소프트웨어가 안전하게 유지될 수 있도록 <a href="https://hackenproof.com/ift" target="_blank">HackenProof</a>의 바운티 프로그램을 통해 발견된 취약점을 신고해 주시기 바랍니다.

ko/codex/terms-of-use.md Normal file

@@ -0,0 +1,94 @@
---
lastUpdated: false
---
# 이용 약관
최종 업데이트: 2024년 2월 14일
본 웹사이트 이용 약관("**웹사이트 이용 약관**")은 귀하의 웹사이트 사용에 적용됩니다. 웹사이트를 사용함으로써 귀하는 본 웹사이트 이용 약관에 동의하게 됩니다.
If you do not agree with these Website Terms of Use, you must not access or use the Website.
### 1) Who we are
For the purposes of these Website Terms of Use, the relevant entity is the Logos Collective Association, which has its registered office in Zug and its legal domicile address at:
```
Logos Collective Association
c/o PST Consulting GmbH
Baarerstrasse 10
6300 Zug
Switzerland
```
Whenever we refer to "Logos", "we", "us" or any other similar references, we are referring to the Logos Collective Association.
### 2) Disclaimers
The Website is provided by us on an 'as is' basis and you use the Website at your own sole discretion and risk.
We disclaim all warranties of any kind, express or implied, including without limitation the warranties of merchantability, fitness for a particular purpose, and non-infringement of intellectual property or other violation of rights. We do not warrant or make any representations concerning the completeness, accuracy, legality, utility, reliability, suitability or availability of the use of the Website, the content on this Website or otherwise relating to the Website, such content, or any sites linked to this site. These disclaimers will apply to the maximum extent permitted by applicable law.
We make no claims that the Website or any of its content is accessible, legally compliant or appropriate in your jurisdiction. Your access or use of the Website is at your own sole discretion and you are solely responsible for complying with any applicable local laws.
The content herein or as accessible through this website is intended to be made available for informational purposes only and should not be considered as creating any expectations or forming the basis of any contract, commitment or binding obligation with us. No information herein shall be considered to contain or be relied upon as a promise, representation, warranty or guarantee, whether express or implied and whether as to the past, present or the future in relation to the projects and matters described herein.
The information contained herein does not constitute financial, legal, tax, or other advice and should not be treated as such.
Nothing in this Website should be construed by you as an offer to buy or sell, or soliciting any offer to buy or sell any tokens or any security.
### 3) Forward looking statements
The Website may also contain forward-looking statements that are based on current expectations, estimates, forecasts, assumptions and projections about the technology, industry and markets in general.
The forward looking statements, which may include statements about the roadmap, project descriptions, technical details, functionalities, features, the development and use of tokens by projects, and any other statements related to such matters or as accessible through this website are subject to a high degree of risk and uncertainty. The forward looking statements are subject to change based on, among other things, market conditions, technical developments, and regulatory environment. The actual development and results, including the order and the timeline, might vary from what's presented. The information contained herein is a summary and does not purport to be accurate, reliable or complete and we bear no responsibility for the accuracy, reliability or completeness of information contained herein. Because of the high degree of risk and uncertainty described above, you should not place undue reliance on any matters described in this website or as accessible through this website.
While we aim to update our website regularly, all information, including the timeline and the specifics of each stage, is subject to change and may be amended or supplemented at any time, without notice and at our sole discretion.
### 4) Intellectual property rights
The Website and its contents are made available under Creative Commons Attribution 4.0 International license (CC-BY 4.0). In essence this licence allows users to copy, modify and distribute the content in any format for any purpose, including commercial use, subject to certain requirements such as attributing us. For the full terms of this licence, please refer to the following website: https://creativecommons.org/licenses/by/4.0/.
### 5) Third party website links
To the extent the Website provides any links to a third party website, then their terms and conditions, including privacy policies, govern your use of those third party websites. By linking to such third party websites, we do not represent or imply that we endorse or support such third party websites or content therein, or that we believe such third party websites and content therein to be accurate, useful or non-harmful. We have no control over such third party websites and will not be liable for your use of or activities on any third party websites accessed through the Website. If you access such third party websites through the Website, it is at your own risk and you are solely responsible for your activities on such third party websites.
### 6) Limitation of liability
We will not be held liable to you under any contract, negligence, strict liability, or other legal or equitable theory for any lost profits, cost of procurement for substitute services, or any special, incidental, or consequential damages related to, arising from, or in any way connected with these Website Terms of Use, the Website, the content on the Website, or your use of the Website, even if we have been advised of the possibility of such damages. In any event, our aggregate liability for such claims is limited to EUR 100 (one hundred Euros). This limitation of liability will apply to the maximum extent permitted by applicable law.
### 7) Indemnity
You shall indemnify us and hold us harmless from and against any and all claims, damages and expenses, including attorneys' fees, arising from or related to your use of the Website, the content on the Website, including without limitation your violation of these Website Terms of Use.
### 8) Modifications
We may modify or replace any part of this Website Terms of Use at any time and without notice. You are responsible for checking the Website periodically for any changes. The new Website Terms of Use will be effective immediately upon its posting on the Website.
### 9) Governing law
Swiss law governs these Website Terms of Use and any disputes between you and us, whether in court or arbitration, without regard to conflict of laws provisions.
### 10) Disputes
In these terms, "dispute" has the broadest meaning enforceable by law and includes any claim you make against or controversy you may have in relation to these Website Terms of Use, the Website, the content on the Website, or your use of the Website.
We prefer arbitration over litigation as we believe it meets our principle of resolving disputes in the most effective and cost effective manner. You are bound by the following arbitration clause, which waives your right to litigation and to be heard by a judge. Please note that court review of an arbitration award is limited. You also waive all your rights to a jury trial (if any) in any and all jurisdictions.
If a (potential) dispute arises, you must first use your reasonable efforts to resolve it amicably with us. If these efforts do not result in a resolution of such dispute, you shall then send us a written notice of dispute setting out (i) the nature of the dispute, and the claim you are making; and (ii) the remedy you are seeking.
If we and you are unable to further resolve this dispute within sixty (60) calendar days of us receiving this notice of dispute, then any such dispute will be referred to and finally resolved by you and us through an arbitration administered by the Swiss Chambers' Arbitration Institution in accordance with the Swiss Rules of International Arbitration for the time being in force, which rules are deemed to be incorporated herein by reference. The arbitral decision may be enforced in any court. The arbitration will be held in Zug, Switzerland, and may be conducted via video conference virtual/online methods if possible. The tribunal will consist of one arbitrator, and all proceedings as well as communications between the parties will be kept confidential. The language of the arbitration will be in English. Payment of all relevant fees in respect of the arbitration, including filing, administration and arbitrator fees will be in accordance with the Swiss Rules of International Arbitration.
Regardless of any applicable statute of limitations, you must bring any claims within one year after the claim arose or the time when you should have reasonably known about the claim. You also waive the right to participate in a class action lawsuit or a classwide arbitration against us.
### 11) About these Website Terms of Use
These Website Terms of Use cover the entire agreement between you and us regarding the Website and supersede all prior and contemporaneous understandings, agreements, representations and warranties, both written and oral, with respect to the Website.
The captions and headings identifying sections and subsections of these Website Terms of Use are for reference only and do not define, modify, expand, limit, or affect the interpretation of any provisions of these Website Terms of Use.
If any part of these Website Terms of Use is held invalid or unenforceable, that part will be severable from these Website Terms of Use, and the remaining portions will remain in full force and effect. If we fail to enforce any of these Website Terms of Use, that does not mean that we have waived our right to enforce them.
If you have any specific questions about these Website Terms of Use, please contact us at <a href="mailto:legal@free.technology">legal@free.technology</a>.
This document is licensed under CC-BY-SA.

ko/developers/api.md Normal file

@@ -0,0 +1,5 @@
# Codex API
Codex는 노드와 상호작용하기 위해 REST API를 사용하며, HTTP 클라이언트를 사용하여 상호작용 및 구성할 수 있습니다.
API 명세는 [api.codex.storage](https://api.codex.storage)에서 확인할 수 있으며 [openapi.yaml](https://github.com/codex-storage/nim-codex/blob/master/openapi.yaml)을 기반으로 생성됩니다. 또한 [Postman 컬렉션](https://api.codex.storage/postman.json)도 제공합니다.
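For illustration, a hypothetical client sketch driving the REST API over HTTP. The `/api/codex/v1` base path follows the published spec; `localhost:8080` is an assumption, so check your node's configuration and the OpenAPI document for authoritative paths:

```typescript
// Hypothetical sketch of calling a local Codex node's REST API.
// Host and port are assumptions; consult the OpenAPI spec for exact endpoints.
const API_BASE = 'http://localhost:8080/api/codex/v1'

function apiUrl(endpoint: string): string {
  // Build a full endpoint URL from the versioned base path.
  return `${API_BASE}/${endpoint}`
}

// With a running node, node info could then be fetched like:
//   const info = await fetch(apiUrl('debug/info')).then(r => r.json())
```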

ko/index.md Normal file

@@ -0,0 +1,33 @@
---
# https://vitepress.dev/reference/default-theme-home-page
layout: home
hero:
name: Codex
text: 분산형 데이터 저장 플랫폼
tagline: Codex는 세계 공동체가 검열의 위험 없이 가장 중요한 지식을 보존할 수 있도록 만들어진 내구성 있는 분산형 데이터 저장 프로토콜입니다.
actions:
- theme: brand
text: Codex란?
link: /learn/what-is-codex
- theme: alt
text: 빠른 시작
link: /learn/quick-start
- theme: alt
text: Join Codex Testnet
link: /networks/testnet
features:
- title: 학습
details: Codex에 대해 자세히 알아보기
link: /learn/what-is-codex
icon: 📚
- title: 네트워크
details: 저장소 운영자 또는 구매자로서 Codex 네트워크에 참여
link: /networks/networks
icon: 🚦
- title: 개발자
details: Codex로 구축하기
link: /developers/api
icon: 🏗️
---

ko/learn/architecture.md Normal file

@@ -0,0 +1,123 @@
# 설명 및 아키텍처
Codex는 web3 애플리케이션에 부패 및 검열 저항성을 제공하는 완전히 분산된 내구성 있는 데이터 저장 엔진을 구축하고 있습니다. 노드 운영자에게는 저장하는 데이터에 대한 타당한 부인 가능성을, 클라이언트에게는 최대 99.99%의 입증 가능한 내구성 보장을 제공하면서도 저장소와 대역폭 효율성을 유지합니다.
These four key features combine to differentiate Codex from existing projects in the decentralised storage niche:
- **Erasure coding:** Provides efficient data redundancy, which increases data durability guarantees.
- **ZK-based proof-of-retrievability:** For lightweight data durability assurances.
- **Lazy repair mechanism:** For efficient data reconstruction and loss prevention.
- **Incentivization:** To encourage rational behaviour, widespread network participation, and the efficient provision of finite network resources.
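To make the erasure-coding idea concrete, here is a toy 2-of-3 scheme using a single XOR parity share. This is only an illustration of why redundancy lets a lost share be rebuilt; Codex itself uses more general codes (e.g. Reed-Solomon-style), not this scheme:

```typescript
// Toy 2-of-3 erasure code: two data shares plus one XOR parity share.
// Any one lost share can be rebuilt from the other two.
function xorShares(a: Uint8Array, b: Uint8Array): Uint8Array {
  return a.map((byte, i) => byte ^ b[i])
}

const d1 = new Uint8Array([1, 2, 3])
const d2 = new Uint8Array([4, 5, 6])
const parity = xorShares(d1, d2)          // the redundant third share

// If d1 is lost, reconstruct it from the surviving shares:
const recovered = xorShares(parity, d2)
```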
### Incentivized decentralisation
Incentivization mechanisms are one of the key pieces missing from traditional file-sharing networks. Codex believes that a robust marketplace-based incentive structure will ensure wide participation across the node types detailed below.
The development of an adequate incentive structure is driven by the following goals:
- Supply and demand to encourage optimum network resource usage.
- Increase participation by enabling nodes to utilise their competitive advantages to maximise profits.
- Prevent spam and discourage malicious participation.
Although still to be finalised, the Codex incentive structure will involve a marketplace that matches participants who want to store data with those provisioning storage, where the latter post collateral and bid on open storage contracts. This structure aims to ensure that participants' incentives align, resulting in Codex functioning as intended.
### Network architecture
Codex is composed of multiple node types, each taking a different role in the network's operation. Similarly, the hardware demands for each node type vary, enabling those operating resource-restricted devices to participate.
**Storage nodes**
As Codex's long-term reliable storage providers, storage nodes stake collateral based on the collateral posted on the request side of contracts, and the number of slots that a contract has. This is tied to the durability demanded by the user. Failure to provide periodic proof of data possession results in slashing penalties.
**Aggregator Node**
A mechanism for off-loading erasure coding, proof generation, and proof aggregation from low-resource client nodes. It is currently a work in progress and is planned for a subsequent Codex release (Q2/Q4 next year).
**Client nodes**
Client nodes make requests for other nodes to store, find, and retrieve data. Most of the Codex network will be Client nodes, and these participants can double as caching nodes to offset the cost of the network resources they consume.
When a node commits to a storage contract and a user uploads data, the network proactively verifies that the storage node is online and that the data is retrievable. Storage nodes are then randomly queried to broadcast proofs of data possession at intervals determined by the contract duration and the "nines" of retrievability guarantee the protocol provides.
If the storage node sends invalid proofs or fails to provide them in time, the network evicts the storage node from the slot, and the slot will become available for the first node that generates a valid proof for that slot.
When the contract is reposted, some of the faulty node's collateral pays for the new storage node's bandwidth fees. Erasure coding complements the repair scheme by allowing the reconstruction of the missing chunks from data in other slots within the same storage contract hosted by faultless storage nodes.
![architect](/learn/architecture.png)
### Marketplace architecture ###
The marketplace consists of a smart contract that is deployed on-chain, and the
purchasing and sales modules that are part of the node software. The purchasing
module is responsible for posting storage requests to the smart contract. The
sales module is its counterpart that storage providers use to determine which
storage requests they are interested in.
#### Smart contract ####
The smart contract facilitates matching between storage providers and storage
clients. A storage client can request a certain amount of storage for a certain
duration. This request is then posted on-chain, so that storage providers can
see it, and decide whether they want to fill a slot in the request.
The main parameters of a storage request are:
- the amount of bytes of storage that is requested
- a content identifier (CID) of the data that should be stored
- the duration for which the data should be stored
- the number of slots (based on the erasure coding parameters)
- an amount of tokens to pay for the storage
At the protocol level a storage client is free to determine these parameters as
it sees fit, so that it can choose a level of durability that is suitable for
the data, and adjust for changing storage prices. Applications built on Codex
can provide guidance to their users for picking the correct parameters,
analogous to how Ethereum wallets help with determining gas fees.
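The request parameters listed above can be sketched as a plain data type. The field names here are illustrative assumptions, not the contract's actual on-chain ABI:

```typescript
// Illustrative shape of a storage request, mirroring the parameters above.
// Names and the example values are hypothetical.
interface StorageRequest {
  sizeBytes: number        // amount of storage requested, in bytes
  cid: string              // content identifier of the data to store
  durationSeconds: number  // how long the data should be stored
  slots: number            // number of slots, from the erasure-coding parameters
  reward: bigint           // tokens offered in payment
}

const example: StorageRequest = {
  sizeBytes: 1_073_741_824,        // 1 GiB
  cid: 'zDvZRw-example-cid',       // hypothetical CID
  durationSeconds: 30 * 24 * 3600, // 30 days
  slots: 8,
  reward: 1000n,
}
```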
The smart contract also checks that storage providers keep their promises.
Storage providers post collateral when they promise to fill a slot of a storage
request. They are expected to post periodic storage proofs to the contract,
either directly or through an aggregator. If they fail to do so repeatedly, then
their collateral can be forfeited. Their slot is then awarded to another storage
provider.
The smart contract indicates when a certain storage provider has to provide a
storage proof. This is not done on a fixed time interval, but determined
stochastically to ensure that it is not possible for a storage provider to
predict when it should provide the next storage proof.
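A toy model of that unpredictability (purely illustrative; the real scheme is defined by the smart contract): whether a proof is due in a given period is derived from a seed the provider cannot know in advance, such as a future block hash.

```typescript
// Toy illustration of stochastic proof scheduling. `seed` stands in for an
// unpredictable on-chain value (e.g. a block hash), and XOR stands in for a
// real hash function; a proof is due roughly once every `oneInN` periods.
function proofRequired(seed: bigint, slotId: bigint, oneInN: bigint): boolean {
  return (seed ^ slotId) % oneInN === 0n
}
```

Because the seed is revealed only at the start of each period, a provider cannot precompute which periods require a proof and must keep the data available throughout.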
#### Purchasing ####
The purchasing module in the node software interacts with the smart contract on
behalf of the node operator. It posts storage requests, and handles any other
interactions that are required during the lifetime of the request. For instance,
when a request is canceled because there are not enough interested storage
providers, then the purchasing module can withdraw the tokens that were
associated with the request.
#### Sales ####
The sales module is the counterpart to the purchasing module. It monitors the smart
contract to be notified of incoming storage requests. It keeps a list of the
most promising requests that it can fulfill. It will favor those requests that
have a high reward and low collateral. As soon as it finds a suitable request,
it will then try to first reserve and then fill a slot by downloading the
associated data, creating a storage proof, and posting it to the smart contract.
It will then continue to monitor the smart contract to provide it with storage
proofs when they are required.
The sales module contains a best effort strategy for determining which storage
requests it is interested in. Over time, we expect more specialized strategies
to emerge to cater to the needs of e.g. large providers versus providers that
run a node from their home.
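The "favor high reward, low collateral" heuristic above could look roughly like this naive sketch (the actual strategy is best-effort and may weigh more factors):

```typescript
// Naive request-ranking sketch: score each open request by reward earned
// per unit of collateral put at risk, and pick the highest-scoring one.
interface OpenRequest { id: string; reward: number; collateral: number }

function pickMostPromising(requests: OpenRequest[]): OpenRequest | undefined {
  return [...requests].sort(
    (a, b) => b.reward / b.collateral - a.reward / a.collateral
  )[0]
}
```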
### 백서 ###
[Codex 백서](/learn/whitepaper)를 읽어보세요.

ko/learn/build.md Normal file

@@ -0,0 +1,241 @@
# Build Codex
## Table of Contents
- [Install developer tools](#prerequisites)
- [Linux](#linux)
- [macOS](#macos)
- [Windows + MSYS2](#windows-msys2)
- [Other](#other)
- [Clone and prepare the Git repository](#repository)
- [Build the executable](#executable)
- [Run the example](#example-usage)
**Optional**
- [Run the tests](#tests)
## Prerequisites
To build nim-codex, developer tools need to be installed and accessible in the OS.
Instructions below correspond roughly to environmental setups in nim-codex's [CI workflow](https://github.com/codex-storage/nim-codex/blob/master/.github/workflows/ci.yml) and are known to work.
Other approaches may be viable. On macOS, some users may prefer [MacPorts](https://www.macports.org/) to [Homebrew](https://brew.sh/). On Windows, rather than use MSYS2, some users may prefer to install developer tools with [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/), [Scoop](https://scoop.sh/), or [Chocolatey](https://chocolatey.org/), or download installers for e.g. Make and CMake while otherwise relying on official Windows developer tools. Community contributions to these docs and our build system are welcome!
### Rust
The current implementation of Codex's zero-knowledge proving circuit requires the installation of rust v1.79.0 or greater. Be sure to install it for your OS and add it to your terminal's path such that the command `cargo --version` gives a compatible version.
### Linux
> [!WARNING]
> Linux builds currently require gcc $\leq$ 13. If this is not an option in your
> system, you can try [building within Docker](#building-within-docker) as a workaround.
*Package manager commands may require `sudo` depending on OS setup.*
On a bare bones installation of Debian (or a distribution derived from Debian, such as Ubuntu), run
```shell
apt-get update && apt-get install build-essential cmake curl git rustc cargo
```
Non-Debian distributions have different package managers: `apk`, `dnf`, `pacman`, `rpm`, `yum`, etc.
For example, on a bare bones installation of Fedora, run
```shell
dnf install @development-tools cmake gcc-c++ rust cargo
```
If your distribution does not provide the required Rust version, you can install it using [rustup](https://www.rust-lang.org/tools/install)
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs/ | sh -s -- --default-toolchain=1.79.0 -y
. "$HOME/.cargo/env"
```
Note that you will currently not be able to build Codex with gcc 14. To verify that
you have a supported version, run:
```shell
gcc --version
```
If you get a number that starts with 14 (e.g. `14.2.0`), then you need to either
downgrade, or try a workaround like [building within Docker](#building-within-docker).
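The check can also be scripted; a sketch (assumes `gcc -dumpversion` prints something like `13.2.0` or `14`):

```shell
# Extract the major version from `gcc -dumpversion` (e.g. 14.2.0 -> 14).
gcc_major() { echo "$1" | cut -d. -f1; }

if [ "$(gcc_major "$(gcc -dumpversion)")" -ge 14 ]; then
  echo "unsupported gcc detected: downgrade to 13 or build within Docker" >&2
fi
```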
### macOS
Install the [Xcode Command Line Tools](https://mac.install.guide/commandlinetools/index.html) by opening a terminal and running
```shell
xcode-select --install
```
Install [Homebrew (`brew`)](https://brew.sh/) and in a new terminal run
```shell
brew install bash cmake rust
```
Check that `PATH` is setup correctly
```shell
which bash cmake
# /usr/local/bin/bash
# /usr/local/bin/cmake
```
### Windows + MSYS2
*Instructions below assume the OS is 64-bit Windows and that the hardware or VM is [x86-64](https://en.wikipedia.org/wiki/X86-64) compatible.*
Download and run the installer from [msys2.org](https://www.msys2.org/).
Launch an MSYS2 [environment](https://www.msys2.org/docs/environments/). UCRT64 is generally recommended: from the Windows *Start menu* select `MSYS2 MinGW UCRT x64`.
Assuming a UCRT64 environment, in Bash run
```shell
pacman -Suy
pacman -S base-devel git unzip mingw-w64-ucrt-x86_64-toolchain mingw-w64-ucrt-x86_64-cmake mingw-w64-ucrt-x86_64-rust
```
Downgrade GCC to version 13 [^gcc-14]:
```shell
pacman -U --noconfirm \
https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-13.2.0-6-any.pkg.tar.zst \
https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-libs-13.2.0-6-any.pkg.tar.zst
```
#### Optional: VSCode Terminal integration
You can link the MSYS2-UCRT64 terminal into VSCode by modifying the configuration file as shown below.
File: `C:/Users/<username>/AppData/Roaming/Code/User/settings.json`
```json
{
...
"terminal.integrated.profiles.windows": {
...
"MSYS2-UCRT64": {
"path": "C:\\msys64\\usr\\bin\\bash.exe",
"args": [
"--login",
"-i"
],
"env": {
"MSYSTEM": "UCRT64",
"CHERE_INVOKING": "1",
"MSYS2_PATH_TYPE": "inherit"
}
}
}
}
```
### Other
It is possible that nim-codex can be built and run on other platforms supported by the [Nim](https://nim-lang.org/) language: BSD family, older versions of Windows, etc. There has not been sufficient experimentation with nim-codex on such platforms, so instructions are not provided. Community contributions to these docs and our build system are welcome!
## Repository
In Bash run
```shell
git clone https://github.com/codex-storage/nim-codex.git repos/nim-codex && cd repos/nim-codex
```
nim-codex uses the [nimbus-build-system](https://github.com/status-im/nimbus-build-system), so next run
```shell
make update
```
This step can take a while to complete because by default it builds the [Nim compiler](https://nim-lang.org/docs/nimc.html).
To see more output from `make`, pass `V=1`. This works for all `make` targets in projects using the nimbus-build-system:
```shell
make V=1 update
```
## Executable
In Bash run
```shell
make
```
The default `make` target creates the `build/codex` executable.
## Tools
### Circuit download tool
To build the circuit download tool located in `tools/cirdl` run:
```shell
make cirdl
```
## Example usage
See the instructions in the [Quick Start](/learn/quick-start).
## Tests
In Bash run
```shell
make test
```
### testAll
#### Prerequisites
To run the integration tests, an Ethereum test node is required. Follow these instructions to set it up.
##### Windows (do this before 'All platforms')
1. Download and install Visual Studio 2017 or newer (not VSCode!) from https://visualstudio.microsoft.com. In the Workloads overview, enable `Desktop development with C++`.
##### All platforms
1. Install NodeJS (tested with v18.14.0); consider using [Node Version Manager (`nvm`)](https://github.com/nvm-sh/nvm#readme) as a version manager.
1. Open a terminal
1. Go to the vendor/codex-contracts-eth folder: `cd /<git-root>/vendor/codex-contracts-eth/`
1. `npm install` -> Should complete with the number of packages added and an overview of known vulnerabilities.
1. `npm test` -> Should output test results. May take a minute.
Before the integration tests are started, you must start the Ethereum test node manually.
1. Open a terminal
1. Go to the vendor/codex-contracts-eth folder: `cd /<git-root>/vendor/codex-contracts-eth/`
1. `npm start` -> This should launch Hardhat, and output a number of keys and a warning message.
#### Run
The `testAll` target runs the same tests as `make test`, and additionally runs tests for nim-codex's Ethereum contracts as well as a basic suite of integration tests.
In a new terminal run:
```shell
make testAll
```
## Building Within Docker
For Linux distributions that ship with gcc 14, where a downgrade to 13 is
not possible or desirable, building within a Docker container and pulling
the binaries out by copying or mounting remains an option; e.g.:
```bash
# Clone original repo.
git clone https://github.com/codex-storage/nim-codex
# Build inside docker
docker build -t codexstorage/nim-codex:latest -f nim-codex/docker/codex.Dockerfile nim-codex
# Extract executable
docker create --name=codex-build codexstorage/nim-codex:latest
docker cp codex-build:/usr/local/bin/codex ./codex
docker cp codex-build:/usr/local/bin/cirdl ./cirdl
```
and voilà, you should have the binaries available in the current folder.

# Download Flow
Sequence of interactions that result in data blocks being transferred across the network.
## Local Store
When data is available in the local blockstore, the flow is as follows:
```mermaid
sequenceDiagram
actor Alice
participant API
Alice->>API: Download(CID)
API->>+Node/StoreStream: Retrieve(CID)
loop Get manifest block, then data blocks
Node/StoreStream->>NetworkStore: GetBlock(CID)
NetworkStore->>LocalStore: GetBlock(CID)
LocalStore->>NetworkStore: Block
NetworkStore->>Node/StoreStream: Block
end
Node/StoreStream->>Node/StoreStream: Handle erasure coding
Node/StoreStream->>-API: Data stream
API->>Alice: Stream download of block
```
## Network Store
When data is not found in the local blockstore, the block-exchange engine is used to discover the location of the block within the network. A connection is then established to the node(s) that have the block, and the exchange can take place.
```mermaid
sequenceDiagram
box
actor Alice
participant API
participant Node/StoreStream
participant NetworkStore
participant Discovery
participant Engine
end
box
participant OtherNode
end
Alice->>API: Download(CID)
API->>+Node/StoreStream: Retrieve(CID)
Node/StoreStream->>-API: Data stream
API->>Alice: Download stream begins
loop Get manifest block, then data blocks
Node/StoreStream->>NetworkStore: GetBlock(CID)
NetworkStore->>Engine: RequestBlock(CID)
opt CID not known
Engine->>Discovery: Discovery Block
Discovery->>Discovery: Locates peers who provide block
Discovery->>Engine: Peers
Engine->>Engine: Update peers admin
end
Engine->>Engine: Select optimal peer
Engine->>OtherNode: Send WantHave list
OtherNode->>Engine: Send BlockPresence
Engine->>Engine: Update peers admin
Engine->>Engine: Decide to buy block
Engine->>OtherNode: Send WantBlock list
OtherNode->>Engine: Send Block
Engine->>NetworkStore: Block
NetworkStore->>NetworkStore: Add to Local store
NetworkStore->>Node/StoreStream: Resolve Block
Node/StoreStream->>Node/StoreStream: Handle erasure coding
Node/StoreStream->>API: Push data to stream
end
API->>Alice: Download stream finishes
```

---
outline: [2, 3]
---
# Running a Local Codex Network with Marketplace Support
This tutorial will teach you how to run a small Codex network with the
_storage marketplace_ enabled; i.e., the functionality in Codex which
allows participants to offer and buy storage in a market, ensuring that
storage providers honor their part of the deal by means of cryptographic proofs.
In this tutorial, you will:
1. [Set Up a Geth PoA network](#_1-set-up-a-geth-poa-network);
2. [Set up The Marketplace](#_2-set-up-the-marketplace);
3. [Run Codex](#_3-run-codex);
4. [Buy and Sell Storage in the Marketplace](#_4-buy-and-sell-storage-on-the-marketplace).
## Prerequisites
To complete this tutorial, you will need:
* the [geth](https://github.com/ethereum/go-ethereum) Ethereum client;
You need version `1.13.x` of geth as newer versions no longer support
Proof of Authority (PoA). This tutorial was tested using geth version `1.13.15`.
* a Codex binary, which [you can compile from source](https://github.com/codex-storage/nim-codex?tab=readme-ov-file#build-and-run).
We will also be using [bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell))
syntax throughout. If you use a different shell, you may need to adapt
things to your platform.
To get started, create a new folder where we will keep the tutorial-related
files so that we can keep them separate from the codex repository.
We assume the name of the folder to be `marketplace-tutorial`.
## 1. Set Up a Geth PoA Network
For this tutorial, we will use a simple
[Proof-of-Authority](https://github.com/ethereum/EIPs/issues/225) network
with geth. The first step is creating a _signer account_: an account which
will be used by geth to sign the blocks in the network.
Any block signed by a signer is accepted as valid.
### 1.1. Create a Signer Account
To create a signer account, from the `marketplace-tutorial` directory run:
```bash
geth account new --datadir geth-data
```
The account generator will ask you to input a password, which you can
leave blank. It will then print some information,
including the account's public address:
```bash
INFO [09-29|16:49:24.244] Maximum peer count ETH=50 total=50
Your new account is locked with a password. Please give a password. Do not forget this password.
Password:
Repeat password:
Your new key was generated
Public address of the key: 0x33A904Ad57D0E2CB8ffe347D3C0E83C2e875E7dB
Path of the secret key file: geth-data/keystore/UTC--2024-09-29T14-49-31.655272000Z--33a904ad57d0e2cb8ffe347d3c0e83c2e875e7db
- You can share your public address with anyone. Others need it to interact with you.
- You must NEVER share the secret key with anyone! The key controls access to your funds!
- You must BACKUP your key file! Without the key, it's impossible to access account funds!
- You must REMEMBER your password! Without the password, it's impossible to decrypt the key!
```
In this example, the public address of the signer account is
`0x33A904Ad57D0E2CB8ffe347D3C0E83C2e875E7dB`.
Yours will print a different address. Save it for later usage.
Next set an environment variable for later usage:
```bash
export GETH_SIGNER_ADDR="0x0000000000000000000000000000000000000000"
echo ${GETH_SIGNER_ADDR} > geth_signer_address.txt
```
> Here, make sure you replace `0x0000000000000000000000000000000000000000`
> with the public address of your signer account
> (`0x33A904Ad57D0E2CB8ffe347D3C0E83C2e875E7dB` in our example).
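To catch copy-paste mistakes early, you can sanity-check the variable; a small POSIX-shell sketch (the `addr_ok` helper is ours, not part of geth):

```bash
# A valid address is "0x" followed by exactly 40 hex characters.
addr_ok() {
  case "$1" in
    0x*) [ "${#1}" -eq 42 ] && [ -z "$(echo "${1#0x}" | tr -d '0-9a-fA-F')" ] ;;
    *) false ;;
  esac
}

if addr_ok "$GETH_SIGNER_ADDR"; then
  echo "signer address format looks OK"
else
  echo "unexpected signer address: ${GETH_SIGNER_ADDR}" >&2
fi
```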
### 1.2. Configure The Network and Create the Genesis Block
The next step is telling geth what kind of network you want to run.
We will be running a [pre-merge](https://ethereum.org/en/roadmap/merge/)
network with Proof-of-Authority consensus.
To get that working, create a `network.json` file.
If you set the `GETH_SIGNER_ADDR` variable above you can run the following
command to create the `network.json` file:
```bash
echo "{\"config\": { \"chainId\": 12345, \"homesteadBlock\": 0, \"eip150Block\": 0, \"eip155Block\": 0, \"eip158Block\": 0, \"byzantiumBlock\": 0, \"constantinopleBlock\": 0, \"petersburgBlock\": 0, \"istanbulBlock\": 0, \"berlinBlock\": 0, \"londonBlock\": 0, \"arrowGlacierBlock\": 0, \"grayGlacierBlock\": 0, \"clique\": { \"period\": 1, \"epoch\": 30000 } }, \"difficulty\": \"1\", \"gasLimit\": \"8000000\", \"extradata\": \"0x0000000000000000000000000000000000000000000000000000000000000000${GETH_SIGNER_ADDR:2}0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\", \"alloc\": { \"${GETH_SIGNER_ADDR}\": { \"balance\": \"10000000000000000000000\"}}}" > network.json
```
You can also create the file manually, remembering to update it with your
signer public address:
```json
{
"config": {
"chainId": 12345,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"arrowGlacierBlock": 0,
"grayGlacierBlock": 0,
"clique": {
"period": 1,
"epoch": 30000
}
},
"difficulty": "1",
"gasLimit": "8000000",
"extradata": "0x000000000000000000000000000000000000000000000000000000000000000033A904Ad57D0E2CB8ffe347D3C0E83C2e875E7dB0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"alloc": {
"0x33A904Ad57D0E2CB8ffe347D3C0E83C2e875E7dB": {
"balance": "10000000000000000000000"
}
}
}
```
Note that the signer account address is embedded in two different places:
* inside of the `"extradata"` string, surrounded by zeroes and stripped of
its `0x` prefix;
* as an entry key in the `alloc` section.
Make sure to replace that ID with the account ID that you wrote down in
[Step 1.1](#_1-1-create-a-signer-account).
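If you prefer, the `extradata` field can be assembled from `GETH_SIGNER_ADDR` directly; a sketch of the layout (32 zero bytes of vanity data, the 20-byte signer address stripped of its `0x` prefix, then 65 zero bytes reserved for the seal):

```bash
# Clique extradata: 32 vanity bytes + signer address (no 0x) + 65 seal bytes.
VANITY=$(printf '0%.0s' $(seq 1 64))   # 32 bytes -> 64 hex zeros
SEAL=$(printf '0%.0s' $(seq 1 130))    # 65 bytes -> 130 hex zeros
EXTRADATA="0x${VANITY}${GETH_SIGNER_ADDR#0x}${SEAL}"
echo "$EXTRADATA"
```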
Once `network.json` is created, you can initialize the network with:
```bash
geth init --datadir geth-data network.json
```
The output of the above command may include some warnings, like:
```bash
WARN [08-21|14:48:12.305] Unknown config environment variable envvar=GETH_SIGNER_ADDR
```
or even errors when running the command for the first time:
```bash
ERROR[08-21|14:48:12.399] Head block is not reachable
```
The important part is that at the end you should see something similar to:
```bash
INFO [08-21|14:48:12.639] Successfully wrote genesis state database=lightchaindata hash=768bf1..42d06a
```
### 1.3. Start your PoA Node
We are now ready to start our $1$-node private blockchain.
To launch the signer node, open a separate terminal in the same working
directory and make sure you have the `GETH_SIGNER_ADDR` set.
For convenience, use the `geth_signer_address.txt` file created earlier:
```bash
export GETH_SIGNER_ADDR=$(cat geth_signer_address.txt)
```
Having the `GETH_SIGNER_ADDR` variable set, run:
```bash
geth\
--datadir geth-data\
--networkid 12345\
--unlock ${GETH_SIGNER_ADDR}\
--nat extip:127.0.0.1\
--netrestrict 127.0.0.0/24\
--mine\
--miner.etherbase ${GETH_SIGNER_ADDR}\
--http\
--allow-insecure-unlock
```
Note that, once again, the signer account created in
[Step 1.1](#_1-1-create-a-signer-account) appears in both
`--unlock` and `--miner.etherbase`.
Geth will prompt you to insert the account's password as it starts up.
Once you do that, it should be able to start up and begin "mining" blocks.
Here, too, you may encounter errors like:
```bash
ERROR[08-21|15:00:27.625] Bootstrap node filtered by netrestrict id=c845e51a5e470e44 ip=18.138.108.67
ERROR[08-21|15:00:27.625] Bootstrap node filtered by netrestrict id=f23ac6da7c02f84a ip=3.209.45.79
ERROR[08-21|15:00:27.625] Bootstrap node filtered by netrestrict id=ef2d7ab886910dc8 ip=65.108.70.101
ERROR[08-21|15:00:27.625] Bootstrap node filtered by netrestrict id=6b36f791352f15eb ip=157.90.35.166
```
You can safely ignore them.
If the command above fails with:
```bash
Fatal: Failed to register the Ethereum service: only PoS networks are supported, please transition old ones with Geth v1.13.x
```
make sure you are running the correct Geth version
(see [Prerequisites](#prerequisites)).
## 2. Set Up The Marketplace
You will need to open a new terminal for this section, and geth needs to be
running already. Setting up the Codex marketplace entails:
1. deploying the Codex Marketplace contracts to our private blockchain;
2. setting up the Ethereum accounts we will use to buy and sell storage in
the Codex marketplace;
3. provisioning those accounts with the required token balances.
### 2.1. Deploy the Codex Marketplace Contracts
Make sure you leave the `marketplace-tutorial` directory, and clone
the `codex-storage/nim-codex.git`:
```bash
git clone https://github.com/codex-storage/nim-codex.git
```
> If you just want to clone the repo to run the tutorial, you can
> skip the history and just download the head of the master branch by using
> `--depth 1` option: `git clone --depth 1 https://github.com/codex-storage/nim-codex.git`
Thus, our directory structure for the purpose of this tutorial looks like this:
```bash
|
|-- nim-codex
└-- marketplace-tutorial
```
> You could clone `codex-storage/nim-codex.git` to some other location.
> Just to keep things nicely separated, it is best to make sure that
> `nim-codex` is not under the `marketplace-tutorial` directory.
Now, from the `nim-codex` folder run:
```bash
make update && make
```
> This may take a moment as it will also build the `nim` compiler. Be patient.
Now, install the dependencies needed to deploy the contracts:
```bash
cd vendor/codex-contracts-eth
npm install
```
> While writing the document we used `node` version `v20.17.0` and
> `npm` version `10.8.2`.
Before continuing, you must **wait until $256$ blocks are mined in your**
**PoA network**, or the deploy will fail. This should take about
$4$ minutes and $30$ seconds. You can check which block height you are
currently at by running the following command
**from the `marketplace-tutorial` folder**:
```bash
geth attach --exec web3.eth.blockNumber ./geth-data/geth.ipc
```
Once the height gets past $256$, you are ready to go.
To deploy contracts, from the `codex-contracts-eth` directory run:
```bash
export DISTTEST_NETWORK_URL=http://localhost:8545
npx hardhat --network codexdisttestnetwork deploy
```
If the command completes successfully, you will see output similar
to this:
```bash
Deployed Marketplace with Groth16 Verifier at:
0xCf0df6C52B02201F78E8490B6D6fFf5A82fC7BCd
```
> Of course, your address will be different.
You are now ready to prepare the accounts.
### 2.2. Generate the Required Accounts
We will run $2$ Codex nodes: a **storage provider**, which will sell storage
on the network, and a **client**, which will buy and use such storage;
we therefore need two valid Ethereum accounts. We could create random
accounts by using one of the many tools available to that end but, since
this is a tutorial running on a local private network, we will simply
provide you with two pre-made accounts along with their private keys,
which you can copy and paste instead:
First make sure you're back in the `marketplace-tutorial` folder and
not the `codex-contracts-eth` subfolder. Then set these variables:
**Storage:**
```bash
export ETH_STORAGE_ADDR=0x45BC5ca0fbdD9F920Edd12B90908448C30F32a37
export ETH_STORAGE_PK=0x06c7ac11d4ee1d0ccb53811b71802fa92d40a5a174afad9f2cb44f93498322c3
echo $ETH_STORAGE_PK > storage.pkey && chmod 0600 storage.pkey
```
**Client:**
```bash
export ETH_CLIENT_ADDR=0x9F0C62Fe60b22301751d6cDe1175526b9280b965
export ETH_CLIENT_PK=0x5538ec03c956cb9d0bee02a25b600b0225f1347da4071d0fd70c521fdc63c2fc
echo $ETH_CLIENT_PK > client.pkey && chmod 0600 client.pkey
```
### 2.3. Provision Accounts with Tokens
We now need to transfer some ETH to each of the accounts, as well as provide
them with some Codex tokens for the storage node to use as collateral and
for the client node to buy actual storage.
Although the process is not particularly complicated, we suggest you use
[the script we prepared](https://github.com/gmega/local-codex-bare/blob/main/scripts/mint-tokens.js)
for that. This script, essentially:
1. reads the Marketplace contract address and its ABI from the deployment data;
2. transfers $1$ ETH from the signer account to a target account if the target
account has no ETH balance;
3. mints $n$ Codex tokens and adds them to the target account's balance.
To use the script, just download it into a local file named `mint-tokens.js`,
for instance using `curl` (make sure you are in
the `marketplace-tutorial` directory):
```bash
# download script
curl https://raw.githubusercontent.com/gmega/codex-local-bare/main/scripts/mint-tokens.js -o mint-tokens.js
```
Then run:
```bash
# set the contract file location (we assume you are in the marketplace-tutorial directory)
export CONTRACT_DEPLOY_FULL="../nim-codex/vendor/codex-contracts-eth/deployments/codexdisttestnetwork"
export GETH_SIGNER_ADDR=$(cat geth_signer_address.txt)
# Installs Web3-js
npm install web3
# Provides tokens to the storage account.
node ./mint-tokens.js $CONTRACT_DEPLOY_FULL/TestToken.json $GETH_SIGNER_ADDR 0x45BC5ca0fbdD9F920Edd12B90908448C30F32a37 10000000000
# Provides tokens to the client account.
node ./mint-tokens.js $CONTRACT_DEPLOY_FULL/TestToken.json $GETH_SIGNER_ADDR 0x9F0C62Fe60b22301751d6cDe1175526b9280b965 10000000000
```
If you get a message like
```bash
Usage: mint-tokens.js <token-hardhat-deploy-json> <signer-account> <receiver-account> <token-ammount>
```
then you need to ensure you provided all the required arguments.
In particular you need to ensure that the `GETH_SIGNER_ADDR` env variable
holds the signer address (we used
`export GETH_SIGNER_ADDR=$(cat geth_signer_address.txt)` above to
make sure it is set).
## 3. Run Codex
With accounts and geth in place, we can now start the Codex nodes.
### 3.1. Storage Node
The storage node will be the one storing data and submitting the proofs of
storage to the chain. To do that, it needs access to:
1. the address of the Marketplace contract that has been deployed to
the local geth node in [Step 2.1](#_2-1-deploy-the-codex-marketplace-contracts);
2. the sample ceremony files which are shipped in the Codex contracts repo
(`nim-codex/vendor/codex-contracts-eth`).
**Address of the Marketplace Contract.** The contract address can be found
inside of the file `nim-codex/vendor/codex-contracts-eth/deployments/codexdisttestnetwork/Marketplace.json`.
We captured that location above in the `CONTRACT_DEPLOY_FULL` variable, so from
the `marketplace-tutorial` folder just run:
```bash
grep '"address":' ${CONTRACT_DEPLOY_FULL}/Marketplace.json
```
which should print something like:
```bash
"address": "0xCf0df6C52B02201F78E8490B6D6fFf5A82fC7BCd",
```
> This address should match the address we got earlier when deploying
> the Marketplace contract above.
Then run the following with the correct marketplace address:
```bash
export MARKETPLACE_ADDRESS="0x0000000000000000000000000000000000000000"
echo ${MARKETPLACE_ADDRESS} > marketplace_address.txt
```
where you replace `0x0000000000000000000000000000000000000000` with
the address of the Marketplace contract deployed in
[Step 2.1](#_2-1-deploy-the-codex-marketplace-contracts).
**Prover ceremony files.** The ceremony files are under the
`nim-codex/vendor/codex-contracts-eth/verifier/networks/codexdisttestnetwork`
subdirectory. There are three of them: `proof_main.r1cs`, `proof_main.zkey`,
and `proof_main.wasm`. We will need all of them to start the Codex storage node.
**Starting the storage node.** Let:
* `PROVER_ASSETS` contain the directory where the prover ceremony files are
located. **This must be an absolute path**;
* `CODEX_BINARY` contain the location of your Codex binary;
* `MARKETPLACE_ADDRESS` contain the address of the Marketplace contract
(we have already set it above).
Set these paths into environment variables (make sure you are in
the `marketplace-tutorial` directory):
```bash
export CONTRACT_DEPLOY_FULL=$(realpath "../nim-codex/vendor/codex-contracts-eth/deployments/codexdisttestnetwork")
export PROVER_ASSETS=$(realpath "../nim-codex/vendor/codex-contracts-eth/verifier/networks/codexdisttestnetwork/")
export CODEX_BINARY=$(realpath "../nim-codex/build/codex")
export MARKETPLACE_ADDRESS=$(cat marketplace_address.txt)
```
> You may notice that we have already set the `CONTRACT_DEPLOY_FULL` variable
> above. Here, we make sure it is an absolute path.
To launch the storage node, run:
```bash
${CODEX_BINARY}\
--data-dir=./codex-storage\
--listen-addrs=/ip4/0.0.0.0/tcp/8080\
--api-port=8000\
--disc-port=8090\
persistence\
--eth-provider=http://localhost:8545\
--eth-private-key=./storage.pkey\
--marketplace-address=${MARKETPLACE_ADDRESS}\
--validator\
--validator-max-slots=1000\
prover\
--circom-r1cs=${PROVER_ASSETS}/proof_main.r1cs\
--circom-wasm=${PROVER_ASSETS}/proof_main.wasm\
--circom-zkey=${PROVER_ASSETS}/proof_main.zkey
```
**Starting the client node.**
The client node is started similarly except that:
* we need to pass the SPR of the storage node so it can form a network with it;
* since it does not run any proofs, it does not require any ceremony files.
We get the Signed Peer Record (SPR) of the storage node so we can bootstrap
the client node with it. To get the SPR, issue the following call:
```bash
curl -H 'Accept: text/plain' 'http://localhost:8000/api/codex/v1/spr' --write-out '\n'
```
You should get the SPR back starting with `spr:`.
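A quick sanity check before reusing the record (a sketch; assumes the storage node's API on port 8000 as configured above — the `is_spr` helper is ours):

```bash
# A signed peer record from the API should start with the "spr:" prefix.
is_spr() { case "$1" in spr:*) return 0 ;; *) return 1 ;; esac; }

STORAGE_NODE_SPR=$(curl -s -H 'Accept: text/plain' 'http://localhost:8000/api/codex/v1/spr')
if is_spr "$STORAGE_NODE_SPR"; then
  echo "got a valid-looking SPR"
else
  echo "unexpected response: ${STORAGE_NODE_SPR}" >&2
fi
```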
Before you proceed, open a new terminal and enter the `marketplace-tutorial` directory.
Next set these paths into environment variables:
```bash
# set the SPR for the storage node
export STORAGE_NODE_SPR=$(curl -H 'Accept: text/plain' 'http://localhost:8000/api/codex/v1/spr')
# basic vars
export CONTRACT_DEPLOY_FULL=$(realpath "../nim-codex/vendor/codex-contracts-eth/deployments/codexdisttestnetwork")
export CODEX_BINARY=$(realpath "../nim-codex/build/codex")
export MARKETPLACE_ADDRESS=$(cat marketplace_address.txt)
```
and then run:
```bash
${CODEX_BINARY}\
--data-dir=./codex-client\
--listen-addrs=/ip4/0.0.0.0/tcp/8081\
--api-port=8001\
--disc-port=8091\
--bootstrap-node=${STORAGE_NODE_SPR}\
persistence\
--eth-provider=http://localhost:8545\
--eth-private-key=./client.pkey\
--marketplace-address=${MARKETPLACE_ADDRESS}
```
## 4. Buy and Sell Storage on the Marketplace
Any storage negotiation has two sides: a buyer and a seller.
Therefore, before we can actually request storage, we must first offer
some of it for sale.
### 4.1 Sell Storage
The following request will cause the storage node to put out $50\text{MB}$
of storage for sale for $1$ hour, at a price of $1$ Codex token
per slot per second, while expressing that it's willing to take at most
a $1000$ Codex token penalty for not fulfilling its part of the contract.[^1]
```bash
curl 'http://localhost:8000/api/codex/v1/sales/availability' \
--header 'Content-Type: application/json' \
--data '{
"totalSize": "50000000",
"duration": "3600",
"minPrice": "1",
"maxCollateral": "1000"
}'
```
This should return a JSON response containing an `id` (e.g. `"id": "0xb55b3bc7aac2563d5bf08ce8a177a38b5a40254bfa7ee8f9c52debbb176d44b0"`)
which identifies this storage offer.
> To make JSON responses more readable, you can try the
> [jq](https://jqlang.github.io/jq/) JSON formatting utility
> by adding `| jq` after the command.
> On macOS you can install it with `brew install jq`.
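To reuse the returned `id` in later commands, you can capture it into a variable; a minimal sketch that extracts it with `sed`, so `jq` is not required (the `extract_id` helper is ours, and assumes the field name shown above):

```bash
# Pull the "id" field out of a JSON response with sed.
extract_id() { echo "$1" | sed -n 's/.*"id" *: *"\([^"]*\)".*/\1/p'; }

RESPONSE=$(curl -s 'http://localhost:8000/api/codex/v1/sales/availability')
echo "availability id: $(extract_id "$RESPONSE")"
```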
To check the current storage offers for this node, you can issue:
```bash
curl 'http://localhost:8000/api/codex/v1/sales/availability'
```
or with `jq`:
```bash
curl 'http://localhost:8000/api/codex/v1/sales/availability' | jq
```
This should print a list of offers, with the one you just created figuring
among them (for our tutorial, there will be only one offer returned
at this time).
### 4.2. Buy Storage
Before we can buy storage, we must have some actual data to request
storage for. Start by uploading a small file to your client node.
On Linux (or macOS) you could, for instance, use `dd` to generate a $1\text{MiB}$ file:
```bash
dd if=/dev/urandom of=./data.bin bs=1M count=1
```
Assuming your file is named `data.bin`, you can upload it with:
```bash
curl --request POST http://localhost:8001/api/codex/v1/data --header 'Content-Type: application/octet-stream' --write-out '\n' -T ./data.bin
```
Once the upload completes, you should see a _Content Identifier_,
or _CID_ (e.g. `zDvZRwzm2mK7tvDzKScRLapqGdgNTLyyEBvx1TQY37J2CdWdS6Sj`)
for the uploaded file printed to the terminal.
Use that CID in the purchase request:
```bash
# make sure to replace the CID below with the CID you got in the previous step
export CID=zDvZRwzm2mK7tvDzKScRLapqGdgNTLyyEBvx1TQY37J2CdWdS6Sj
```
```bash
curl "http://localhost:8001/api/codex/v1/storage/request/${CID}" \
--header 'Content-Type: application/octet-stream' \
--data "{
\"duration\": \"600\",
\"reward\": \"1\",
\"proofProbability\": \"3\",
\"expiry\": \"500\",
\"nodes\": 3,
\"tolerance\": 1,
\"collateral\": \"1000\"
}" \
--write-out '\n'
```
The parameters under `--data` say that:
1. we want to purchase storage for our file for $10$ minutes (`"duration": "600"`);
2. we are willing to pay up to $1$ token per slot per second (`"reward": "1"`);
3. our file will be split into three pieces (`"nodes": 3`).
Because we set `"tolerance": 1`, we only need two (`nodes - tolerance`)
pieces to rebuild the file; i.e., we can tolerate at most one node
ceasing to store our data, whether due to failure or other reasons;
4. we demand `1000` tokens in collateral from storage providers for each piece.
Since there are $3$ such pieces, there will be `3000` in total collateral
committed by the storage provider(s) once our request is started.
5. finally, the `expiry` puts a time limit on filling all the slots by
the storage provider(s). If the slots are not filled within the `expiry`
interval, the request will time out and fail.
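As a quick back-of-the-envelope check, the maximum cost and total collateral follow directly from those parameters (a sketch using plain shell arithmetic):

```bash
# Upper bound: duration (s) x reward (tokens/slot/s) x number of slots.
DURATION=600; REWARD=1; NODES=3; COLLATERAL=1000
echo "max cost: $(( DURATION * REWARD * NODES )) tokens"
echo "total collateral: $(( COLLATERAL * NODES )) tokens"
```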
### 4.3. Track your Storage Requests
POSTing a storage request will make it available in the storage market,
and a storage node will eventually pick it up.
You can poll the status of your request by means of:
```bash
export STORAGE_PURCHASE_ID="1d0ec5261e3364f8b9d1cf70324d70af21a9b5dccba380b24eb68b4762249185"
curl "http://localhost:8001/api/codex/v1/storage/purchases/${STORAGE_PURCHASE_ID}"
```
For instance:
```bash
> curl 'http://localhost:8001/api/codex/v1/storage/purchases/6c698cd0ad71c41982f83097d6fa75beb582924e08a658357a1cd4d7a2a6766d'
```
This returns a result like:
```json
{
"requestId": "0x86501e4677a728c6a8031971d09b921c3baa268af06b9f17f1b745e7dba5d330",
"request": {
"client": "0x9f0c62fe60b22301751d6cde1175526b9280b965",
"ask": {
"slots": 3,
"slotSize": "262144",
"duration": "1000",
"proofProbability": "3",
"reward": "1",
"collateral": "1",
"maxSlotLoss": 1
},
"content": {
"cid": "zDvZRwzkyw1E7ABaUSmgtNEDjC7opzhUoHo99Vpvc98cDWeCs47u"
},
"expiry": "1711992852",
"nonce": "0x9f5e651ecd3bf73c914f8ed0b1088869c64095c0d7bd50a38fc92ebf66ff5915",
"id": "0x6c698cd0ad71c41982f83097d6fa75beb582924e08a658357a1cd4d7a2a6766d"
},
"state": "submitted",
"error": null
}
```
This shows that a request has been submitted but has not yet been filled.
Your request will be successful once `"state"` shows `"started"`.
Anything other than that means the request has not been completely
processed yet, and an `"error"` state other than `null` means it failed.
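The state check can be scripted; a sketch that extracts the `"state"` field with `sed` (the `state_of` helper is ours, and assumes the `STORAGE_PURCHASE_ID` variable from above):

```bash
# Pull the "state" field out of the purchase JSON with sed.
state_of() { echo "$1" | sed -n 's/.*"state" *: *"\([^"]*\)".*/\1/p'; }

RESPONSE=$(curl -s "http://localhost:8001/api/codex/v1/storage/purchases/${STORAGE_PURCHASE_ID}")
echo "current state: $(state_of "$RESPONSE")"
```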
Well, it was quite a journey, wasn't it? You can congratulate yourself on
successfully finishing the Codex marketplace tutorial!
[^1]: Codex files get partitioned into pieces called "slots" and distributed
to various storage providers. The collateral refers to one such slot,
and will be slowly eaten away as the storage provider fails to deliver
timely proofs, but the actual logic is [more involved than that](https://github.com/codex-storage/codex-contracts-eth/blob/6c9f797f408608958714024b9055fcc330e3842f/contracts/Marketplace.sol#L209).

# Codex Two-Client Test
The two-client test is a manual test you can perform to verify your setup and get familiar with the Codex API. These steps walk you through running and connecting two nodes, uploading a file to one, and then downloading that file from the other. The test also includes running a local blockchain node so that the marketplace functionality can be used.
## Prerequisite
Make sure you have [built the client](/learn/build) or obtained [compiled binary](/learn/quick-start#get-codex-binary).
## Steps
### 0. Set up blockchain node (optional)
You need to have NodeJS and npm installed in order to spin up a local blockchain node.
Go to directory `vendor/codex-contracts-eth` and run these two commands:
```shell
npm ci
npm start
```
This will launch a local Hardhat blockchain node.
### 1. Launch Node #1
Open a terminal and run:
- Mac/Linux:
```shell
codex \
--data-dir="$(pwd)/Data1" \
--api-port=8080 \
--disc-port=8090 \
--listen-addrs="/ip4/127.0.0.1/tcp/8070"
```
- Windows:
```batch
codex.exe ^
--data-dir="Data1" ^
--api-port=8080 ^
--disc-port=8090 ^
--listen-addrs="/ip4/127.0.0.1/tcp/8070"
```
Optionally, if you want to use the Marketplace blockchain functionality, you also need to include these flags: `--persistence --eth-account=<account>`, where `account` can be one of the following:
- `0x70997970C51812dc3A010C7d01b50e0d17dc79C8`
- `0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC`
- `0x90F79bf6EB2c4f870365E785982E1f101E93b906`
- `0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65`
**For each node use a different account!**
| Argument | Description |
|----------------|-----------------------------------------------------------------------|
| `data-dir` | We specify a relative path where the node will store its data. |
| `listen-addrs` | Multiaddress where the node will accept connections from other nodes. |
| `api-port` | Port on localhost where the node will expose its API. |
| `disc-port` | Port the node will use for its discovery service. |
| `persistence` | Enables Marketplace functionality. Requires a blockchain connection. |
| `eth-account` | Defines which blockchain account the node should use. |
Codex uses sane defaults for most of its arguments. Here we specify some explicitly for the purpose of this walk-through.
### 2. Sign of life
Run the command:
```bash
curl -X GET http://127.0.0.1:8080/api/codex/v1/debug/info
```
This GET request will return the node's debug information. The response will be in JSON and should look like:
```json
{
"id": "16Uiu2HAmJ3TSfPnrJNedHy2DMsjTqwBiVAQQqPo579DuMgGxmG99",
"addrs": [
"/ip4/127.0.0.1/tcp/8070"
],
"repo": "/Users/user/projects/nim-codex/Data1",
"spr": "spr:CiUIAhIhA1AL2J7EWfg7x77iOrR9YYBisY6CDtU2nEhuwDaQyjpkEgIDARo8CicAJQgCEiEDUAvYnsRZ-DvHvuI6tH1hgGKxjoIO1TacSG7ANpDKOmQQ2MWasAYaCwoJBH8AAAGRAh-aKkYwRAIgB2ooPfAyzWEJDe8hD2OXKOBnyTOPakc4GzqKqjM2OGoCICraQLPWf0oSEuvmSroFebVQx-3SDtMqDoIyWhjq1XFF",
"announceAddresses": [
"/ip4/127.0.0.1/tcp/8070"
],
"table": {
"localNode": {
"nodeId": "f6e6d48fa7cd171688249a57de0c1aba15e88308c07538c91e1310c9f48c860a",
"peerId": "16Uiu2HAmJ3TSfPnrJNedHy2DMsjTqwBiVAQQqPo579DuMgGxmG99",
"record": "...",
"address": "0.0.0.0:8090",
"seen": false
},
"nodes": []
},
"codex": {
"version": "untagged build",
"revision": "b3e626a5"
}
}
```
| Field | Description |
| ------------------- | ---------------------------------------------------------------------------------------- |
| `id` | Id of the node. Also referred to as 'peerId'. |
| `addrs` | Multiaddresses currently open to accept connections from other nodes. |
| `repo` | Path of this node's data folder. |
| `spr` | Signed Peer Record, encoded information about this node and its location in the network. |
| `announceAddresses` | Multiaddresses used for announcing this node. |
| `table` | Table of nodes present in the node's DHT. |
| `codex` | Codex version information. |
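Later steps need the `spr` and the local peer ID from this response; if you have `jq` installed, you can pull them out directly instead of copying them by hand. A small sketch:

```shell
# Extract the fields we will need later from the /debug/info JSON (stdin).
debug_spr()     { jq -r '.spr'; }
debug_peer_id() { jq -r '.table.localNode.peerId'; }

# Usage against node 1:
# curl -s http://127.0.0.1:8080/api/codex/v1/debug/info | debug_spr
```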
### 3. Launch Node #2
We will need the signed peer record (SPR) from the first node that you got in the previous step.
Replace `<SPR HERE>` in the following command with the SPR returned from the previous command, note that it should include the `spr:` at the beginning.
Open a new terminal and run:
- Mac/Linux:
```shell
codex \
--data-dir="$(pwd)/Data2" \
--api-port=8081 \
--disc-port=8091 \
--listen-addrs=/ip4/127.0.0.1/tcp/8071 \
--bootstrap-node=<SPR HERE>
```
- Windows:
```batch
codex.exe ^
--data-dir="Data2" ^
--api-port=8081 ^
--disc-port=8091 ^
--listen-addrs=/ip4/127.0.0.1/tcp/8071 ^
--bootstrap-node=<SPR HERE>
```
Alternatively, on Mac, Linux, or MSYS2 with a recent Codex binary, you can do this in one command:
```shell
codex \
--data-dir="$(pwd)/Data2" \
--api-port=8081 \
--disc-port=8091 \
--listen-addrs=/ip4/127.0.0.1/tcp/8071 \
--bootstrap-node=$(curl -H "Accept: text/plain" http://127.0.0.1:8080/api/codex/v1/spr)
```
Notice we're using a new data-dir, and we've increased each port number by one. This is needed so that the new node won't try to open ports already in use by the first node.
We're now also including the `bootstrap-node` argument. This allows us to link the new node to another one, bootstrapping our own little peer-to-peer network. SPR strings always start with `spr:`.
### 4. Connect The Two
Normally the two nodes will connect automatically. If they do not, or if you want to connect nodes manually, you can use the peer ID.
You can get the first node's peer id by running the following command and finding the `"peerId"` in the results:
```shell
curl -X GET \
-H "Accept: text/plain" \
http://127.0.0.1:8081/api/codex/v1/peerid
```
Next, replace `<PEER ID HERE>` in the following command with the peerId returned by the previous command:
```shell
curl -X GET \
http://127.0.0.1:8080/api/codex/v1/connect/<PEER ID HERE>?addrs=/ip4/127.0.0.1/tcp/8071
```
Alternatively, on Mac, Linux, or MSYS2 with a recent Codex binary, you can do this in one command:
```shell
curl -X GET \
http://127.0.0.1:8080/api/codex/v1/connect/$(curl -X GET -H "Accept: text/plain" http://127.0.0.1:8081/api/codex/v1/peerid)\?addrs=/ip4/127.0.0.1/tcp/8071
```
Notice that we are sending the "`peerId`" and the multiaddress of node 2 to the `/connect` endpoint of node 1. This provides node 1 all the information it needs to communicate with node 2. The response to this request should be `Successfully connected to peer`.
### 5. Upload The File
We're now ready to upload a file to the network. In this example we'll use node 1 for uploading and node 2 for downloading. But the reverse also works.
Next, replace `<FILE PATH>` in the following command with the path to the file you want to upload:
```shell
curl -X POST \
127.0.0.1:8080/api/codex/v1/data \
-H "Content-Type: application/octet-stream" \
-H "Expect: 100-continue" \
-T "<FILE PATH>"
```
> [!TIP]
> If curl is reluctant to show you the response, add `-o <FILENAME>` to write the result to a file.
Depending on the file size this may take a moment. Codex is processing the file by cutting it into blocks and generating erasure-recovery data. When the process is finished, the request will return the content-identifier (CID) of the uploaded file. It should look something like `zdj7WVxH8HHHenKtid8Vkgv5Z5eSUbCxxr8xguTUBMCBD8F2S`.
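For scripting, it is convenient to capture the returned CID in a variable and sanity-check its shape before using it. The regex below is only a loose plausibility check (multibase `z` prefix followed by base58 characters), not a full CID validation.

```shell
# Loose sanity check: does a string look like a base58 multibase CID?
looks_like_cid() {
  printf '%s' "$1" | grep -Eq '^z[1-9A-HJ-NP-Za-km-z]{8,}$'
}

# Hypothetical usage: upload and keep the CID for the download step.
# CID=$(curl -s -X POST 127.0.0.1:8080/api/codex/v1/data \
#   -H "Content-Type: application/octet-stream" \
#   -H "Expect: 100-continue" -T "<FILE PATH>")
# looks_like_cid "$CID" && echo "got CID: $CID"
```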
### 6. Download The File
Replace `<CID>` with the identifier returned in the previous step. Replace `<OUTPUT FILE>` with the filename where you want to store the downloaded file:
```bash
curl -X GET \
127.0.0.1:8081/api/codex/v1/data/<CID>/network \
-o <OUTPUT FILE>
```
Notice we are connecting to the second node in order to download the file. The CID we provide contains the information needed to locate the file within the network.
### 7. Verify The Results
If your file is downloaded and identical to the file you uploaded, then this manual test has passed. Rejoice! If on the other hand that didn't happen or you were unable to complete any of these steps, please leave us a message detailing your troubles.
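A quick way to perform this check from the shell is to compare the two files byte-for-byte, e.g. with `cmp`:

```shell
# Compare the uploaded and downloaded files byte-for-byte.
verify_download() {
  if cmp -s "$1" "$2"; then
    echo "files match"
  else
    echo "files differ"
    return 1
  fi
}

# Usage:
# verify_download "<FILE PATH>" "<OUTPUT FILE>"
```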
## Notes
When using the Ganache blockchain, there are some deviations from the expected behavior, mainly linked to how blocks are mined, which affects certain functionalities in the Sales module.
Therefore, if you are manually testing processes such as payout collection after a request is finished or proof submissions, you need to mine some blocks manually for it to work correctly. You can do this by using the following curl command:
```shell
curl -X POST \
127.0.0.1:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"evm_mine","params":[],"id":67}'
```
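If a test needs several blocks, you can wrap this call in a loop. A sketch; the helper only builds the JSON-RPC payload so each request gets its own `id`, and the loop in the usage comment assumes the local node from the command above.

```shell
# Build an evm_mine JSON-RPC payload with a given request id.
evm_mine_payload() {
  printf '{"jsonrpc":"2.0","method":"evm_mine","params":[],"id":%d}\n' "$1"
}

# Hypothetical usage: mine 5 blocks on the local Ganache node.
# for i in 1 2 3 4 5; do
#   curl -s -X POST 127.0.0.1:8545 \
#     -H "Content-Type: application/json" \
#     -d "$(evm_mine_payload "$i")" > /dev/null
# done
```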

# Quick Start
To get Codex running with this guide, you will need to:
- [Review the disclaimer](/codex/disclaimer)
- [Get the Codex binary](#get-codex-binary)
- [Run Codex](#run-codex)
- [Interact with Codex](#interact-with-codex)
## Get Codex binary
For a quick start we will use precompiled binaries from the [GitHub release page](https://github.com/codex-storage/nim-codex/releases). If you prefer to compile from source, please check [Build Codex](/learn/build).
Please follow the steps for your OS:
- [Linux/macOS](#linux-macos)
- [Windows](#windows)
### Linux/macOS
1. Install latest Codex release
```shell
curl -s https://get.codex.storage/install.sh | bash
```
2. Install dependencies
```shell
# Debian-based Linux
sudo apt update && sudo apt install libgomp1
```
3. Check the result
```shell
codex --version
```
### Windows
1. Install latest Codex release
```batch
curl -sO https://get.codex.storage/install.cmd && install.cmd
```
> [!WARNING]
> Windows antivirus software and built-in firewalls may cause steps to fail. We will cover some possible errors here, but always consider checking your setup if requests fail - in particular, if temporarily disabling your antivirus fixes it, then it is likely to be the culprit.
If you see an error like:
```batch
curl: (35) schannel: next InitializeSecurityContext failed: CRYPT_E_NO_REVOCATION_CHECK (0x80092012) - The revocation function was unable to check revocation for the certificate.
```
You may need to add the `--ssl-no-revoke` option to your curl calls, i.e., modify the calls above so they look like this:
```batch
curl -LO --ssl-no-revoke https://...
```
2. Update path using console output
- Current session only
```batch
:: Default installation directory
set "PATH=%PATH%;%LOCALAPPDATA%\Codex"
```
- Update PATH permanently
- Control Panel --> System --> Advanced System settings --> Environment Variables
- Alternatively, type `environment variables` into the Windows Search box
3. Check the result
```shell
codex --version
```
## Run Codex
Codex can [run in different modes](/learn/run#run); for a quick start we will run a [Codex node](/learn/run#codex-node) so that we can share files in the network.
1. Run Codex
**Linux/macOS**
```shell
codex \
--data-dir=datadir \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--nat=`curl -s https://ip.codex.storage` \
--api-cors-origin="*" \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P
```
**Windows**
> [!WARNING]
> Windows might at this stage prompt you to grant internet access to Codex. You must allow it for things to work.
> You may also need to add incoming firewall rules for Codex, which can be done with the `netsh` utility.
<details>
<summary>add firewall rules using netsh</summary>
```batch
:: Add rules
netsh advfirewall firewall add rule name="Allow Codex (TCP-In)" protocol=TCP dir=in localport=8070 action=allow
netsh advfirewall firewall add rule name="Allow Codex (UDP-In)" protocol=UDP dir=in localport=8090 action=allow
:: List rules
netsh advfirewall firewall show rule name=all | find /I "Codex"
:: Delete rules
netsh advfirewall firewall delete rule name="Allow Codex (TCP-In)"
netsh advfirewall firewall delete rule name="Allow Codex (UDP-In)"
```
</details>
```batch
:: Get Public IP
for /f "delims=" %a in ('curl -s --ssl-reqd ip.codex.storage') do set nat=%a
:: Run Codex
codex ^
--data-dir=datadir ^
--disc-port=8090 ^
--listen-addrs=/ip4/0.0.0.0/tcp/8070 ^
--nat=%nat% ^
--api-cors-origin="*" ^
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P
```
> [!TIP]
> In the example above we use [Codex Testnet](/networks/testnet#bootstrap-nodes) bootstrap nodes and thus we join Testnet. If you would like to join a different network, please use [appropriate value](/networks/networks).
2. Configure port-forwarding for the TCP/UDP ports on your Internet router
| Protocol | Service | Port |
| -------- | --------- | ------ |
| UDP | Discovery | `8090` |
| TCP | Transport | `8070` |
If you would like to purchase or sell storage, consider running a [Codex node with marketplace support](/learn/run#codex-node-with-marketplace-support) or a [Codex storage node](/learn/run#codex-storage-node).
## Interact with Codex
When your Codex node is up and running, you can interact with it using the [Codex App UI](https://app.codex.storage) for file sharing.
You can also interact with Codex using the [Codex API](/developers/api); for a walk-through of the API, consider following the [Using Codex](/learn/using) guide.
## Stay in touch
Want to stay up to date, or looking for further assistance? Join our [Discord server](https://discord.gg/codex-storage).
Ready to explore Codex functionality? Please [Join Codex Testnet](/networks/testnet).
If you want to run Codex locally without joining the Testnet, consider trying the [Codex Two-Client Test](/learn/local-two-client-test) or the [Running a Local Codex Network with Marketplace Support](/learn/local-marketplace).

@ -0,0 +1,586 @@
---
outline: [2, 4]
---
# Run Codex
Currently, Codex is implemented only in [Nim](https://nim-lang.org) and can be found in the [nim-codex](https://github.com/codex-storage/nim-codex) repository.
It is a command-line application which may be run in different ways:
- [Using binary](#using-binary)
- [Run as a daemon in Linux](#run-as-a-daemon-in-linux) (not supported yet)
- [Run as a service in Windows](#run-as-a-service-in-windows) (not supported yet)
- [Using Docker](#using-docker)
- [Using Docker Compose](#using-docker-compose)
- [On Kubernetes](#on-kubernetes)
When running, you need to pass [configuration](#configuration) options to the application, which can be done in different ways.
## Configuration
It is possible to configure a Codex node in several ways:
1. [CLI options](#cli-options)
2. [Environment variables](#environment-variables)
3. [Configuration file](#configuration-file)
The order of priority is the same as above:
[CLI options](#cli-options) --> [Environment variables](#environment-variables) --> [Configuration file](#configuration-file).
### Common information
#### Units
For some configuration options, we can pass values in common units, like the following:
```shell
--cache-size=1m/1M/1mb/1MB
--storage-quota=2m/2M/2mb/2MB
--block-mi=1s/1S/1m/1M/1h/1H/1d/1D/1w/1W
--block-ttl=2s/2S/2m/2M/2h/2H/2d/2D/2w/2W
```
#### Logging
Codex uses [Chronicles](https://github.com/status-im/nim-chronicles) logging library, which allows great flexibility in working with logs.
Chronicles has the concept of topics, which categorize log entries into semantic groups.
Using the `log-level` parameter, you can set the top-level log level like `--log-level="trace"`, but more importantly,
you can set log levels for specific topics like `--log-level="info; trace: marketplace,node; error: blockexchange"`,
which sets the top-level log level to `info` and then for topics `marketplace` and `node` sets the level to `trace` and so on.
### CLI options
```shell
codex --help
Usage:
codex [OPTIONS]... command
The following options are available:
--config-file Loads the configuration from a TOML file [=none].
--log-level Sets the log level [=info].
--metrics Enable the metrics server [=false].
--metrics-address Listening address of the metrics server [=127.0.0.1].
--metrics-port Listening HTTP port of the metrics server [=8008].
-d, --data-dir The directory where codex will store configuration and data
[=/root/.cache/codex].
-i, --listen-addrs Multi Addresses to listen on [=/ip4/0.0.0.0/tcp/0].
-a, --nat IP Addresses to announce behind a NAT [=127.0.0.1].
-e, --disc-ip Discovery listen address [=0.0.0.0].
-u, --disc-port Discovery (UDP) port [=8090].
--net-privkey Source of network (secp256k1) private key file path or name [=key].
-b, --bootstrap-node Specifies one or more bootstrap nodes to use when connecting to the network.
--max-peers The maximum number of peers to connect to [=160].
--agent-string Node agent string which is used as identifier in network [=Codex].
--api-bindaddr The REST API bind address [=127.0.0.1].
-p, --api-port The REST Api port [=8080].
--api-cors-origin The REST Api CORS allowed origin for downloading data. '*' will allow all
origins, '' will allow none. [=Disallow all cross origin requests to download
data].
--repo-kind Backend for main repo store (fs, sqlite, leveldb) [=fs].
-q, --storage-quota The size of the total storage quota dedicated to the node [=$DefaultQuotaBytes].
-t, --block-ttl Default block timeout in seconds - 0 disables the ttl [=$DefaultBlockTtl].
--block-mi Time interval in seconds - determines frequency of block maintenance cycle: how
often blocks are checked for expiration and cleanup
[=$DefaultBlockMaintenanceInterval].
--block-mn Number of blocks to check every maintenance cycle [=1000].
-c, --cache-size The size of the block cache, 0 disables the cache - might help on slow hardrives
[=0].
Available sub-commands:
codex persistence [OPTIONS]... command
The following options are available:
--eth-provider The URL of the JSON-RPC API of the Ethereum node [=ws://localhost:8545].
--eth-account The Ethereum account that is used for storage contracts.
--eth-private-key File containing Ethereum private key for storage contracts.
--marketplace-address Address of deployed Marketplace contract.
--validator Enables validator, requires an Ethereum node [=false].
--validator-max-slots Maximum number of slots that the validator monitors [=1000].
--reward-recipient Address to send payouts to (eg rewards and refunds).
Available sub-commands:
codex persistence prover [OPTIONS]...
The following options are available:
-cd, --circuit-dir Directory where Codex will store proof circuit data
[=/root/.cache/codex/circuits].
--circom-r1cs The r1cs file for the storage circuit
[=/root/.cache/codex/circuits/proof_main.r1cs].
--circom-wasm The wasm file for the storage circuit
[=/root/.cache/codex/circuits/proof_main.wasm].
--circom-zkey The zkey file for the storage circuit
[=/root/.cache/codex/circuits/proof_main.zkey].
--circom-no-zkey Ignore the zkey file - use only for testing! [=false].
--proof-samples Number of samples to prove [=5].
--max-slot-depth The maximum depth of the slot tree [=32].
--max-dataset-depth The maximum depth of the dataset tree [=8].
--max-block-depth The maximum depth of the network block merkle tree [=5].
--max-cell-elements The maximum number of elements in a cell [=67].
```
### Environment variables
In order to set a configuration option using environment variables, first find the desired [CLI option](#cli-options)
and then transform it in the following way:
1. prepend it with `CODEX_`
2. make it uppercase
3. replace `-` with `_`
For example, to configure `--log-level`, use `CODEX_LOG_LEVEL` as the environment variable name.
> [!WARNING]
> Some options can't be configured via environment variables for now [^multivalue-env-var] [^sub-commands].
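The three rules above are mechanical, so as an illustration they can be expressed as a tiny shell helper (a sketch for this page, not part of Codex itself):

```shell
# Convert a CLI option name (without the leading --) into the
# corresponding CODEX_* environment variable name:
# prepend CODEX_, uppercase, replace '-' with '_'.
to_env_var() {
  printf 'CODEX_%s\n' "$1" | tr 'a-z-' 'A-Z_'
}

# Example: set the log level via the environment instead of the CLI.
# export "$(to_env_var log-level)"=trace   # i.e. CODEX_LOG_LEVEL=trace
```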
### Configuration file
A [TOML](https://toml.io/en/) configuration file can also be used to set configuration values. Configuration option names and corresponding values are placed in the file, separated by `=`. Configuration option names can be obtained from the [`codex --help`](#cli-options) command, and should not include the `--` prefix. For example, a node's log level (`--log-level`) can be configured using TOML as follows:
```toml
log-level = "trace"
```
For options like `bootstrap-node` and `listen-addrs`, which accept multiple values, we can specify the data as an array:
```toml
listen-addrs = [
"/ip4/0.0.0.0/tcp/1234",
"/ip4/0.0.0.0/tcp/5678"
]
```
The Codex node can then read the configuration from this file using the `--config-file` CLI parameter, like:
```shell
codex --config-file=/path/to/your/config.toml
```
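Putting these pieces together, a minimal configuration file might look like the following sketch. The values are illustrative only, not recommendations; the option names are taken from the `codex --help` output above.

```toml
# config.toml - illustrative values only
data-dir = "datadir"
log-level = "info"
disc-port = 8090
listen-addrs = ["/ip4/0.0.0.0/tcp/8070"]
api-port = 8080
```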
## Run
Basically, we can run Codex in three different modes:
- [Codex node](#codex-node) - useful for local testing/development and basic file sharing.
- [Codex node with marketplace support](#codex-node-with-marketplace-support) - you can share files and buy storage; this is the main mode and should be used by end users.
- [Codex storage node](#codex-storage-node) - should be used by storage providers, or if you would like to sell your local storage.
We will also briefly cover the [Codex bootstrap node](#codex-bootstrap-node).
### Using binary
#### Codex node
We can run Codex in a simple way, like the following:
```shell
codex
```
> [!WARNING]
> This command may not work properly when we use GitHub releases [^data-dir].
It will use a default `data-dir` value, but we can pass a custom one:
```shell
codex --data-dir=datadir
```
This will run Codex as an isolated instance; to join an existing network, you need to pass a [bootstrap node](#codex-bootstrap-node). We can pass multiple nodes as well:
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--bootstrap-node=spr:CiUIAhIhAyUvcPkKoGE7-gh84RmKIPHJPdsX5Ugm_IHVJgF-Mmu_EgIDARo8CicAJQgCEiEDJS9w-QqgYTv6CHzhGYog8ck92xflSCb8gdUmAX4ya78QoemesAYaCwoJBES39Q2RAnVOKkYwRAIgLi3rouyaZFS_Uilx8k99ySdQCP1tsmLR21tDb9p8LcgCIG30o5YnEooQ1n6tgm9fCT7s53k6XlxyeSkD_uIO9mb3
```
> [!IMPORTANT]
> Make sure you are using a proper value for the [network](/networks/networks) you would like to join.
Also, to make your Codex node accessible to other network participants, you need to specify a public IP address which can be used to reach your node:
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=<your public IP>
```
> [!TIP]
> We can set the public IP using curl and an IP lookup service, like [ip.codex.storage](https://ip.codex.storage).
After that, the node will announce itself using your public IP, the default UDP port ([discovery](https://docs.libp2p.io/concepts/discovery-routing/overview/)) and a dynamic TCP port ([data transfer](https://docs.libp2p.io/concepts/transports/overview/)), which can be adjusted in the following way:
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070
```
This way, the node will announce itself using the specified [multiaddress](https://docs.libp2p.io/concepts/fundamentals/addressing/), which we can check via an [API](https://api.codex.storage/#tag/Debug/operation/getDebugInfo) call:
```shell
curl -s localhost:8080/api/codex/v1/debug/info | jq -r '.announceAddresses'
```
```json
[
"/ip4/<your public IP>/tcp/8070"
]
```
Basically, for P2P communication we should specify and configure two ports:
| # | Protocol | Function | CLI option | Example |
| - | -------- | ------------------------------------------------------------------------ | ---------------- | -------------------------------------- |
| 1 | UDP | [Discovery](https://docs.libp2p.io/concepts/discovery-routing/overview/) | `--disc-port` | `--disc-port=8090` |
| 2 | TCP | [Transport](https://docs.libp2p.io/concepts/transports/overview/) | `--listen-addrs` | `--listen-addrs=/ip4/0.0.0.0/tcp/8070` |
It is also required to set up port-forwarding on your Internet router, to make your node accessible to other participants [^port-forwarding].
So, a fully working basic configuration will look like the following:
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--api-cors-origin="*"
```
After the node is up and running and port-forwarding is configured, we should be able to [Upload a file](/learn/using#upload-a-file)/[Download a file](/learn/using#download-a-file) in the network using the [API](/developers/api).
You can also use the [Codex App UI](https://app.codex.storage) for file upload/download.
To be able to purchase storage, we should run a [Codex node with marketplace support](#codex-node-with-marketplace-support).
#### Codex node with marketplace support
[Marketplace](/learn/architecture.md#marketplace-architecture) support makes it possible to purchase storage in the Codex network. Basically, we just add the `persistence` sub-command and the required [CLI options](#cli-options) to the [previous run](#codex-node).
> [!NOTE]
> Please ignore `--eth-account` CLI option, as it is obsolete [^eth-account].
1. For daily use, consider running a local blockchain node for the [network](/networks/networks) you would like to join. That process is described in the [Join Codex Testnet](/networks/testnet) guide, but for a quick start we can use a public RPC endpoint.
2. Create a file with an Ethereum private key and set proper permissions:
> [!CAUTION]
> Please use key generation service for demo purpose only.
```shell
response=$(curl -s https://key.codex.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key
```
Show your Ethereum address:
```shell
cat eth.address
```
```
0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9
```
3. Fund your Ethereum address with ETH and tokens for the [network](/networks/networks) you would like to join.
4. Specify bootstrap nodes and the marketplace address for the [network](/networks/networks) you would like to join.
5. Run the node:
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--api-cors-origin="*" \
persistence \
--eth-provider=https://rpc.testnet.codex.storage \
--eth-private-key=eth.key \
--marketplace-address=0xAB03b6a58C5262f530D54146DA2a552B1C0F7648
```
> [!NOTE]
> Codex also has a marketplace contract address autodiscovery mechanism based on the chain ID; the mapping is done in the [source code](https://github.com/codex-storage/nim-codex/blob/master/codex/contracts/deployment.nim). This way, we can skip the `--marketplace-address` argument, or use it to override the hardcoded value.
After the node is up and running and your address has funds, you should be able to [Purchase storage](/learn/using#purchase-storage) using the [API](/developers/api).
You can also use the [Codex App UI](https://app.codex.storage) for storage purchases.
#### Codex storage node
A Codex [storage node](architecture#network-architecture) should be run by storage providers, or if you would like to sell your local storage.
For that, in addition to the [Codex node with marketplace support](#codex-node-with-marketplace-support) configuration, we use the `prover` sub-command and the required [CLI options](#cli-options).
This sub-command makes Codex listen for proof requests on the blockchain and answer them. To compute an answer to a proof request, Codex uses the stored data and circuit files generated by the code in the [codex-storage-proofs-circuits](https://github.com/codex-storage/codex-storage-proofs-circuits) repository.
Every [network](/networks/networks) uses its own generated set of files, which are stored in the [codex-contracts-eth](https://github.com/codex-storage/codex-contracts-eth/tree/master/verifier/networks) repository and also uploaded to a CDN. The hash of the file set is also known to the [marketplace smart contract](/learn/architecture#smart-contract).
To download the circuit files and make them available to the Codex app, there is a stand-alone utility, `cirdl`. It can be [compiled from source](/learn/build#circuit-download-tool) or downloaded from the [GitHub release page](https://github.com/codex-storage/nim-codex/releases).
1. Create an Ethereum key file
<details>
<summary>example</summary>
> [!CAUTION]
> Please use key generation service for demo purpose only.
```shell
response=$(curl -s https://key.codex.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key
```
Show your Ethereum address:
```shell
cat eth.address
```
```
0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9
```
</details>
2. To download the circuit files, pass the directory, RPC endpoint, and marketplace address to the circuit downloader:
```shell
# Create circuit files folder
mkdir -p datadir/circuits
chmod 700 datadir/circuits
# Download circuit files
cirdl \
datadir/circuits \
https://rpc.testnet.codex.storage \
0xAB03b6a58C5262f530D54146DA2a552B1C0F7648
```
3. Start the Codex storage node
```shell
codex \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
persistence \
--eth-provider=https://rpc.testnet.codex.storage \
--eth-private-key=eth.key \
--marketplace-address=0xAB03b6a58C5262f530D54146DA2a552B1C0F7648 \
prover \
--circuit-dir=datadir/circuits
```
> [!NOTE]
> You need to pass bootstrap nodes, a blockchain RPC endpoint, and a marketplace address for the [network](/networks/networks) you would like to join.
After the node is up and running and your address has funds, you should be able to [sell storage](/learn/using#create-storage-availability) using the [API](/developers/api).
You can also use the [Codex App UI](https://app.codex.storage) to sell storage.
#### Codex bootstrap node
Bootstrap nodes are used only to help peers with initial node discovery, so we run Codex with just some basic options:
```shell
codex \
--data-dir=datadir \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090
```
To get the bootstrap node's SPR, we can use an [API](https://api.codex.storage/#tag/Debug/operation/getDebugInfo) call:
```shell
curl -s localhost:8080/api/codex/v1/debug/info | jq -r '.spr'
```
```shell
spr:CiUIAhIhApd79-AxPqwRDmu7Pk-berTDtoIoMz0ovKjo85Tz8CUdEgIDARo8CicAJQgCEiECl3v34DE-rBEOa7s-T5t6tMO2gigzPSi8qOjzlPPwJR0Qjv_WtwYaCwoJBFxzjbKRAh-aKkYwRAIgCiTq5jBTaJJb6lUxN-0uNCj8lkV9AGY682D21kIAMiICIE1yxrjbDdiSCiARnS7I2zqJpXC2hOvjB4JoL9SAAk67
```
That SPR record can then be used by other peers for initial node discovery.
We should keep in mind some important things about the SPR record (see [ENR](https://eips.ethereum.org/EIPS/eip-778)):
- It uses the node IP (`--nat`), discovery port (`--disc-port`) and private key (`--net-privkey`) for record creation
- The record is re-signed on each run, so the encoded string changes, but when decoded it still contains the specified node data
- You can decode it by passing it to a Codex node at startup with `--log-level=trace`
For a bootstrap node, only the discovery port needs to be forwarded on your Internet router.
### Run as a daemon in Linux
This functionality is not supported yet :construction:
### Run as a service in Windows
This functionality is not supported yet :construction:
### Using Docker
We also ship Codex in Docker containers, which can be run on `amd64` and `arm64` platforms.
#### Docker entrypoint
The [Docker entrypoint](https://github.com/codex-storage/nim-codex/blob/master/docker/docker-entrypoint.sh) supports some additional options, which can be used for easier configuration:
- `ENV_PATH` - path to the file, in form `env=value` which will be sourced and available for Codex at run. That is useful for Kubernetes Pods configuration.
- `NAT_IP_AUTO` - when set to `true`, will set `CODEX_NAT` variable with container internal IP address. It also is useful for Kubernetes Pods configuration, when we perform automated tests.
- `NAT_PUBLIC_IP_AUTO` - used to set `CODEX_NAT` to public IP address using lookup services, like [ip.codex.storage](https://ip.codex.storage). Can be used for Docker/Kubernetes to set public IP in auto mode.
- `ETH_PRIVATE_KEY` - can be used to pass ethereum private key, which will be saved and passed as a value of the `CODEX_ETH_PRIVATE_KEY` variable. It should be considered as unsafe option and used for testing purposes only.
- When we set `prover` sub-command, entrypoint will run `cirdl` tool to download ceremony files, required by [Codex storage node](#codex-storage-node).
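As a minimal sketch of how an `ENV_PATH` file works (the file name and variables here are just examples, not required names), the entrypoint sources a plain `env=value` file and exports its contents before starting Codex:

```shell
# Hypothetical ENV_PATH file: plain `env=value` lines sourced by the entrypoint
cat > codex.env <<'EOF'
CODEX_API_PORT=8080
CODEX_LOG_LEVEL=info
EOF

# Roughly what sourcing the file achieves: the variables become
# environment variables visible to the Codex process
set -a
. ./codex.env
set +a
echo "${CODEX_API_PORT} ${CODEX_LOG_LEVEL}"
```

The file would then be mounted into the container and referenced with something like `-e ENV_PATH=/opt/codex.env -v $PWD/codex.env:/opt/codex.env`.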
#### Docker network
When we run Codex using Docker with the default [bridge network](https://docs.docker.com/engine/network/drivers/bridge/), it creates a double NAT:
- One on the Docker side
- Second on your Internet router
If your Internet router does not support [Full Cone NAT](https://learningnetwork.cisco.com/s/question/0D56e0000CWxJ9sCQF/lets-explain-in-details-full-cone-nat-restricted-cone-nat-and-symmetric-nat-terminologies-vs-cisco-nat-terminologies), you might run into issues: peer discovery and data transport will not work, or might work unexpectedly.
In that case, consider the following solutions:
- Use [host network](https://docs.docker.com/engine/network/drivers/host/) for Docker, which is supported only in Linux
- Run [Using binary](#using-binary)
- Use VM/VPS in the Cloud to run Docker with bridge or host network
#### Run using Docker
We can basically use the same options we [used for the binary](#using-binary); additionally, we need to mount volumes and map the ports.
[Codex storage node](#codex-storage-node)
1. Create an ethereum key file
<details>
<summary>example</summary>
> [!CAUTION]
> Please use the key generation service for demo purposes only.
```shell
response=$(curl -s https://key.codex.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key
```
Show your ethereum address:
```shell
cat eth.address
```
```
0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9
```
</details>
2. Run Codex:
```shell
docker run \
--rm \
-v $PWD/datadir:/datadir \
-v $PWD/eth.key:/opt/eth.key \
-p 8070:8070 \
-p 8080:8080 \
-p 8090:8090/udp \
codexstorage/nim-codex:latest \
codex \
--data-dir=/datadir \
--bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P \
--nat=`curl -s https://ip.codex.storage` \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--api-cors-origin="*" \
--api-bindaddr=0.0.0.0 \
--api-port=8080 \
persistence \
--eth-provider=https://rpc.testnet.codex.storage \
--eth-private-key=/opt/eth.key \
--marketplace-address=0xAB03b6a58C5262f530D54146DA2a552B1C0F7648 \
prover \
--circuit-dir=/datadir/circuits
```
> [!NOTE]
> You will need to pass bootstrap nodes, a blockchain RPC endpoint and a marketplace address based on the [network](/networks/networks) you would like to join.
### Using Docker Compose
For Docker Compose, it is more convenient to use [environment variables](#environment-variables) for Codex configuration, and we can reuse the commands from the Docker example above.
[Codex storage node](#codex-storage-node)
1. Create an ethereum key file
<details>
<summary>example</summary>
> [!CAUTION]
> Please use the key generation service for demo purposes only.
```shell
response=$(curl -s https://key.codex.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key
```
Show your ethereum address:
```shell
cat eth.address
```
```
0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9
```
</details>
2. Create `docker-compose.yaml` file:
```yaml
services:
codex:
image: codexstorage/nim-codex:latest
container_name: codex
command:
- codex
- persistence
- prover
- --bootstrap-node=spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P
- --bootstrap-node=spr:CiUIAhIhAyUvcPkKoGE7-gh84RmKIPHJPdsX5Ugm_IHVJgF-Mmu_EgIDARo8CicAJQgCEiEDJS9w-QqgYTv6CHzhGYog8ck92xflSCb8gdUmAX4ya78QoemesAYaCwoJBES39Q2RAnVOKkYwRAIgLi3rouyaZFS_Uilx8k99ySdQCP1tsmLR21tDb9p8LcgCIG30o5YnEooQ1n6tgm9fCT7s53k6XlxyeSkD_uIO9mb3
- --bootstrap-node=spr:CiUIAhIhA6_j28xa--PvvOUxH10wKEm9feXEKJIK3Z9JQ5xXgSD9EgIDARo8CicAJQgCEiEDr-PbzFr74--85TEfXTAoSb195cQokgrdn0lDnFeBIP0QzOGesAYaCwoJBK6Kf1-RAnVEKkcwRQIhAPUH5nQrqG4OW86JQWphdSdnPA98ErQ0hL9OZH9a4e5kAiBBZmUl9KnhSOiDgU3_hvjXrXZXoMxhGuZ92_rk30sNDA
- --bootstrap-node=spr:CiUIAhIhA7E4DEMer8nUOIUSaNPA4z6x0n9Xaknd28Cfw9S2-cCeEgIDARo8CicAJQgCEiEDsTgMQx6vydQ4hRJo08DjPrHSf1dqSd3bwJ_D1Lb5wJ4Qt_CesAYaCwoJBEDhWZORAnVYKkYwRAIgFNzhnftocLlVHJl1onuhbSUM7MysXPV6dawHAA0DZNsCIDRVu9gnPTH5UkcRXLtt7MLHCo4-DL-RCMyTcMxYBXL0
- --bootstrap-node=spr:CiUIAhIhAzZn3JmJab46BNjadVnLNQKbhnN3eYxwqpteKYY32SbOEgIDARo8CicAJQgCEiEDNmfcmYlpvjoE2Np1Wcs1ApuGc3d5jHCqm14phjfZJs4QrvWesAYaCwoJBKpA-TaRAnViKkcwRQIhANuMmZDD2c25xzTbKSirEpkZYoxbq-FU_lpI0K0e4mIVAiBfQX4yR47h1LCnHznXgDs6xx5DLO5q3lUcicqUeaqGeg
- --bootstrap-node=spr:CiUIAhIhAgybmRwboqDdUJjeZrzh43sn5mp8jt6ENIb08tLn4x01EgIDARo8CicAJQgCEiECDJuZHBuioN1QmN5mvOHjeyfmanyO3oQ0hvTy0ufjHTUQh4ifsAYaCwoJBI_0zSiRAnVsKkcwRQIhAJCb_z0E3RsnQrEePdJzMSQrmn_ooHv6mbw1DOh5IbVNAiBbBJrWR8eBV6ftzMd6ofa5khNA2h88OBhMqHCIzSjCeA
- --bootstrap-node=spr:CiUIAhIhAntGLadpfuBCD9XXfiN_43-V3L5VWgFCXxg4a8uhDdnYEgIDARo8CicAJQgCEiECe0Ytp2l-4EIP1dd-I3_jf5XcvlVaAUJfGDhry6EN2dgQsIufsAYaCwoJBNEmoCiRAnV2KkYwRAIgXO3bzd5VF8jLZG8r7dcLJ_FnQBYp1BcxrOvovEa40acCIDhQ14eJRoPwJ6GKgqOkXdaFAsoszl-HIRzYcXKeb7D9
environment:
- CODEX_DATA_DIR=/datadir
- NAT_PUBLIC_IP_AUTO=https://ip.codex.storage
- CODEX_DISC_PORT=8090
- CODEX_LISTEN_ADDRS=/ip4/0.0.0.0/tcp/8070
- CODEX_API_CORS_ORIGIN="*"
- CODEX_API_PORT=8080
- CODEX_API_BINDADDR=0.0.0.0
- CODEX_ETH_PROVIDER=https://rpc.testnet.codex.storage
- CODEX_ETH_PRIVATE_KEY=/opt/eth.key
- CODEX_MARKETPLACE_ADDRESS=0xAB03b6a58C5262f530D54146DA2a552B1C0F7648
- CODEX_CIRCUIT_DIR=/datadir/circuits
ports:
- 8080:8080/tcp # API
- 8090:8090/udp # Discovery
- 8070:8070/tcp # Transport
volumes:
- ./datadir:/datadir
- ./eth.key:/opt/eth.key
logging:
driver: json-file
options:
max-size: 100m
max-file: 5
```
3. Run Codex:
```shell
docker compose up
```
> [!NOTE]
> You will need to pass bootstrap nodes, a blockchain RPC endpoint and a marketplace address based on the [network](/networks/networks) you would like to join.
### On Kubernetes
Helm chart code is available in the [helm-charts](https://github.com/codex-storage/helm-charts) repository, but the chart has not been published yet.
## Known issues
[^multivalue-env-var]: Environment variables like `CODEX_BOOTSTRAP_NODE` and `CODEX_LISTEN_ADDRS` do not support multiple values. Please check [[Feature request] Support multiple SPR records via environment variable #525](https://github.com/codex-storage/nim-codex/issues/525) for more information.
[^sub-commands]: Sub-commands `persistence` and `persistence prover` can't be set via environment variables.
[^data-dir]: We should set the data directory explicitly when using GitHub releases - [[BUG] Change codex default datadir from compile-time to run-time #923](https://github.com/codex-storage/nim-codex/issues/923)
[^port-forwarding]: [NAT traversal #753](https://github.com/codex-storage/nim-codex/issues/753) is not implemented yet, so we need to set up port forwarding for the discovery and transport protocols.
[^eth-account]: Please ignore `--eth-account` CLI option - [Drop support for --eth-account #727](https://github.com/codex-storage/nim-codex/issues/727).
---
outline: [1, 3]
---
# Codex Tokenomics Litepaper - Testnet Version
**Codex: A Decentralized Storage Protocol for Durable Information**
# Legal Notices
*The information contained in this document is intended to be made available for informational purposes only and does not constitute a prospectus, nor an offer to buy, a solicitation or an invitation to buy, or a recommendation for any token or any security. Neither this document nor any of its content should be considered as creating any expectations or forming the basis of any contract, commitment or binding obligation. No information herein should be considered to contain or be relied upon as a promise, representation, warranty or guarantee, whether express or implied and whether as to the past, present or the future in relation to the projects and matters described herein. The information presented is a summary and does not purport to be accurate, reliable or complete. This document is under continuous legal review and may be amended or supplemented at any time without prior notice.  No responsibility will be borne for the accuracy, reliability or completeness of information contained herein. Because of the high degree of risk and uncertainty described above, undue reliance should not be placed by anyone on any matters described in this document. Any tokens referenced in this document have not been registered under any securities laws and may not be offered or sold in any jurisdiction where such offer or sale would be prohibited.*
*This document may contain forward-looking statements that are based only on current expectations, estimates, forecasts, assumptions and projections about the technology, industry and markets in general. The forward looking statements, projects, content and any other matters described in this document are subject to a high degree of risk and uncertainty. The roadmap, results, project descriptions, technical details, functionalities, and other features are subject to change based on, among other things, market conditions, technical developments, and regulatory environment. The actual development and results, including the order and the timeline, might differ materially from those anticipated in these forward-looking statements.*
*The information contained in this document does not constitute financial, legal, tax, investment, professional or other advice and should not be treated as such.*
# Overview
## Scope
This document describes the Codex Tokenomics with elements that reflect the Testnet deployment of the Codex Protocol.
## What Codex Does
Codex is a state-of-the-art decentralized storage platform that offers a novel solution that enhances data durability guarantees for storing vast amounts of data while eliminating any reliance on centralized institutions that could lead to a single point of failure.
While centralized storage systems such as Google Cloud tout eleven nines of durability, durable file storage in distributed systems that provide censorship resistance and privacy is a vital prerequisite for use cases such as preserving factual records of history in network states.
While no system can guarantee absolute protection against data loss, through its technical architecture, economic incentives, and algorithmic encoding, Codex is designed to provide highly decentralized data storage with high durability, resiliency to cloud failures, and resistance to censorship.
## How Codex Works
Codex operates as a network of storage nodes, referred to herein as **Storage Providers** (SP), that store user data for the duration of a contract entered into by SPs and storage users, referred to herein simply as **Clients**.
Storage contracts are initiated by a **Client** requesting to store a specified amount of data, for a specified amount of time, and at a specific price per the full contract. **Storage Providers** commit to slots to store redundant fragments of this data.
The fact that **SPs** must post collateral (stake) in order to fill a slot helps protect against Sybil attacks, promoting diversity in storage nodes fulfilling each contract. Additionally, this collateral acts as an economic incentive to ensure that **SPs** fulfill their obligations to periodically prove that they are still in possession of the data in question.
This is achieved by periodic challenges to **SPs** to provide cryptographic proofs that demonstrate the data they have contracted to store can be retrieved. Codex incorporates Zero Knowledge (ZK) and Data Availability Sampling (DAS) to achieve low-cost, highly efficient, and reliable data loss detection.
**SPs** are required to respond to these challenges, sending their proofs to **Validators**, who verify the validity of the proofs and post to the blockchain only the absence of a proof. This reduces the cost of validating proofs without affecting the **Protocol**'s security.
Should SPs fail a fixed number of times to prove that they still have the data in question, or send an invalid proof, their collateral is partially slashed. The slash penalty is a fixed percentage of the total collateral. This slashing continues until a certain number of slashings is reached, at which point the entire collateral is slashed. At this moment, the SP slot is considered “abandoned”. The slashed collateral is used as an incentive for a new **SP** to take over the failed slot through the “slot recovery mechanism” (discussed further later). This ensures the collateral provides an economic incentive to ensure the durability of the data.
Codex is thus designed such that rational behavior for **SPs** consists of storing the data in the most space-efficient manner to minimize excess storage costs, while balancing the need for enough redundancy to recover from the possibility of data loss/corruption by the penalty of forfeiture of their collateral (slashing).
While Codex's tech maximizes recoverability and durability in the event of partial data loss, Codex's economic incentives coordinate rational actors to provide a stable and predictable environment for data storage users. At the heart of these economic incentives is the Codex utility token (CDX), which serves as the collateral that protects file durability and facilitates slot repair, and as the means of payment that coordinates successful storage contracts.
# Contract Lifecycle
The marketplace coordinates matching **Clients** who want to pay for storing files with **Storage Providers** who are offering storage space and posting collateral in order to earn payments for the contract.
## Contract Request Initiation
As a design principle, **Clients** should post the deal terms they are looking for, and **Storage Providers** prioritize which deals meet their criteria and represent the best deals to take.
When the contract request is created, the **Client** deposits the full price of the length of the contract at that time. This deposit acts as a spam prevention mechanism and ensures that **SP** time and resources are not wasted filling slots for deals that a **Client** does not complete payment for.
## Storage Providers Fill Requests
Ahead of matching with storage contracts, **Storage Providers** specify their aggregate availabilities for new contracts.
Based on each **SPs** availabilities, a queue is created for each **SP**, ranking the open **Client** request for contract deals with the most favorable deals at the top. Over time, this queue resolves by pairing **SPs** with contracts that are compatible with their availabilities, starting with the highest ranked deals first.
At launch, **SPs** will not be able to customize the queue creation algorithm, which means **SPs** with the same availabilities will have identical queues (other than differences due to a randomness function that increases **SP** diversity per each contract). In the future, **SPs** are expected to be able to customize their queue ranking algorithm.
If an **SP** matches with a storage contract and is eligible to reserve a slot in the contract, they reserve an open slot, download the slot data from the **Client** or from existing **SPs** whose data can be used to reconstruct the slot's contents, create an initial storage proof, and submit this proof, along with collateral, to the **Protocol**.
Note that a slot is not considered confirmed as filled until after an **SP** both posts associated collateral and produces a proof for the slot.
## Contract Expires Before Beginning
If there are still empty slots when the timeout/expiry for the contract request expires, the deal is terminated.
The **Storage Providers** who did fill slots, if any, are compensated for the amount of time they stored the slot data, at the contract request's specified price per TB per month. The remainder of the **Client**'s deposit is returned.
As there is a high probability of having at least a few slots occupied, there should be no need for further penalties on the **Client** to prevent spam requests and incentivize **Clients** to submit attractive deals.
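As a rough illustration of this pro-rata compensation, here is a sketch with hypothetical numbers (the price, duration and 30-day month are all assumptions, not protocol parameters):

```shell
# Hypothetical pro-rata payout when a request expires before starting:
# an SP stored its slot for 10 days at 120 CDX per slot-month (one-month deal)
price_per_month=120
days_stored=10
payout=$(( price_per_month * days_stored / 30 ))  # CDX paid to the SP
refund=$(( price_per_month - payout ))            # CDX returned to the Client
echo "${payout} ${refund}"
```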
## Contract Begins
The contract begins if *all* slots are occupied, that is, **SPs** have downloaded the data, posted collateral, and posted proofs before the timeout/expiry for the contract request is reached.
At this moment, a *protocol fee* is applied to the user's deposit. The proceeds are burned.
The remainder of the client's deposit is held by the protocol and will be used to pay **SPs** at the end of the contract.
## Contract is Running
**Storage Providers** must submit proofs to **Validators** according to the storage request's proof frequency, a parameter set by the Client in the request.
### Missing Proofs
If an **SP** fails to submit proofs within the last rolling periods, they are partially slashed. The penalty is a fixed percentage of the collateral. Upon providing a proof after being partially slashed, the SP should top up the missing collateral.
Should the SP be slashed enough times, their entire collateral will be slashed and confiscated, and the SP is considered to have abandoned its slot. The provision of a correct proof at this moment will not revert the start of the slot recovery mechanism.
## A Slot in the Contract is Abandoned
When an **SP** fails to submit enough proofs and is slashed enough times, their slot is considered abandoned. In order to incentivize a new **SP** to come in and take over the abandoned slot (slot recovery), 50% of the collateral confiscated from the **SP** that abandoned the slot is used as an incentive for the new **SP**. The remaining confiscated collateral is burned.
This helps align the economic incentive for **SPs** to take over abandoned slots before filling new deals, since they can effectively earn forfeited collateral for taking over and fulfilling abandoned slots.
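To make the mechanics above concrete, here is a worked sketch with hypothetical numbers (the collateral amount, slash percentage and threshold are assumptions; the actual values are protocol parameters not stated here):

```shell
# Hypothetical parameters: 100 CDX collateral, 10% of total collateral per slashing,
# full confiscation once the slashing threshold is reached; 50% of the confiscated
# collateral incentivizes the recovering SP, the rest is burned
collateral=100
slash_pct=10
per_slash=$(( collateral * slash_pct / 100 ))  # CDX lost per missed-proof slashing
after_three=$(( collateral - 3 * per_slash )) # CDX remaining after 3 slashings
incentive=$(( collateral / 2 ))               # CDX to the SP that recovers the slot
burned=$(( collateral - incentive ))          # CDX burned on abandonment
echo "${per_slash} ${after_three} ${incentive} ${burned}"
```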
## Contract Defaults
If, at any time during the life of the storage contract, the number of slots currently in an abandoned state (not yet recovered), meets or exceeds the maximum number of storage slots that can be lost before the data is unrecoverable, then the entire storage deal is considered to be in a *failed* state.
Each **Storage Provider's** posted collateral is burned. This incentivizes **SPs** not to let storage deals be at risk of defaulting. **SPs** are incentivized to *proactively* avoid this by diversifying their infrastructure and the storage contracts they enter, and *reactively* by backing up their own slot data, or even backing up data from other slots so they can better assist slot recovery as the deal approaches a failed state.
Clients also receive back any leftover from their original payment.
## Contract Ends Its Full Duration
When a started contract reaches its pre-specified duration without having previously defaulted, the contract completes.
All collateral is returned to the **SPs** that currently fill the slots (note that, due to slot recovery, these are not necessarily the same **SPs** that filled the slots at contract inception), and all remaining payment is returned to the client.
Deals cannot be automatically rolled forward or extended. If a **Client** desires to continue a deal, they must create a new storage contract request. Otherwise, Clients can retrieve their data.
# CDX Testnet Tokenomics
The CDX token does not exist in the Testnet Phase. The description below refers to the mechanics of a Testnet token and not CDX itself and will be referred to as *CDX (Testnet token)* for this purpose.
For the avoidance of doubt, the *CDX (Testnet token)* are virtual items with no value of any kind and they are not convertible to any other currency, token, or any other form of property. They are solely intended to be utilised for the purposes of enabling the tokenomics and facilitating the different roles in this Testnet Phase.
## Roles
The Codex protocol has three primary roles fulfilled by network participants.
- **Clients**: pay Storage Providers in *CDX (Testnet token)* to securely store their data on the Codex network for an agreed upon amount of time.
- **Storage Providers**: post *CDX (Testnet token)* collateral to enter into storage contracts with Clients in exchange for a *CDX (Testnet token)* denominated payment.
- **Validators**: post *CDX (Testnet token)* collateral to validate storage proofs in exchange for a *CDX (Testnet token)* denominated payment.
## Token Utility
The *CDX (Testnet token)* is used as both a form of posted collateral and a means of payment in order to secure the network and access its services.
Collateral is primarily used as a spam and sybil-attack prevention mechanism, liability insurance (e.g. compensating Clients in case of catastrophic loss of data), and to enforce rational behavior.
Payments are made by Clients to Providers for services rendered, such as for storing data for a certain amount of time or retrieving data. This is implemented through the Marketplace contract, which serves as an escrow. Data in a storage contract is distributed into slots where each is, ideally, hosted by a different Storage Provider.
### **For Clients**
- Pay storage costs and fees in *CDX (Testnet token)* for storing files.
### **For Storage Providers**
- Post collateral in *CDX (Testnet token)* when committing to new storage contracts. This collateral is slashed if they do not fulfill their agreed upon services.
- Earn *CDX (Testnet token)* from the collateral of slashed Storage Providers by participating in the slot recovery mechanism.
- Earn *CDX (Testnet token)* from Clients when successfully completing the storage service.
### For Validators
- Post collateral in *CDX (Testnet token)* to operate the validation service. This collateral is slashed if they do not mark a proof as missing within a predetermined period.
- Earn *CDX (Testnet token)* from the collateral of slashed Storage Providers by marking proofs as missed.
The figure below depicts the flow of the *CDX (Testnet token)* within the system.
![Flow of the CDX token within the system](/learn/tokenomics-token-flow.png)
## Value Capture and Accrual Mechanisms
Codex creates *value* for participants:
- Clients benefit from storing data with strong durability guarantees;
- Storage Providers earn yield from their spare resources or capital by providing a service;
- Validators earn payouts for marking proofs as missing.
Clients need *CDX (Testnet token)* tokens to request storage deals. *CDX (Testnet token)* captures the value created for Clients by being a *Value Transfer Token* for them.
Storage Providers and Validators are rewarded in the *CDX (Testnet token)* and also need it as proof of commitment to the Protocol. They risk being slashed in exchange for rewards. *CDX (Testnet token)* captures the value created for Providers by being a *Work Token* for them.
The following mechanisms describe how the value accrues to the *CDX (Testnet token)* token.
### Protocol Fee over Contracts
If the contract is canceled before it starts, the Client's deposited amount is charged a small penalty and returned, helping to prevent low-quality spam deal requests.
If the contract successfully initiates, the protocol collects a fee for facilitating the transaction. The remaining amount is made available for payments to Storage Providers.
The collected fees are burned in both cases. This creates a small but constant deflationary force on the token supply, which is proportional to the product demand.
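For example, with hypothetical numbers (the actual fee rate is a protocol parameter not stated here):

```shell
# Hypothetical: 1000 CDX deposit, 1% protocol fee burned when the contract starts
deposit=1000
fee_pct=1
burned=$(( deposit * fee_pct / 100 ))  # CDX removed from supply
escrow=$(( deposit - burned ))         # CDX held to pay Storage Providers
echo "${burned} ${escrow}"
```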
## Behavior & Motivations
### Clients
Clients have the following rational behavior:
- Requesting storage from the network with a fee at fair market rates
- Providing data to storage nodes that meet their criteria
They may also exhibit the following adversarial behavior, whether for profit driven, malicious, or censorship motivations:
- Requesting storage from the network but never making the data available for any or all slots
- Requesting storage from the network but not releasing the data within the required time period to begin the contract successfully
- Requesting storage from the network but not releasing the data to specific Providers
- Attacking SPs that host their data in an attempt to escape their payment obligations at the end of the contract.
### **Storage Providers**
Storage Providers have the following rational behavior:
- Committing to slots of storage contracts to earn a fee.
- Providing proofs of storage for their committed slots to avoid collateral slashing penalties.
- Releasing the data to anyone who requests it.
- Committing to failed slots of storage contracts to maintain the integrity of the data
They may also exhibit the following adversarial behavior, whether for profit driven, malicious, or censorship motivations:
- Reserving a contract slot but never filling it (attempt to prevent contract from starting)
- Ceasing to provide proofs midway through the lifespan of a contract
- Producing proofs, but not making data available for other nodes to retrieve
### Validators
Validators have the following rational behavior:
- Marking a proof as missing to earn a fee
- Tracking the history of missed proofs of an **SP**
- Triggering the Slot Recovery Mechanism when an **SP** reaches the maximum allowed number of missed proofs
They may also exhibit the following adversarial behavior, whether for profit driven, malicious, or censorship motivations:
- Colluding with SPs to ignore missed proofs
- Observing a missed proof but not posting it on-chain
## Incentive Mechanisms
The following mechanisms help incentivize the expected behavior of each role and mitigate the detrimental ones.
### Clients Provide Full Payment Upfront
Clients must deposit the full amount in *CDX (Testnet token)* that covers the entirety of the storage contract duration upfront. This indicates their pledge to pay a certain amount for the storage contract, though the contract only begins when and if all data slots are filled by Storage Providers.
### Delayed Payment to Storage Providers
Storage Providers only receive payment related to the provision of services at the end of the contract duration.
### Collateral Requirement
In order to fill a data slot, Storage Providers first stake and commit the required collateral in the form of the *CDX (Testnet token)* for that slot which is then subject to slashing if they do not post a proof to confirm the slot.
Validators also need to post collateral to participate in the validation service.
### Proof of Storage
Contracts only start when all data slots are filled. Slots are only considered filled after a Storage Provider has posted collateral and the associated proof for its slot.
Once the contract begins, Storage Providers regularly provide proof of storage.
### **Slashing for Missed Proofs of Storage**
At any point during the duration of the storage contract, the Storage Provider is slashed if it fails to provide a certain number of proofs of storage in a row. Should the SP resume providing proofs of storage, it needs to top up the slashed collateral. The penalty is a fixed percentage of the total collateral.
### Slot Recovery Mechanism
If a Storage Provider does not submit the required storage proofs when required, after a number of slashings their entire collateral will be seized. A portion of the confiscated collateral is used as an incentive for the new Storage Provider who recovers and starts serving the abandoned slot. The remainder of the confiscated collateral in *CDX (Testnet token)* is burned.
### Slashing Defaulted Contract
If, at any point during the duration of the storage contract, the number of data slots currently abandoned (and not yet recovered) reaches or surpasses the maximum allowable lost slots (meaning the data becomes irretrievable), then the entire storage contract is deemed to be *failed*.
At this stage, collaterals of all Storage Providers serving data slots in the contract are entirely slashed.
### Client Reimbursement
If at any point during the contract, sufficient slots are abandoned such that the data is not fully recoverable, Clients receive back any leftover from their original payment.
## Token Lifecycle
### Burning
*CDX (Testnet token)* tokens are burned in these instances:
- When a storage deal contract fails to initiate, a small portion of the Client's payment for the storage deal is burned. This serves primarily as a mechanism to deter spam and ensure that deal requests are submitted at market-appropriate prices for storage.
- When a storage deal contract successfully initiates, the protocol applies a fee for facilitating the transaction.
- Whenever a Storage Provider misses a certain number of storage proofs, a portion of the collateral is slashed and burned.
- Once the slot recovery mechanism resolves, the remainder of the abandoning Storage Provider's collateral is burned.
---
outline: [2, 3]
---
# Troubleshooting
Having trouble connecting your Codex node to the testnet? Here we have collected common Codex connectivity issues and the steps to diagnose and resolve them. If your problem isn't solved here, please check the open issues on GitHub or reach out on the Discord server.
## The basics
You've probably already considered these. But just in case:
1. Are you using a VPN? Make sure it's configured correctly to forward the right ports, and make sure you announce your node by the public IP address where you can be reached.
1. Are you using a firewall or other security software? Make sure it's configured to allow incoming connections to Codex's discovery and peer-to-peer ports.
## Check your announce address
Your node announces your public address to the network, so other nodes can connect to you. A common issue is connection failure due to incorrect announce addresses. Follow these steps to check your announce address.
1. Go to a what's-my-ip site, or `ip.codex.storage`, and note the IP address.
1. Go into your router/modem WAN settings and find the public IP address.
1. These two addresses should match.
1. If they do not match, it is possible that A) you are behind a VPN, in which case it is up to you to disable the VPN or make sure all forwarding is configured correctly; or B) your internet service provider has placed your uplink behind a secondary NAT. ISPs do this to save public IP addresses, so the address assigned to your router/modem is not a 'true' public internet address. Usually this issue can be solved by your ISP: contact customer support and ask them to give you a public address (sometimes also called a Dynamic IP address).
1. Call Codex's debug/info endpoint. See the [Using Codex](/learn/using) for the details.
1. In the JSON response, you'll find "announceAddresses".
1. The IP address listed there should match your public IP.
1. If the announce address in the JSON is incorrect, you can adjust it manually by changing Codex's CLI argument `--nat` or setting the environment variable `CODEX_NAT`. After you've changed your announce address and restarted your node, please allow some time (20-30mins) for the network to disseminate the updated address.
If you've performed these steps and haven't found any issues, your announce address is probably not the problem.
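As a sketch of steps 5-7, the announce address can be pulled out of the `debug/info` response with `jq` (the JSON below is an abridged, hypothetical response; against a live node you would pipe in `curl -s http://localhost:8080/api/codex/v1/debug/info` instead):

```shell
# Abridged, hypothetical debug/info response
response='{"announceAddresses":["/ip4/203.0.113.7/udp/8090"]}'

# Extract the announced address; compare its IP with your public IP
announced=$(printf '%s' "$response" | jq -r '.announceAddresses[0]')
echo "${announced}"
```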
ko/learn/using.md (new file)
---
outline: [2, 3]
---
# Using Codex
You can interact with Codex using the [REST API](/developers/api). This document walks through a few useful examples.
You can also check out the [Codex App UI](https://app.codex.storage).
The command-line interpreters on [Linux/macOS](#linux-macos) and [Windows](#windows) behave slightly differently, so please follow the steps for your OS.
## Linux/macOS
### Overview
1. [Debug](#debug)
2. [Upload a file](#upload-a-file)
3. [Download a file](#download-a-file)
4. [Local data](#local-data)
5. [Create storage availability](#create-storage-availability)
6. [Purchase storage](#purchase-storage)
7. [View purchase status](#view-purchase-status)
### Debug
An easy way to check that your node is up and running is:
```shell
curl http://localhost:8080/api/codex/v1/debug/info \
-w '\n'
```
This will return a JSON structure with plenty of information about your local node. It contains peer information that may be useful when troubleshooting connection issues.
### Upload a file
> [!Warning]
> Once you upload a file to Codex, other nodes in the network can download it. Please do not upload anything you don't want others to access, or properly encrypt your data *first*.
```shell
curl -X POST \
http://localhost:8080/api/codex/v1/data \
-H 'Content-Type: application/octet-stream' \
-w '\n' \
-T <FILE>
```
On successful upload, you'll receive a CID. This can be used to download the file from any node in the network.
> [!TIP]
> Are you on the [Codex Discord server](https://discord.gg/codex-storage)? Post your CID in the [# :wireless: | share-cids](https://discord.com/channels/895609329053474826/1278383098102284369) channel and see if others are able to download it. Codex does not (yet?) provide file metadata, so if you want others to be able to open your file, tell them which extension to give it.
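As the warning above suggests, sensitive data should be encrypted before upload. A minimal sketch using symmetric `openssl` encryption (assuming `openssl` is installed; the file name and passphrase below are placeholders):

```shell
# Create a demo file (replace with your real data)
printf 'example data' > secret.txt

# Encrypt with AES-256 before uploading; upload secret.txt.enc, not secret.txt
openssl enc -aes-256-cbc -pbkdf2 \
-in secret.txt -out secret.txt.enc \
-pass pass:my-strong-passphrase

# After downloading the ciphertext later, decrypt it the same way
openssl enc -d -aes-256-cbc -pbkdf2 \
-in secret.txt.enc -out secret.decrypted.txt \
-pass pass:my-strong-passphrase
```

Anyone holding the CID can fetch the ciphertext, but only holders of the passphrase can read the contents.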
### Download a file
When you have a CID of data you want to download, you can use the following commands:
```shell
# paste your CID from the previous step here between the quotes
CID="..."
```
```shell
curl "http://localhost:8080/api/codex/v1/data/${CID}/network/stream" \
-o "${CID}.png"
```
Please use the correct extension for the downloaded file, because Codex does not yet store content-type or extension information.
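Because no content type is stored, one way to pick a sensible extension is to inspect the downloaded bytes with the `file` utility (a sketch; `download.bin` stands in for data fetched from your node):

```shell
# Stand-in for data downloaded from Codex (normally: curl ... -o download.bin)
printf 'plain text payload' > download.bin

# Inspect the bytes; a PNG would report "PNG image data", plain text "ASCII text"
file download.bin
```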
### Local data
You can view which datasets are currently being stored by your node:
```shell
curl http://localhost:8080/api/codex/v1/data \
-w '\n'
```
### Create storage availability
> [!WARNING]
> This step requires that Codex was started with the [`prover`](/learn/run#codex-storage-node) option.
In order to start selling storage space to the network, you must configure your node with the following command. Once configured, the node will monitor on-chain requests-for-storage and will automatically enter into contracts that meet these specifications. In order to enter and maintain storage contracts, your node is required to submit zero-knowledge storage proofs. The calculation of these proofs will increase the CPU and RAM usage of Codex.
```shell
curl -X POST \
http://localhost:8080/api/codex/v1/sales/availability \
-H 'Content-Type: application/json' \
-w '\n' \
-d '{
"totalSize": "8000000",
"duration": "7200",
"minPrice": "10",
"maxCollateral": "10"
}'
```
For descriptions of each parameter, please view the [spec](https://api.codex.storage/#tag/Marketplace/operation/offerStorage).
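Note that `totalSize` is specified in bytes (the example above offers roughly 8 MB). If you think in gigabytes, a quick conversion helps (a sketch):

```shell
# Convert gigabytes to the byte value expected by `totalSize`
GB=8
TOTAL_SIZE=$((GB * 1024 * 1024 * 1024))
echo "$TOTAL_SIZE"
```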
### Purchase storage
To purchase storage space from the network, first you must upload your data. Once you have the CID, use the following to create a request-for-storage.
Set your CID:
```shell
# paste your CID from the previous step here between the quotes
CID="..."
echo "CID: ${CID}"
```
Next you can run:
```shell
curl -X POST \
"http://localhost:8080/api/codex/v1/storage/request/${CID}" \
-H 'Content-Type: application/json' \
-w '\n' \
-d '{
"duration": "3600",
"reward": "1",
"proofProbability": "5",
"expiry": "1200",
"nodes": 5,
"tolerance": 2,
"collateral": "1"
}'
```
For descriptions of each parameter, please view the [spec](https://api.codex.storage/#tag/Marketplace/operation/createStorageRequest).
When successful, this request will return a Purchase-ID.
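A few of these parameters constrain each other: `expiry` must be shorter than `duration`, and `tolerance` (the number of hosts allowed to fail) must be smaller than `nodes` (the number of hosts the data is spread over). A small pre-flight sanity check mirroring the example values above (a sketch, not an exhaustive validation):

```shell
DURATION=3600   # how long the data should be stored, in seconds
EXPIRY=1200     # how long hosts have to pick up the contract, in seconds
NODES=5         # number of hosts to spread the data over
TOLERANCE=2     # number of those hosts that may fail

[ "$EXPIRY" -lt "$DURATION" ] || echo "invalid: expiry must be shorter than duration"
[ "$TOLERANCE" -lt "$NODES" ] || echo "invalid: tolerance must be smaller than nodes"
```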
### View purchase status
Using a Purchase-ID, you can check the status of your request-for-storage contract:
```shell
# paste your PURCHASE_ID from the previous step here between the quotes
PURCHASE_ID="..."
```
Then:
```shell
curl "http://localhost:8080/api/codex/v1/storage/purchases/${PURCHASE_ID}" \
-w '\n'
```
This will display state and error information for your purchase.
| State | Description |
|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Pending | Request is waiting for chain confirmation. |
| Submitted | Request is on-chain. Hosts may now attempt to download the data. |
| Started | Hosts have downloaded the data and provided proof-of-storage. |
| Failed | The request was started, but (too many) hosts failed to provide proof-of-storage on time. While the data may still be available in the network, for the purpose of the purchase it is considered lost. |
| Finished | The request was started successfully and the duration has elapsed. |
| Expired   | Not enough hosts submitted proof-of-storage before the request's expiry elapsed.                                                                                                                       |
| Errored | An unfortunate state of affairs. The 'error' field should tell you more. |
## Windows
### Overview {#overview-windows}
1. [Debug](#debug-windows)
2. [Upload a file](#upload-a-file-windows)
3. [Download a file](#download-a-file-windows)
4. [Local data](#local-data-windows)
5. [Create storage availability](#create-storage-availability-windows)
6. [Purchase storage](#purchase-storage-windows)
7. [View purchase status](#view-purchase-status-windows)
### Debug {#debug-windows}
An easy way to check that your node is up and running is:
```batch
curl http://localhost:8080/api/codex/v1/debug/info
```
This will return a JSON structure with plenty of information about your local node. It contains peer information that may be useful when troubleshooting connection issues.
### Upload a file {#upload-a-file-windows}
> [!Warning]
> Once you upload a file to Codex, other nodes in the network can download it. Please do not upload anything you don't want others to access, or properly encrypt your data *first*.
```batch
curl -X POST ^
http://localhost:8080/api/codex/v1/data ^
-H "Content-Type: application/octet-stream" ^
-T <FILE>
```
On successful upload, you'll receive a CID. This can be used to download the file from any node in the network.
> [!TIP]
> Are you on the [Codex Discord server](https://discord.gg/codex-storage)? Post your CID in the [# :wireless: | share-cids](https://discord.com/channels/895609329053474826/1278383098102284369) channel and see if others are able to download it. Codex does not (yet?) provide file metadata, so if you want others to be able to open your file, tell them which extension to give it.
### Download a file {#download-a-file-windows}
When you have a CID of data you want to download, you can use the following commands:
```batch
:: paste your CID from the previous step here between the quotes
set CID="..."
```
```batch
curl "http://localhost:8080/api/codex/v1/data/%CID%/network/stream" ^
-o "%CID%.png"
```
Please use the correct extension for the downloaded file, because Codex does not yet store content-type or extension information.
### Local data {#local-data-windows}
You can view which datasets are currently being stored by your node:
```batch
curl http://localhost:8080/api/codex/v1/data
```
### Create storage availability {#create-storage-availability-windows}
> [!WARNING]
> This step requires that Codex was started with the [`prover`](/learn/run#codex-storage-node) option.
In order to start selling storage space to the network, you must configure your node with the following command. Once configured, the node will monitor on-chain requests-for-storage and will automatically enter into contracts that meet these specifications. In order to enter and maintain storage contracts, your node is required to submit zero-knowledge storage proofs. The calculation of these proofs will increase the CPU and RAM usage of Codex.
```batch
curl -X POST ^
http://localhost:8080/api/codex/v1/sales/availability ^
-H "Content-Type: application/json" ^
-d "{""totalSize"": ""8000000"", ""duration"": ""7200"", ""minPrice"": ""10"", ""maxCollateral"": ""10""}"
```
For descriptions of each parameter, please view the [spec](https://api.codex.storage/#tag/Marketplace/operation/offerStorage).
### Purchase storage {#purchase-storage-windows}
To purchase storage space from the network, first you must upload your data. Once you have the CID, use the following to create a request-for-storage.
Set your CID:
```batch
:: paste your CID from the previous step here between the quotes
set CID="..."
echo CID: %CID%
```
Next you can run:
```batch
curl -X POST ^
"http://localhost:8080/api/codex/v1/storage/request/%CID%" ^
-H "Content-Type: application/json" ^
-d "{""duration"": ""3600"",""reward"": ""1"", ""proofProbability"": ""5"", ""expiry"": ""1200"", ""nodes"": 5, ""tolerance"": 2, ""collateral"": ""1""}"
```
For descriptions of each parameter, please view the [spec](https://api.codex.storage/#tag/Marketplace/operation/createStorageRequest).
When successful, this request will return a Purchase-ID.
### View purchase status {#view-purchase-status-windows}
Using a Purchase-ID, you can check the status of your request-for-storage contract:
```batch
:: paste your PURCHASE_ID from the previous step here between the quotes
set PURCHASE_ID="..."
```
Then:
```batch
curl "http://localhost:8080/api/codex/v1/storage/purchases/%PURCHASE_ID%"
```
This will display state and error information for your purchase.
| State | Description |
|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Pending | Request is waiting for chain confirmation. |
| Submitted | Request is on-chain. Hosts may now attempt to download the data. |
| Started | Hosts have downloaded the data and provided proof-of-storage. |
| Failed | The request was started, but (too many) hosts failed to provide proof-of-storage on time. While the data may still be available in the network, for the purpose of the purchase it is considered lost. |
| Finished | The request was started successfully and the duration has elapsed. |
| Expired   | Not enough hosts submitted proof-of-storage before the request's expiry elapsed.                                                                                                                       |
| Errored | An unfortunate state of affairs. The 'error' field should tell you more. |
## Known issues
1. We add a newline to the API calls (curl's `-w '\n'` option) to get more readable output; please check [[rest] Add line ending on responses #771](https://github.com/codex-storage/nim-codex/issues/771) for more details.
ko/learn/what-is-codex.md (new file)
# What is Codex?
Codex is a decentralized data storage protocol. Its key features are strong censorship resistance and durability guarantees. The reference implementation of the same name is written in the Nim language. Codex serves as the storage layer of the [Logos](https://logos.co/) technology stack, and is one of the foundational projects of the Logos Collective, alongside the trustless consensus layer [Nomos](http://nomos.tech) and the communication layer [Waku](http://waku.org).
## Motivation
The remote storage landscape is increasingly dominated by a handful of internet giants: Google, Microsoft, Amazon, and others. While these services score highly on user experience and convenience, centralized cloud data storage comes with the following drawbacks:
- Censorship
- Lack of data ownership
- Security breaches and outages
- High costs
Centralized cloud storage providers have an established history of censoring data, and as the de facto owners of that data they are empowered to do so at their own discretion. Centralized platforms have also, on numerous occasions, fallen victim to major data breaches and service outages.
These incidents have created a market niche for decentralized, censorship-resistant alternatives. Existing P2P storage and file-sharing networks solve some of these problems, offering robustness against network outages and a desirable resistance to censorship. However, without proper incentives and strong data durability guarantees, they are not a suitable foundation for building truly unstoppable applications.
Existing decentralized storage solutions claim to improve upon early P2P file-sharing platforms such as eDonkey and Gnutella. Yet the market still lacks a decentralized storage solution that is efficient in its use of storage and bandwidth while offering performance and durability guarantees comparable to those of the incumbents.
## Decentralizing data storage
Codex was started in 2021 to address the need for a durable, decentralized storage layer for the web3 technology stack.
The name "Codex" refers to the ancient book format, alluding to the very strong (99.99%) durability guarantees of its data storage engine.
Codex was announced as a core Logos Collective protocol in June 2023.
### Testnet
Codex is currently in its testnet phase. The client implementation is free and open-source software. If you are interested, we encourage you to try Codex, connect to the testnet, and experiment with its features. [Get started here](./quick-start.md)
ko/learn/whitepaper.md (new file)
<center>
**Abstract**
</center>
<div style="display: flex; justify-content: center; align-items: center;">
<div style="text-align: justify; width: 80%">
Codex is a decentralized data storage protocol for web3 applications. It provides data durability and censorship resistance while maintaining storage and bandwidth efficiency. The protocol uses erasure coding to split data into multiple pieces, which it distributes across many nodes in the network. Zero-knowledge proofs are used to prove that the data remains retrievable.
[Roughly 500 lines of technical content omitted...]
</div>
</div>
## References
[^bitswap_spec]: IPFS Standards. "Bitswap Protocol," https://specs.ipfs.tech/bitswap-protocol/ (accessed September 27, 2024)
[^schroeder_07]: B. Schroeder and G. A. Gibson, "Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?," in Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST '07), San Jose, CA, USA, 2007
[^ipfs_website]: "IPFS: An open system to manage data without a central server," IPFS, 2024. [Online]. Available: https://ipfs.tech/. [Accessed: September 28, 2024].
ko/networks/networks.md (new file)
# Codex networks
Codex is launching several networks that serve different purposes.
| Network            | Status                 | Blockchain                                                              | Purpose                                                                |
| ------------------ | ---------------------- | ----------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| Devnet             | :building_construction:| [Geth PoA](https://geth.ethereum.org/docs/fundamentals/private-network) | For development purposes only; follows the latest `master` builds       |
| [Testnet](testnet) | :white_check_mark:     | [Geth PoA](https://geth.ethereum.org/docs/fundamentals/private-network) | Public network for testing purposes; follows the latest releases        |
| Mainnet            | :construction:         | :construction:                                                          | The main public network                                                 |
The main differences between the networks are:
- Network purpose
- Bootstrap nodes
- Number of storage nodes
- Available storage capacity
- Blockchain network
- Marketplace contract version
- Set of circuit files used for proof verification
The easiest way to get started with Codex is to [join the Testnet](testnet).
ko/networks/testnet.md (new file)
---
outline: [2, 4]
---
# Codex Testnet
The Codex Testnet has launched and is ready for testing.
Your participation in the Codex Testnet is subject to the [Codex Testnet Terms and Conditions](https://github.com/codex-storage/codex-testnet-starter/blob/master/Codex%20Testnet%20Terms%20and%20Conditions.pdf) and [Codex Testnet Privacy Policy](https://github.com/codex-storage/codex-testnet-starter/blob/master/Codex%20Testnet%20Privacy%20Policy.pdf).
**Guides.** We have basic guides covering how to set up a Storage Client which can be used to upload and persist files by buying storage in the Codex network. We recommend that you start with those.
Running a Storage Provider is more involved and is covered as a separate guide which demonstrates the storage sales side, as well as how to run Codex with its own local Ethereum execution client.
Guides are available either on Discord, as interactive step-by-step guides, or here, as simple instructions that you can follow:
- **Basic: running a storage client.** [[Discord](#sc-guide-discord) | [web](#sc-guide-web)]
- **Advanced: Running a storage provider.** [[web](#sp-guide-web)]
The guides were tested on the following operating systems:
- Linux: Ubuntu 24.04, Debian 12, Fedora 40
- macOS: 15
- Windows: 11, Server 2022
## Testnet information
| Item | Value |
| --- | --- |
| Chain ID | 2430 |
| Currency symbol | ETH |
| Block time | 5 seconds |
| Consensus | Clique PoA |
| Network ID | 2430 |
| Network name | Codex Testnet |
| RPC URL | https://rpc.testnet.codex.storage |
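If you want to point a wallet or other tooling at the testnet chain, the values above map onto a typical custom-network configuration like this (a sketch; field names vary between wallets):

```json
{
  "networkName": "Codex Testnet",
  "rpcUrl": "https://rpc.testnet.codex.storage",
  "chainId": 2430,
  "currencySymbol": "ETH",
  "blockExplorerUrl": "https://explorer.testnet.codex.storage"
}
```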
## Running a Storage Client (Discord Version) {#sc-guide-discord}
You can join the [Codex Discord server](https://discord.gg/codex-storage) and jump into the [#:tv:|join-testnet](https://discord.com/channels/895609329053474826/1289923125928001702) channel.
It is mostly the same as the [web guide](#sc-guide-web), but uses Discord's capabilities to give you an interactive, step-by-step experience, and you can also get support in the [#:sos:|node-help](https://discord.com/channels/895609329053474826/1286205545837105224) channel.
## Running a Storage Client (Web Version) {#sc-guide-web}
**Prerequisites**
- Access to your Internet router so you can [configure port forwarding](#basic-common)
Steps for [Linux/macOS](#basic-linux-macos) and [Windows](#basic-windows) are slightly different, so please follow the ones for your OS.
<hr>
### Linux/macOS {#basic-linux-macos}
1. Download the master tarball from the Codex testnet starter repository, and untar its contents:
```shell
curl -LO https://github.com/codex-storage/codex-testnet-starter/archive/master.tar.gz
tar xzvf master.tar.gz
rm master.tar.gz
```
2. Navigate to the scripts folder:
```shell
cd codex-testnet-starter-master/scripts
```
3. Install dependencies when required:
```shell
# Debian-based Linux
sudo apt update && sudo apt install libgomp1
```
4. Download Codex binaries from GitHub releases:
```shell
./download_online.sh
```
5. Generate an Ethereum keypair:
```shell
./generate.sh
```
Your private key will be saved to the `eth.key` file and your address to the `eth.address` file.
6. Fund the address shown on screen with test tokens:
- Use the web faucets to mint some [ETH](https://faucet-eth.testnet.codex.storage) and [TST](https://faucet-tst.testnet.codex.storage) tokens.
- You can also do this in the Discord [# bot](https://discord.com/channels/895609329053474826/1230785221553819669) channel
- Use the `/set ethaddress` command to enter your generated address
- Use the `/mint` command to receive ETH and TST tokens
- Use the `/balance` command to check that you have received the test tokens successfully
7. Run Codex node:
```shell
./run_client.sh
```
8. Configure [port forwarding](#basic-common) and you are ready to go.
### Windows {#basic-windows}
1. Download the master tarball from the Codex testnet starter repository:
> [!WARNING]
> Windows antivirus software and built-in firewalls may cause steps to fail. We will cover some possible errors here, but always consider checking your setup if requests fail - in particular, if temporarily disabling your antivirus fixes it, then it is likely to be the culprit.
```batch
curl -LO https://github.com/codex-storage/codex-testnet-starter/archive/master.tar.gz
```
If you see an error like:
```batch
curl: (35) schannel: next InitializeSecurityContext failed: CRYPT_E_NO_REVOCATION_CHECK (0x80092012) - The revocation function was unable to check revocation for the certificate.
```
You may need to add the `--ssl-no-revoke` option to your curl call, e.g.:
```batch
curl -LO --ssl-no-revoke https://github.com/codex-storage/codex-testnet-starter/archive/master.tar.gz
```
2. Extract the contents of the tar file, and then delete it:
```batch
tar xzvf master.tar.gz
del master.tar.gz
```
3. Navigate to the scripts folder:
```batch
cd codex-testnet-starter-master\scripts\windows
```
4. Download Codex binaries from GitHub releases:
```batch
download-online.bat
```
5. Generate an Ethereum keypair:
```batch
generate.bat
```
Your private key will be saved to the `eth.key` file and your address to the `eth.address` file.
6. Fund the address shown on screen with test tokens:
- Use the web faucets to mint some [ETH](https://faucet-eth.testnet.codex.storage) and [TST](https://faucet-tst.testnet.codex.storage) tokens.
- You can also do this in the Discord [# bot](https://discord.com/channels/895609329053474826/1230785221553819669) channel
- Use the `/set ethaddress` command to enter your generated address
- Use the `/mint` command to receive ETH and TST tokens
- Use the `/balance` command to check that you have received the test tokens successfully
7. Run Codex node:
```batch
run-client.bat
```
8. Configure [port forwarding](#basic-common) and you are ready to go.
### All OS {#basic-common}
Configure [port forwarding](https://en.wikipedia.org/wiki/Port_forwarding) on your Internet router:
| # | Protocol | Port | Description |
| - | -------- | ------ | ----------------- |
| 1 | `UDP` | `8090` | `Codex Discovery` |
| 2 | `TCP` | `8070` | `Codex Transport` |
After your node is up and running, you can use the [Codex API](/developers/api) to interact with your Codex node; please check our [API walk-through](/learn/using) for more details.
You can also use the [Codex App UI](https://app.codex.storage) to interact with your local Codex node.
Need help? Reach out to us in the [#:sos:|node-help](https://discord.com/channels/895609329053474826/1286205545837105224) channel or check the [troubleshooting guide](/learn/troubleshoot.md).
## Running a Storage Provider (Web Version) {#sp-guide-web}
Work in progress :construction:
## Testnet Data
### Bootstrap Nodes
**Codex**
```shell
spr:CiUIAhIhAiJvIcA_ZwPZ9ugVKDbmqwhJZaig5zKyLiuaicRcCGqLEgIDARo8CicAJQgCEiECIm8hwD9nA9n26BUoNuarCEllqKDnMrIuK5qJxFwIaosQ3d6esAYaCwoJBJ_f8zKRAnU6KkYwRAIgM0MvWNJL296kJ9gWvfatfmVvT-A7O2s8Mxp8l9c8EW0CIC-h-H-jBVSgFjg3Eny2u33qF7BDnWFzo7fGfZ7_qc9P
spr:CiUIAhIhAyUvcPkKoGE7-gh84RmKIPHJPdsX5Ugm_IHVJgF-Mmu_EgIDARo8CicAJQgCEiEDJS9w-QqgYTv6CHzhGYog8ck92xflSCb8gdUmAX4ya78QoemesAYaCwoJBES39Q2RAnVOKkYwRAIgLi3rouyaZFS_Uilx8k99ySdQCP1tsmLR21tDb9p8LcgCIG30o5YnEooQ1n6tgm9fCT7s53k6XlxyeSkD_uIO9mb3
spr:CiUIAhIhA6_j28xa--PvvOUxH10wKEm9feXEKJIK3Z9JQ5xXgSD9EgIDARo8CicAJQgCEiEDr-PbzFr74--85TEfXTAoSb195cQokgrdn0lDnFeBIP0QzOGesAYaCwoJBK6Kf1-RAnVEKkcwRQIhAPUH5nQrqG4OW86JQWphdSdnPA98ErQ0hL9OZH9a4e5kAiBBZmUl9KnhSOiDgU3_hvjXrXZXoMxhGuZ92_rk30sNDA
spr:CiUIAhIhA7E4DEMer8nUOIUSaNPA4z6x0n9Xaknd28Cfw9S2-cCeEgIDARo8CicAJQgCEiEDsTgMQx6vydQ4hRJo08DjPrHSf1dqSd3bwJ_D1Lb5wJ4Qt_CesAYaCwoJBEDhWZORAnVYKkYwRAIgFNzhnftocLlVHJl1onuhbSUM7MysXPV6dawHAA0DZNsCIDRVu9gnPTH5UkcRXLtt7MLHCo4-DL-RCMyTcMxYBXL0
spr:CiUIAhIhAzZn3JmJab46BNjadVnLNQKbhnN3eYxwqpteKYY32SbOEgIDARo8CicAJQgCEiEDNmfcmYlpvjoE2Np1Wcs1ApuGc3d5jHCqm14phjfZJs4QrvWesAYaCwoJBKpA-TaRAnViKkcwRQIhANuMmZDD2c25xzTbKSirEpkZYoxbq-FU_lpI0K0e4mIVAiBfQX4yR47h1LCnHznXgDs6xx5DLO5q3lUcicqUeaqGeg
spr:CiUIAhIhAgybmRwboqDdUJjeZrzh43sn5mp8jt6ENIb08tLn4x01EgIDARo8CicAJQgCEiECDJuZHBuioN1QmN5mvOHjeyfmanyO3oQ0hvTy0ufjHTUQh4ifsAYaCwoJBI_0zSiRAnVsKkcwRQIhAJCb_z0E3RsnQrEePdJzMSQrmn_ooHv6mbw1DOh5IbVNAiBbBJrWR8eBV6ftzMd6ofa5khNA2h88OBhMqHCIzSjCeA
spr:CiUIAhIhAntGLadpfuBCD9XXfiN_43-V3L5VWgFCXxg4a8uhDdnYEgIDARo8CicAJQgCEiECe0Ytp2l-4EIP1dd-I3_jf5XcvlVaAUJfGDhry6EN2dgQsIufsAYaCwoJBNEmoCiRAnV2KkYwRAIgXO3bzd5VF8jLZG8r7dcLJ_FnQBYp1BcxrOvovEa40acCIDhQ14eJRoPwJ6GKgqOkXdaFAsoszl-HIRzYcXKeb7D9
```
**Geth**
```shell
enode://cff0c44c62ecd6e00d72131f336bb4e4968f2c1c1abeca7d4be2d35f818608b6d8688b6b65a18f1d57796eaca32fd9d08f15908a88afe18c1748997235ea6fe7@159.223.243.50:40010
enode://ea331eaa8c5150a45b793b3d7c17db138b09f7c9dd7d881a1e2e17a053e0d2600e0a8419899188a87e6b91928d14267949a7e6ec18bfe972f3a14c5c2fe9aecb@68.183.245.13:40030
enode://4a7303b8a72db91c7c80c8fb69df0ffb06370d7f5fe951bcdc19107a686ba61432dc5397d073571433e8fc1f8295127cabbcbfd9d8464b242b7ad0dcd35e67fc@174.138.127.95:40020
enode://36f25e91385206300d04b95a2f8df7d7a792db0a76bd68f897ec7749241b5fdb549a4eecfab4a03c36955d1242b0316b47548b87ad8291794ab6d3fecda3e85b@64.225.89.147:40040
enode://2e14e4a8092b67db76c90b0a02d97d88fc2bb9df0e85df1e0a96472cdfa06b83d970ea503a9bc569c4112c4c447dbd1e1f03cf68471668ba31920ac1d05f85e3@170.64.249.54:40050
enode://6eeb3b3af8bef5634b47b573a17477ea2c4129ab3964210afe3b93774ce57da832eb110f90fbfcfa5f7adf18e55faaf2393d2e94710882d09d0204a9d7bc6dd2@143.244.205.40:40060
enode://6ba0e8b5d968ca8eb2650dd984cdcf50acc01e4ea182350e990191aadd79897801b79455a1186060aa3818a6bc4496af07f0912f7af53995a5ddb1e53d6f31b5@209.38.160.40:40070
```
### Smart contracts
| Contract | Address |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| Token | [`0x34a22f3911De437307c6f4485931779670f78764`](https://explorer.testnet.codex.storage/address/0x34a22f3911De437307c6f4485931779670f78764) |
| Verifier | [`0x02dd582726F7507D7d0F8bD8bf8053d3006F9092`](https://explorer.testnet.codex.storage/address/0x02dd582726F7507D7d0F8bD8bf8053d3006F9092) |
| Marketplace | [`0xAB03b6a58C5262f530D54146DA2a552B1C0F7648`](https://explorer.testnet.codex.storage/address/0xAB03b6a58C5262f530D54146DA2a552B1C0F7648) |
### Endpoints
| # | Service | URL |
| - | --------------- | ---------------------------------------------------------------------------- |
| 1 | Geth Public RPC | [rpc.testnet.codex.storage](https://rpc.testnet.codex.storage) |
| 2 | Block explorer | [explorer.testnet.codex.storage](https://explorer.testnet.codex.storage) |
| 3 | Faucet ETH | [faucet-eth.testnet.codex.storage](https://faucet-eth.testnet.codex.storage) |
| 4 | Faucet TST | [faucet-tst.testnet.codex.storage](https://faucet-tst.testnet.codex.storage) |
| 5 | Status page | [status.testnet.codex.storage](https://status.testnet.codex.storage) |