Project Phoenix

A frontend library to vastly improve the dapp development experience and solve typical problems dapp developers encounter. It would combine several ideas we have brainstormed before, including:

  • Event sourcing solution
  • Streams
  • Redux provider
  • Client-side GraphQL API

Goals:

Create a library that lets the developer easily define a data schema for contracts and integrate with a solution such as Redux, without having to worry about syncing and event sourcing.
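
To make that concrete, here is a purely hypothetical sketch of what the API could look like, in TypeScript. Every name except the Redux import is invented for illustration; nothing here is implemented.

import { createStore } from "redux";

// Hypothetical API: point the library at a contract and a starting block,
// and get back a Redux reducer plus a background sync process that replays
// past events (event sourcing) and subscribes to new ones.
declare function defineContractSchema(config: {
  abi: object[];     // the contract ABI, source of the event definitions
  address: string;   // deployed contract address
  fromBlock: number; // where event extraction starts (see Ideas, item 2)
}): {
  reducer: (state: any, action: any) => any;
  startSync: () => void;
};

const { reducer, startSync } = defineContractSchema({
  abi: [],           // a real ABI would go here
  address: "0x0000000000000000000000000000000000000000",
  fromBlock: 0,
});

const store = createStore(reducer); // contract state flows straight into Redux
startSync();                        // begin event sourcing in the background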

Ideas

  1. A schema needs to be defined for the data extraction.
     1.1. Consider extracting this schema from the ABI; in theory the contract state can be replicated from its events alone, if they are well made (see the first sketch after this list).
  2. Define the block number from which to begin extracting data.
     2.1. This can be extracted automatically from the transaction that deployed the contract.
  3. Option to save / restore data from IPFS
  4. Consider using https://github.com/hasura/client-side-graphql (see https://blog.hasura.io/client-side-graphql-schema-resolving-and-schema-stitching-f4d8bccc42d2/).
     4.1. Is GraphQL necessary if the schema is already defined by the ABI and functions like map, filter, and reduce can be used to filter the data?
  5. There is a limit on how much data you can put in LocalStorage. The limit is set by the browser vendors and therefore varies between browsers; to be safe, assume the web application has only 2.5 MB of storage space.
  6. https://github.com/tantaman/LargeLocalStorage is an alternative to LocalStorage. It requires a user permission and can be used depending on the amount of data to store (a possible config option). Reported limits conflict: some sites mention 2.5 MB, others 5 MB (see the second sketch after this list).
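
A minimal sketch of idea 1.1, in TypeScript: the ABI is standard JSON, so an event schema can be derived from it without any extra library. The names (schemaFromAbi, AbiEntry, AbiInput) are ours, invented for illustration.

// Derive an extraction schema (event name -> field name -> Solidity type)
// from a contract ABI. Only the "event" entries of the ABI are relevant.
interface AbiInput { name: string; type: string; indexed?: boolean; }
interface AbiEntry { type: string; name?: string; inputs?: AbiInput[]; }

function schemaFromAbi(abi: AbiEntry[]): Record<string, Record<string, string>> {
  const schema: Record<string, Record<string, string>> = {};
  for (const entry of abi) {
    if (entry.type !== "event" || !entry.name || !entry.inputs) continue;
    const fields: Record<string, string> = {};
    for (const input of entry.inputs) fields[input.name] = input.type;
    schema[entry.name] = fields;
  }
  return schema;
}

Whether events alone are enough to replicate the contract state depends on the contract emitting an event for every state change, which is why 1.1 is conditional on the events being well made.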
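
For ideas 5 and 6, the standard StorageManager API can report how much space the origin actually has before we commit to a backend. A sketch, falling back to the conservative 2.5 MB figure when the API is unavailable; note that the estimate covers IndexedDB/Cache storage, while LocalStorage quotas are separate and vendor-specific.

// Estimated free storage for this origin in bytes, or the conservative
// 2.5 MB assumption from idea 5 when navigator.storage is unavailable.
const CONSERVATIVE_LIMIT = 2.5 * 1024 * 1024;

async function availableStorage(): Promise<number> {
  if (typeof navigator !== "undefined" && navigator.storage && navigator.storage.estimate) {
    const { quota = 0, usage = 0 } = await navigator.storage.estimate();
    return quota - usage;
  }
  return CONSERVATIVE_LIMIT;
}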

Data preload

Extracting data can be a time-consuming process, depending on the number of blocks to process. As an alternative, a combination of a smart contract, IPFS and a cronjob can periodically extract data, upload it to IPFS, and let users download it instead of starting the data extraction process from scratch:

a. A background process runs on a server and performs data extraction, using the same schema as the UI, for the delta between the last processed block and the current block.

b. This data is uploaded to IPFS, and addFile is then invoked on the contract below (steps a and b are sketched after step d).

c. Users that want to preload the data check the latest block they have synced on their machine and compare it against lastBlockNum in the contract.

d. If their data is behind, they navigate the extractedData linked list off-chain until they find which data blocks to download from IPFS to update their local storage (second sketch below).
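
A sketch of the background job in steps a and b. The three interfaces are assumptions; they would be wired to real clients (e.g. web3.js or ethers.js for the contract, an IPFS client for the uploads), and the returned hash would still need packing into the contract's bytes32.

// Periodic job: extract the event delta since the last snapshot, pin it to
// IPFS, and register the file in the DataSnapshots contract.
interface EventExtractor {
  currentBlock(): Promise<number>;
  extract(fromBlock: number, toBlock: number): Promise<object[]>; // same schema as the UI
}
interface IpfsClient {
  add(content: string): Promise<string>; // returns the hash of the added file
}
interface SnapshotWriter {
  lastBlockNum(): Promise<number>;
  addFile(ipfsFile: string, blockNumber: number): Promise<void>;
}

async function snapshotJob(chain: EventExtractor, ipfs: IpfsClient, snapshots: SnapshotWriter): Promise<void> {
  const from = (await snapshots.lastBlockNum()) + 1;
  const to = await chain.currentBlock();
  if (to < from) return; // no new blocks since the last snapshot
  const delta = await chain.extract(from, to);
  const hash = await ipfs.add(JSON.stringify(delta));
  await snapshots.addFile(hash, to); // advances the on-chain linked list
}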
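
And a sketch of step d: the client walks the extractedData linked list backwards from lastBlockNum until it reaches its own last synced block, collecting the snapshot files it is missing. This relies on the contract's mappings being public; the reader interface is again an assumption to be wired to the generated getters.

// Returns the IPFS files recorded after localLastBlock, newest first:
// exactly the deltas this client still has to download.
interface SnapshotReader {
  lastBlockNum(): Promise<number>;
  extractedData(blockNumber: number): Promise<{ prevBlockNumber: number; ipfsFile: string }>;
}

async function missingSnapshots(reader: SnapshotReader, localLastBlock: number): Promise<string[]> {
  const files: string[] = [];
  let block = await reader.lastBlockNum();
  while (block > localLastBlock) {
    const { prevBlockNumber, ipfsFile } = await reader.extractedData(block);
    files.push(ipfsFile);
    block = prevBlockNumber; // follow the linked list backwards
  }
  return files;
}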

// Assumes an Owned base contract that provides the onlyOwner modifier.
contract DataSnapshots is Owned {
    
    struct IpfsData {
        uint prevBlockNumber; // block number of the previous snapshot (linked list)
        bytes32 ipfsFile;     // digest of the IPFS hash of the snapshot file
    }

    // Snapshots keyed by the block number they were extracted at; public so
    // clients can navigate the linked list off-chain (step d).
    mapping(uint => IpfsData) public extractedData;

    // Reverse lookup; also used to reject duplicate files.
    mapping(bytes32 => uint) public fileToBlock;
    
    uint public lastBlockNum;

    constructor() public { }

    function addFile(bytes32 _ipfsFile, uint _blockNumber)
        public
        onlyOwner
        returns(bool success)
    {
        require(fileToBlock[_ipfsFile] == 0);
        require(_blockNumber > 0);
        require(extractedData[_blockNumber].ipfsFile == bytes32(0));
        
        extractedData[_blockNumber].prevBlockNumber = lastBlockNum;
        extractedData[_blockNumber].ipfsFile = _ipfsFile;
        
        lastBlockNum = _blockNumber;
        
        fileToBlock[_ipfsFile] = _blockNumber;
        
        // TODO: add event
        // TODO: determine how to handle overwriting data. Maybe a version number
        // can be introduced, with one version chain continuing with one version of
        // the data and another with the other (this is just for verification).
        return true;
    }
    
}