sourcecred/package.json

{
"name": "sourcecred",
"version": "0.1.0",
"private": true,
"dependencies": {
"aphrodite": "^2.1.0",
deps: add `better-sqlite3` (#836)

Summary:
I selected this over the alternatives, `sqlite` and `sqlite3`, primarily
because its README explicitly acknowledges that using asynchronous APIs for
CPU-bound or serialized work units is worse than useless. To me, this is a
sign that the maintainer has his head on straight. The many-fold performance
increase over `sqlite` and `sqlite3` is nice to have, too.

For now, we use my fork of the project, which includes a critical patch to
support private in-memory databases via SQLite’s standard `:memory:` filepath.
When this patch is merged upstream, we can move back to mainline.

Test Plan:
The following session demonstrates the basic API and validates that the
install has completed successfully:

```js
const Database = require("better-sqlite3");
const db = new Database("/tmp/irrelevant", {memory: true});
db.prepare("CREATE TABLE pythagorean_triples (x, y, z)").run();
const insert = db.prepare("INSERT INTO pythagorean_triples VALUES (?, ?, ?)");
const get = db.prepare(
  "SELECT rowid, x * x + y * y AS xxyy, z * z AS zz FROM pythagorean_triples"
);
function print(x) {
  console.log(JSON.stringify(x));
}
print(insert.run(3, 4, 5));
print(get.all());
print(insert.run(5, 12, 13));
print(get.all());
db.prepare("DELETE FROM pythagorean_triples").run();
print(get.all());
```

It prints:

```js
{"changes":1,"lastInsertROWID":1}
[{"rowid":1,"xxyy":25,"zz":25}]
{"changes":1,"lastInsertROWID":2}
[{"rowid":1,"xxyy":25,"zz":25},{"rowid":2,"xxyy":169,"zz":169}]
[]
```

wchargin-branch: dep-better-sqlite3
2018-09-14 01:20:10 +00:00
"better-sqlite3": "wchargin/better-sqlite3#wchargin-private-inmemory-db",
"chalk": "1.1.3",
"commonmark": "^0.28.1",
"express": "^4.16.3",
"fs-extra": "3.0.1",
"history": "^3.0.0",
"isomorphic-fetch": "^2.2.1",
"json-stable-stringify": "^1.0.1",
"lodash.clonedeep": "^4.5.0",
"lodash.isequal": "^4.5.0",
"lodash.sortby": "^4.7.0",
"mkdirp": "^0.5.1",
"object-assign": "4.1.1",
Store GitHub data gzipped at rest (#751)

Summary:
We store the relational view in `view.json.gz` instead of `view.json`, taking
advantage of the isomorphic `pako` library for gzip encoding and decoding.

Sample space savings (note that post bodies are included; i.e., #747 has not
been applied):

```
   SAVE     OLD (B)     NEW (B) REPO
  89.7%       25326        2617 sourcecred/example-github
  82.9%     3257576      555948 sourcecred/sourcecred
  85.2%    11287621     1665884 ipfs/js-ipfs
  88.0%    20953425     2520358 gitcoinco/web
  84.4%    38196825     5951459 ipfs/go-ipfs
  84.9%   205770642    31101452 tensorflow/tensorflow
```

<details>
<summary>Script to generate space savings output</summary>

```shell
savings() {
    printf '% 7s % 11s % 11s %s\n' 'SAVE' 'OLD (B)' 'NEW (B)' 'REPO'
    for repo; do
        file="${SOURCECRED_DIRECTORY}/data/${repo}/github/view.json.gz"
        if ! [ -f "${file}" ]; then
            printf >&2 'warn: no such file %s\n' "${file}"
            continue
        fi
        script="$(sed -e 's/^ *//' <<EOF
            repo = '${repo}'
            pre_size = $(<"${file}" gzip -dc | wc -c)
            post_size = $(<"${file}" wc -c)
            percentage = '%0.1f%%' % (100 * (1 - post_size / pre_size))
            p = '% 7s % 11d % 11d %s' % (percentage, pre_size, post_size, repo)
            print(p)
EOF
        )"
        python3 -c "${script}"
    done
}
```

</details>

Closes #750.

Test Plan:
Comparing the raw old version with the decompressed new version shows that
they are identical:

```
$ <~/tmp/sourcecred/data/sourcecred/example-github/github/view.json \
>   shasum -a 256 -
63853b9d3f918274aafacf5198787e18185a61b9c95faf640a1e61f5d11fa19f  -
$ <~/tmp/sourcecred/data/sourcecred/example-github/github/view.json.gz \
>   gzip -dc | shasum -a 256
63853b9d3f918274aafacf5198787e18185a61b9c95faf640a1e61f5d11fa19f  -
```

Additionally, `yarn test --full` passes, and `yarn start` still loads data and
runs PageRank properly.

wchargin-branch: gzip-relational-view
2018-09-01 17:42:30 +00:00
"pako": "^1.0.6",
"promise": "8.0.1",
"react": "^16.4.1",
"react-dom": "^16.4.1",
"react-router": "3.2.1",
Retry GitHub queries with exponential backoff (#699)

Summary:
This patch adds independent exponential backoff to each individual GitHub
GraphQL query. We remove the fixed `GITHUB_DELAY_MS` delay before each query in
favor of this solution, which requires no additional configuration (thus
resolving a TODO in the process).

We use the NPM module `retry` with its default settings: namely, a maximum of
10 retries with factor-2 backoff starting at 1000ms. Empirically, it seems very
unlikely that we should require much more than 2 retries for a query. (See Test
Plan for more details.)

This is both a short-term unblocker and a good kind of thing to have in the
long term.

Test Plan:
Note that `yarn test --full` passes, including `fetchGithubRepoTest.sh`.

Consider manual testing as follows. Add `console.info` statements in
`retryGithubFetch`, then load a large repository like TensorFlow, and observe
the output:

```shell
$ node bin/sourcecred.js load --plugin github tensorflow/tensorflow 2>&1 | ts -s '%.s'
0.252566 Fetching repo...
0.258422 Trying...
5.203014 Trying...
[snip]
1244.521197 Trying...
1254.848044 Will retry (n=1)...
1260.893334 Trying...
1271.547368 Trying...
1282.094735 Will retry (n=1)...
1283.349192 Will retry (n=2)...
1289.188728 Trying...
[snip]
1741.026869 Ensuring no more pages...
1742.139978 Creating view...
1752.023697 Stringifying...
1754.697116 Writing...
1754.697772 Done.
```

This took just under half an hour, with 264 queries total, of which:

- 225 queries required 0 retries;
- 38 queries required exactly 1 retry;
- 1 query required exactly 2 retries; and
- 0 queries required 3 or more retries.

wchargin-branch: github-backoff
2018-08-22 18:37:29 +00:00
"retry": "^0.12.0",
Port skeleton of Odyssey frontend (#1132)

This commit integrates a bare skeleton of the Odyssey frontend that we
implemented in the [odyssey-hackathon] repository. You can see the working
frontend that we are trying to port over at
[sourcecred.io/odyssey-hackathon/][scio].

The prototype in the other repository has some tooling choices which are
incompatible/redundant with decisions in our codebase (sass vs aphrodite), and
requires some tools not yet present here (svg-react-loader). This commit
includes the build and integration work needed to port the prototype frontend
into mainline SourceCred. The frontend scaffold isn't yet integrated with any
"real" Odyssey data.

One potential issue: right now, every page that is rendered from the
SourceCred homepage is contained within a [homepage/Page], meaning that it has
full SourceCred website styling, along with the SourceCred website header. The
[application][scio] also has a header. Currently, I work around this by having
the Odyssey UI cover up the base header (via absolute positioning), which
works but is hacky. We can consider more principled solutions:

- Finding a way to specify routes which aren't contained by [homepage/Page];
  maybe by adding a new top-level route [here][route-alternative].
- Unifying the headers for the Odyssey viewer and the page as a whole (sounds
  like inappropriate entanglement?)
- Having a website header and also an application header (sounds ugly?)

[homepage/Page]: https://github.com/sourcecred/sourcecred/blob/ee1d2fb996718fe41325711271542a54c197a1ed/src/homepage/Page.js
[route-alternative]: https://github.com/sourcecred/sourcecred/blob/ee1d2fb996718fe41325711271542a54c197a1ed/src/homepage/createRoutes.js#L17

Test plan:
Run `yarn start`, and then navigate to `localhost:8080/odyssey/`. Observe that
a working website is displayed, and that the cred logo next to the word
"SourceCred" is loaded properly (i.e., svg-react-loader is integrated
properly). Observe that there are no build/compile errors from either
`yarn start` or `yarn build`. Also, observe that the UI looks passably nice,
and that if the number of elements in the entity lists is larger than can be
displayed, the sidebar pane scrolls independently. The UI was tested in both
Chrome and Firefox.

[odyssey-hackathon]: https://github.com/sourcecred/odyssey-hackathon
[scio]: https://sourcecred.io/odyssey-hackathon/

Thanks to @jmnemo, as the implementation is based on [his work].

[his work]: https://github.com/jmnemo/hackathon-event/
2019-05-06 15:15:39 +00:00
"svg-react-loader": "^0.4.6",
"tmp": "^0.0.33",
"whatwg-fetch": "2.0.3"
},
"devDependencies": {
"babel-core": "6.26.0",
"babel-eslint": "7.2.3",
"babel-jest": "20.0.3",
"babel-loader": "7.1.2",
"babel-plugin-transform-es2015-for-of": "^6.23.0",
"babel-preset-react-app": "^3.1.1",
"babel-runtime": "6.26.0",
"copy-webpack-plugin": "^4.5.2",
"css-loader": "0.28.7",
"dotenv": "4.0.0",
"dotenv-expand": "4.0.1",
"enzyme": "^3.3.0",
"enzyme-adapter-react-16": "^1.1.1",
"enzyme-to-json": "^3.3.3",
"eslint": "4.10.0",
"eslint-config-react-app": "^2.1.0",
"eslint-plugin-flowtype": "2.50.0",
"eslint-plugin-import": "2.8.0",
"eslint-plugin-jsx-a11y": "5.1.1",
"eslint-plugin-react": "7.4.0",
"file-loader": "1.1.5",
"flow-bin": "^0.86.0",
"jest": "^23.6.0",
"jest-fetch-mock": "^1.6.5",
"prettier": "^1.13.4",
"raf": "3.4.0",
"react-dev-utils": "^5.0.0",
"static-site-generator-webpack-plugin": "^3.4.1",
"url-loader": "0.6.2",
"webpack": "3.8.1",
"webpack-dev-server": "2.9.4",
"webpack-manifest-plugin": "1.3.2",
"webpack-node-externals": "^1.7.2"
},
"scripts": {
"prettify": "prettier --write '**/*.js'",
"check-pretty": "prettier --list-different '**/*.js'",
"start": "NODE_ENV=development webpack-dev-server --config config/webpack.config.web.js",
"build": "NODE_ENV=production webpack --config config/webpack.config.web.js",
"backend": "NODE_ENV=development webpack --config config/webpack.config.backend.js",
"test": "node ./config/test.js",
"unit": "BABEL_ENV=test NODE_ENV=test jest --env=jsdom",
Add `sharness` for shell-based testing (#597)

Summary:
We will shortly want to perform testing of shell scripts; it makes the most
sense to do so via the shell. We could roll our own testing framework, but it
makes more sense to use an existing one. By choosing Sharness, we’re in good
company: `go-ipfs` and `go-multihash` use it as well, and it’s derived from
Git’s testing library. I like it a lot.

For now, we need a dummy test file; our test runner will fail if there are no
tests to run. As soon as we have a real test, we can remove this.

This commit was generated by following the “per-project installation”
instructions at https://github.com/chriscool/sharness, and by additionally
including that repository’s `COPYING` file as `SHARNESS_LICENSE`, with a
header prepended. I considered instead adding Sharness as a submodule, which
is supported and has clear advantages (e.g., you can update the thing), but
opted to avoid the complexity of submodules for now.

Test Plan:
Create the following tests in the `sharness` directory:

```shell
$ cat sharness/good.t
#!/bin/sh
test_description='demo of passing tests'
. ./sharness.sh
test_expect_success "look at me go" true
test_expect_success EXPENSIVE "this may take a while" 'sleep 2'
test_done
# vim: ft=sh

$ cat sharness/bad.t
#!/bin/sh
test_description='demo of failing tests'
. ./sharness.sh
test_expect_success "I don't feel so good" false
test_done
# vim: ft=sh
```

Note that `yarn sharness` and `yarn test` fail appropriately. Note that
`yarn sharness-full` fails appropriately after taking two extra seconds, and
`yarn test --full` runs the latter. Each failure message should print the name
of the failing test case, not just the suite name, and should indicate that
the passing tests passed.

Then, remove `sharness/bad.t`, and note that the above commands all pass, with
the `--full` variants still taking longer. Finally, remove `sharness/good.t`,
and note that the above commands all pass (and all pass quickly).

wchargin-branch: add-sharness
2018-08-06 19:56:25 +00:00
"sharness": "make -sC ./sharness prove PROVE_OPTS=-f TEST_OPTS='--chain-lint'",
"sharness-full": "make -sC ./sharness prove PROVE_OPTS=-vf TEST_OPTS='-v --chain-lint --long'",
"coverage": "yarn run unit --coverage",
"flow": "flow",
"lint": "eslint src config --max-warnings 0"
},
"license": "MIT + Apache-2",
"jest": {
"collectCoverageFrom": [
"src/**/*.{js,jsx,mjs}"
],
"setupFiles": [
"<rootDir>/config/polyfills.js",
"<rootDir>/config/jest/setupJest.js"
],
"testMatch": [
"<rootDir>/src/**/__tests__/**/*.{js,jsx,mjs}",
"<rootDir>/src/**/?(*.)(spec|test).{js,jsx,mjs}"
],
"testEnvironment": "node",
"testURL": "http://localhost",
"transform": {
"^.+\\.(js|jsx|mjs)$": "<rootDir>/node_modules/babel-jest",
"^.+\\.css$": "<rootDir>/config/jest/cssTransform.js",
"^(?!.*\\.(js|jsx|mjs|css|json)$)": "<rootDir>/config/jest/fileTransform.js"
},
"transformIgnorePatterns": [
"[/\\\\]node_modules[/\\\\].+\\.(js|jsx|mjs)$"
],
"moduleNameMapper": {
"^react-native$": "react-native-web"
},
"moduleFileExtensions": [
"web.js",
"mjs",
"js",
"json",
"web.jsx",
"jsx",
"node"
]
},
"babel": {
"presets": [
"./config/babel"
]
},
"files": [
"/bin",
"/build"
]
}