This replaces our "compressByThreshold" strategy (where we included over-time
data only for nodes/flows with more than 10 Cred). Instead, we keep
over-time data only for user nodes. This basically gives us
CredRank-esque semantics up front, which is nice future-proofing.
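As a rough sketch (the node shape and the `isUserNode` predicate below are
assumptions for illustration, not the actual output types), the new rule
amounts to:

```typescript
// Minimal sketch of the new compression rule: drop per-interval cred for
// everything except user nodes. Not the real SourceCred implementation.
type OutputNode = {
  address: string[];
  cred: number; // total cred: always kept
  credOverTime: number[] | null; // per-interval cred: now kept only for users
};

function compressToUserNodes(
  nodes: OutputNode[],
  isUserNode: (address: string[]) => boolean // hypothetical predicate
): OutputNode[] {
  return nodes.map((n) => ({
    ...n,
    credOverTime: isUserNode(n.address) ? n.credOverTime : null,
  }));
}
```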
Test plan: I didn't write tests because this is the sort of thing which
will either work or be obviously broken, and it's in a part of the
codebase which we expect to replace wholesale anyway (when CredRank
rolls around).
Will test via downstream usage before merging.
This commit adds a new intermediate data format that mixes account data
and cred data. It includes full info for users where we have identities
and accounts, as well as info on "unclaimed" aliases (user accounts not
linked to any identity).
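For illustration only (the field names below are assumptions, not the
experimental API itself), the format is roughly a union of identity-backed
participants and unclaimed aliases, each carrying cred data:

```typescript
// Hedged sketch of the mixed account/cred record; illustrative names only.
type CredData = {total: number; credOverTime: number[]};

type ParticipantEntry =
  | {kind: "IDENTITY"; name: string; aliases: string[]; cred: CredData}
  | {kind: "UNCLAIMED_ALIAS"; alias: string; cred: CredData};

type IntermediateOutput = {participants: ParticipantEntry[]};
```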
The non-alias data is useful for computing Grain distributions, which is
why I've prioritized it, but I'm also writing the data out to disk
because I think it might prove useful, either for frontend development
or for external consumers. It's definitely an experimental API, so folks
shouldn't assume it will stay around unchanged indefinitely.
Test plan: Inspected the output, also added tests.
This commit modifies the GitHub createGraph method so that it now
returns a WeightedGraph, and that WeightedGraph sets weight 0 for any
pull request that hasn't been merged.
This improves Cred robustness, since it's easy to (non-maliciously)
create a bunch of unmerged PRs, but getting them merged is a signal of
quality.
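In sketch form (the names below are assumed; this is not the actual plugin
code), the weighting rule is just:

```typescript
// Sketch of the rule: unmerged pull requests get node weight 0, so they
// mint no Cred; merged ones keep whatever weight the type would assign.
type Pull = {address: string; merged: boolean};

function pullNodeWeight(pull: Pull, mergedWeight: number): number {
  return pull.merged ? mergedWeight : 0;
}
```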
Test plan: Unit tests added.
By default, let's only mint Cred for pull requests and reviews.
I'll follow up with a change so that only merged PRs mint Cred. I'll let
all reviews mint 1 Cred for now; this is potentially abusable by adding
tons of reviews to stale PRs, so we'll want to keep an eye out for it.
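To illustrate the intent (the type names and numbers below are placeholders,
not the actual declaration defaults):

```typescript
// Illustrative defaults only; the real values live in the GitHub plugin's
// declaration and may differ from these placeholders.
const githubNodeTypeWeights: Record<string, number> = {
  pull: 1, // merged PRs mint Cred (see the follow-up change)
  review: 1, // each review mints 1 Cred for now
  issue: 0, // issues no longer mint Cred by default
  comment: 0,
  commit: 0,
};
```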
Test plan: Just a weight change. `yarn test`.
Now, if a `weights.json` file is present in the `config/` folder, its
weights will be loaded and used when computing the CredResult.
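A rough sketch of the loading behavior (using Node's fs API; the real CLI
parses the file with SourceCred's weights module rather than a bare
JSON.parse, and this helper name is hypothetical):

```typescript
import {promises as fs} from "fs";
import {join} from "path";

// Returns the parsed weights if config/weights.json exists, else null so the
// caller can fall back to the default weights.
async function loadWeightsIfPresent(instanceRoot: string): Promise<unknown | null> {
  const path = join(instanceRoot, "config", "weights.json");
  try {
    return JSON.parse(await fs.readFile(path, "utf8"));
  } catch (e) {
    if ((e as {code?: string}).code === "ENOENT") return null;
    throw e;
  }
}
```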
Test plan: We don't yet have proper unit testing for the CLI, so I added
weights to the snapshot. The snapshot weights are pretty silly: 32x for
Discourse posts and 32x for GitHub bots. Load the UI via `yarn start`,
and observe that these weights were persisted in the CredResult.
If an instance wants to specify custom TimelineCredParams (e.g. a custom
alpha value), it may now provide a partial TimelineCredParams file as
`config/params.json`.
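For example (sketch only; the actual TimelineCredParams fields and default
values live in the analysis code, and the numbers below are placeholders):

```typescript
// Merging a partial params file over the defaults; placeholder values.
type TimelineCredParams = {alpha: number; intervalDecay: number};

const DEFAULT_PARAMS: TimelineCredParams = {alpha: 0.05, intervalDecay: 0.5};

function resolveParams(partial: Partial<TimelineCredParams>): TimelineCredParams {
  // A config/params.json containing just {"alpha": 0.2} overrides only alpha.
  return {...DEFAULT_PARAMS, ...partial};
}
```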
Test plan: We don't yet have proper unit testing for the CLI, so I added
this case to the test-instance. Using the new frontend, I've verified
that the custom alpha value is correctly reflected in the output data
(use `yarn start`, open weight config, and view alpha).
This changes the instance system structure so that all plugin-specific
configs are organized under `config/plugins/$OWNER/$NAME` instead of
`config/$OWNER/$NAME`. This is a somewhat clearer structure: since
`config/` will hold other files (e.g. `weights.json` or `params.json`),
it's cleaner if everything plugin-specific lives under its own clearly
scoped folder. It also avoids potential confusion if we ever have plugins
with very generically named organizations, e.g. "config".
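For illustration (the helper name is assumed, not an existing function),
resolving a plugin's config directory now looks like:

```typescript
import {join} from "path";

// Hypothetical helper showing the new layout: plugin-specific config lives
// under config/plugins/$OWNER/$NAME inside the instance.
function pluginConfigDir(instanceRoot: string, owner: string, name: string): string {
  return join(instanceRoot, "config", "plugins", owner, name);
}

// e.g. pluginConfigDir(".", "sourcecred", "github")
//   -> "config/plugins/sourcecred/github"
```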
Test plan: The test instance has been updated, and the sharness loading
test still works.
This adds a simple sharness snapshot test for the new instance system,
modeled on the old [test_load_example_github.t][old].
I've set up a very simple test instance with the GitHub and Discourse
plugins, and we verify that the output generated by running `load`,
`graph`, and `score` in succession is stable. (Cache is not
persisted.) This is a nice sanity check to verify that nothing ever gets
totally broken; we'll still want to add unit testing for more specific
features and edge cases.
Test plan:
- `yarn test --full` passes.
- If you `rm -rf sharness/test-instance/output`, then `yarn test --full`
  fails.
- If you then run `./scripts/update_snapshots.sh`, the output directory
  will be restored; afterwards `yarn test --full` passes again.
- To verify that the snapshots are valid, you can test them with the
  frontend: `yarn start --instance sharness/__snapshots__/test-instance`
- If you are actually debugging this script, rather than using
  `yarn test --full` you'll want to invoke
  `(cd sharness; ./load_test_instance.t -l -v)`
[old]: https://github.com/sourcecred/sourcecred/blob/v0.6.0/sharness/test_load_example_github.t