This commit modifies the TimelineExplorer so that the user can both see
the chosen alpha value and change it. Alpha has a pretty profound
impact on the final scores, and I want to tweak it for CredSperiment
week two, so this is an important addition.
Test plan: Modify the alpha, re-run cred calculation, and observe that
the scores change. `yarn test` passes.
This commit integrates the identity plugin, which was created in #1384.
It does this by adding explicit identity fields to the project
configuration, which are then applied when loading the graph in
`api/load.js`.
The actual integration is quite straightforward.
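For illustration, a project configuration with identity fields might look roughly like this (a sketch only; the exact field names in the `Project` type may differ):

```js
// Rough sketch only; the real Project schema may name these fields differently.
const project = {
  id: "sourcecred-test",
  repoIds: [{owner: "sourcecred", name: "sourcecred"}],
  identities: [
    {
      username: "example-user",
      aliases: ["github/example-user", "discourse/example_user"],
    },
  ],
};
```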
Test plan: The underlying logic is thoroughly tested; I added one new
test case to verify that it is integrated properly. Since the project
compat has changed, I've updated all the snapshots. Prior to merging
this PR, I will produce one "integration test", using this code to do
identity resolution for a real project (i.e. on the SourceCred instance
itself).
This commit adds the new SourceCred identity plugin. As described in the
README.md file:
This folder contains the Identity plugin. Unlike most other plugins, the
Identity plugin does not add any new contributions to the graph. Instead, it
allows collapsing different user accounts together into a shared 'identity'
node.
To see why this is valuable, imagine that a contributor has an account on both
GitHub and Discourse (potentially with a different username on each service).
We would like to combine these two identities together, so that we can
represent that user's combined cred properly. The Identity plugin enables this.
Specifically, the instance maintainer can provide a (locally unique) username
for the user, along with a list of aliases the user is known by, e.g.
`github/username` and `discourse/other_username`. The aliases are simple string
representations that are intended to be easy to maintain by hand in a
configuration file. Then, the identity plugin will provide a list of
`NodeContraction`s that can be used by `Graph.contractNodes` to combine the
user identities as described.
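For illustration, an identity entry might look roughly like this (names are illustrative, not the exact schema):

```js
// Illustrative sketch; the real Identity type may differ in detail.
const identity = {
  // Locally unique username chosen by the instance maintainer:
  username: "example-user",
  // Aliases as simple strings, easy to maintain by hand:
  aliases: ["github/example-user", "discourse/example_user"],
};
// Each alias parses to the node address of the corresponding account, and a
// list of identities yields the NodeContractions consumed by
// Graph.contractNodes.
```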
The plugin is broken up into a few submodules:
- `declaration.js` provides the PluginDeclaration. It has a single node
type (the identity node).
- `identity.js` declares the `Identity` type (a username and list of
aliases), allows constructing identity nodes, and does some validation
on the identity username.
- `alias.js` implements the logic for parsing aliases like
"github/decentralion" or "discourse/s_ben" into a node address.
- `nodeContractions.js` provides logic for turning a list of Identities
into a list of NodeContractions, suitable for use in
`Graph.contractNodes`.
The plugin is not yet integrated; that will come in a follow-on commit.
Test plan: Unit tests added; `yarn test` passes.
Currently, attempting to load the SourceCred discourse instance fails
with foreign key constraint errors.
Basically, we have a few weird situations:
- A post (which corresponds to the 'pseudo-topic' generated by creating
a new category) is picked up, but its topic is not detected, because
Discourse does not list these 'pseudo-topics' in the latest-topics
endpoint. Attempting to add the post violates the foreign key constraint.
- We have several likes which correspond to posts that don't exist.
Possibly they were deleted? I'm not sure.
Right now, the load process fails entirely when it hits these
exceptions, which is bad. It should print a warning instead, and
continue without the offending interactions. This commit effects that
change in behavior.
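The shape of the change is roughly the following (function and variable names here are illustrative, not the actual mirror code):

```js
// Sketch: skip the offending interaction with a warning instead of letting
// the exception abort the whole load.
function addPostSafely(addPost, post, url) {
  try {
    addPost(post); // may throw e.g. 'FOREIGN KEY constraint failed'
  } catch (e) {
    console.warn(
      `Warning: Encountered error '${e.message}' while adding post ${url}.`
    );
  }
}
```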
Test plan:
Before this commit, loading the SourceCred discourse with a clean cache
fails. After building with this commit, loading the SourceCred discourse
with a clean cache works and prints the following warnings:
```
$ node bin/sourcecred.js discourse https://discourse.sourcecred.io credbot
GO load-discourse.sourcecred.io
GO discourse
GO discourse/topics
DONE discourse/topics: 3m 53s
GO discourse/posts
Warning: Encountered error 'FOREIGN KEY constraint failed' while adding
post https://discourse.sourcecred.io/t/214/1.
DONE discourse/posts: 2m 38s
GO discourse/likes
DONE discourse/likes: 50s
DONE discourse: 7m 21s
GO compute-cred
DONE compute-cred: 547ms
DONE load-discourse.sourcecred.io: 7m 22s
```
Also, unit tests have been added that verify the specific behavior
changes.
Fixes #1353
Tested manually by creating a docker image including the changes and
running the dev-preview @passbolt command until completion (once hitting
the GitHub rate limit, once until #1354 happens). No more problematic
interactions show up during load.
This fixes a bug introduced in #1371, where selecting a type other than
"All users" and then trying to reselect "All users" would break the UI.
Test plan: Manual inspection; load an instance, try selecting a
different type, and then go back to "All users". It now works as
expected.
This adds a new command, `discourse`, which makes it convenient to load
Discourse servers as standalone SourceCred projects.
For example, you could load the official SourceCred discourse via the
following:
```sh
export SOURCECRED_DISCOURSE_KEY=....
yarn backend
node bin/sourcecred.js discourse https://discourse.sourcecred.io credbot
yarn start
```
I've updated the README with instructions for using the plugin.
Test plan: No automated testing because I see this tool as a temporary
placeholder until we get the SourceCred instances set up. I manually
tested the error cases (e.g. providing an invalid server url) as well as
success cases like the one above. I validated that the weights file
argument is being interpreted correctly (i.e. trying to load invalid
weights produces an expected error message, loading valid weights
results in those weights being present in the UI).
Allow getting the node address for a user, given the user's login. This
will be needed by the upcoming identity plugin.
If the login in question corresponds to a bot, then a bot address will
be returned. When we make the bot set configurable (rather than
hardcoded), we'll need to change the signature of this function; I think
that's fine.
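Roughly, the idea is the following (a sketch, not the exact module layout; the function name, import path, and subtype strings are illustrative):

```js
// Sketch: map a GitHub login to its node address, following the
// ["sourcecred", "github", "USERLIKE", <subtype>, <login>] convention.
// The bot set is currently hardcoded, hence the simple boolean here.
const {NodeAddress} = require("../../core/graph"); // path is illustrative

function loginAddress(login, isBot) {
  const subtype = isBot ? "BOT" : "USER"; // subtype strings are a sketch
  return NodeAddress.fromParts(["sourcecred", "github", "USERLIKE", subtype, login]);
}
```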
Test plan: Unit tests added. (Also, it's really simple.)
This commit adds Graph.contractNodes, which allows collapsing certain
nodes in the graph into each other. This will enable the creation of a
SourceCred "identity" plugin, allowing identity resolution between a user's
different accounts on different services.
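For a rough sense of the usage (the exact `NodeContraction` shape is defined in `graph.js`; this is a sketch that assumes `graph` already contains both account nodes and that the address and node variables were constructed elsewhere):

```js
// Sketch: collapse two account nodes into a single identity node.
const contractions = [
  {
    old: [githubAliceAddress, discourseAliceAddress], // nodes to collapse
    replacement: aliceIdentityNode, // node that absorbs their edges
  },
];
const contracted = graph.contractNodes(contractions);
```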
Test plan: Thorough unit tests have been added. `yarn test` passes.
Thanks to @wchargin for [review feedback][1] which significantly
improved this API.
[1]: https://github.com/sourcecred/sourcecred/pull/1380#discussion_r324958055
Summary:
Flow provides a utility type for this purpose; there’s no need to
implement, document, and keep it in sync ourselves:
<https://flow.org/en/docs/types/utilities/#toc-shape>
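For reference, a tiny example of the utility type in question:

```js
// @flow
type Params = {alpha: number, intervalDecay: number};
// $Shape<Params> makes every property optional:
const overrides: $Shape<Params> = {alpha: 0.05}; // intervalDecay may be omitted
```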
Test Plan:
As written, `yarn flow` passes. Changing the definition of `params` on
line 77 of `load.test.js` to add a key `foo: "wat"`, or changing the value
of `weights` to `{hmm: "hmm"}`, yields appropriate type errors.
wchargin-branch: use-shape
Summary:
This commit modifies `_updateOwnData` to write to both the old
type-specific primitives tables as well as the new EAV table. This
establishes the invariant that a node with non-null `last_update` will
always have primitive data (if its object type has primitive fields).
Test Plan:
Existing tests expanded. Commenting out each of the `updateEavPrimitive`
calls (independently) causes a test to fail. Note that every test that
queries an internal `primitives_*` table to inspect the database state
has been expanded to make an equivalent query against the `primitives`
table as well.
wchargin-branch: mirror-eav-update
Summary:
This establishes the invariant that every object in the `objects` table
has all relevant rows in the `primitives` table, though those rows’
values are not yet set.
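For readers unfamiliar with the layout, here is an illustrative sketch of an entity-attribute-value table of this kind (the real `primitives` schema in `mirror.js` differs in its details):

```js
// Illustrative EAV sketch only; not the actual mirror schema.
const Database = require("better-sqlite3");
const db = new Database(":memory:");
db.exec("CREATE TABLE primitives (object_id TEXT, fieldname TEXT, value)");
// One row per (object, primitive field); values start out unset (NULL):
db.prepare(
  "INSERT INTO primitives (object_id, fieldname) VALUES (?, ?)"
).run("some-object-id", "title");
```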
Test Plan:
Unit tests updated. Manually loading `sourcecred/example-github` and
running `.dump primitives` generates reasonable-looking output, with
lots of rows, including entries for nested fields and eggs. Verified
that the set of non-`id` columns on `Issue` equals the set of values for
the `fieldname` column of an `Issue` object, and likewise for `Commit`s,
thus covering each kind of field.
wchargin-branch: mirror-eav-init
Summary:
See #1313 for context. The plan is to set up dual-writes with `extract`
calls still reading from the old tables until the new ones are complete
and tested. The primary risk to production would be a fatal exception in
the new write paths, which seems like an acceptable risk.
Test Plan:
Unit tests pass.
wchargin-branch: mirror-eav-schema
Summary:
Prior to this commit, removing the `addLink.run({id, fieldname})` on
line 487 of `mirror.js` would cause test failures down the pipeline, but
not at the root cause. Such an error is now caught earlier.
Test Plan:
Comment out line 487 of `mirror.js` and observe that the newly added
test case fails, but the other `registerObject` test cases do not.
wchargin-branch: mirror-test-registerobject-nested
For phase one of the CredSperiment, I need a SourceCred instance which combines GitHub and Discourse servers. I'll also need to be able to give it very specific configuration to collapse certain user identities together.
Shortly after launching the CredSperiment, I plan to come back and totally re-write SourceCred's command line interface and site building system, in a way that will throw away most of the existing codebase.
As such, I found it expedient to add rather hacky and untested support for loading combined GitHub/Discourse instances, so I can land the promised features. This PR does so by:
- adding `sourcecred gen-project` for constructing project.json files
- adding `sourcecred load --project` for loading a project.json file
- ensuring that `load` provides the right plugins based on the project that's in scope
- updating `build_static_site` so that it can use the new `--project` flag
Test plan:
I have done some end-to-end testing, but the overall commit stack lacks automated testing. This is a deliberate tradeoff: I'm planning to re-write this section of the codebase, and the testing ergonomics are not great, so I'd rather accept some technical debt, especially since I plan to pay it off soon.
See the pull request on GitHub for the individual constituent commits.
As suggested by @Beanow in [a review comment][1], this commit factors
loading weights from disk into a cli/common utility method.
The actual method is really generic, and we have a number of similar
constructions across the codebase (grep for `JSON.parse` to find them).
I considered factoring out a generic utility for loading and
deserializing JSON data from disk in general, but it didn't seem
valuable enough at this time.
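For reference, the helper is roughly of this shape (a sketch; the real `cli/common` method may differ in name and error handling, and the weights module's `fromJSON` usage is assumed):

```js
// Sketch of the factored-out weight-loading helper.
const fs = require("fs");
const Weights = require("../analysis/weights"); // path is illustrative

function loadWeights(path) {
  const raw = fs.readFileSync(path, "utf8"); // sync for simplicity in this sketch
  return Weights.fromJSON(JSON.parse(raw));
}
```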
Test plan: Unit tests added, existing tests pass.
[1]: https://github.com/sourcecred/sourcecred/pull/1374#discussion_r323149740
At present, every place in the codebase that needs
TimelineCredParameters constructs them ad-hoc, meaning we don't have any
shared defaults across different consumers.
This commit adds a new type, `PartialTimelineCredParameters`, which
is basically `TimelineCredParameters` with every field marked optional.
Callers can then choose to override any fields where they want
non-default values. A new internal `partialParams` function promotes
these partial parameters to full parameters.
All the public interfaces for using params (namely,
`TimelineCred.compute` and `TimelineCred.reanalyze`) now accept optional
partial params. If the params are not specified, default values are
used; if partial params are provided, all the explicitly provided values
are used, and unspecified values are initialized to default values.
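A rough sketch of the shape (the exact fields and default values here are illustrative):

```js
// @flow
export type TimelineCredParameters = {
  +alpha: number,
  +intervalDecay: number,
};
// Hand-rolled partial: the same fields, each one optional.
export type PartialTimelineCredParameters = {
  +alpha?: number,
  +intervalDecay?: number,
};

const DEFAULT_PARAMS: TimelineCredParameters = {alpha: 0.05, intervalDecay: 0.5};

// Promote partial parameters to full parameters by filling in defaults.
export function partialParams(
  partial: PartialTimelineCredParameters = {}
): TimelineCredParameters {
  return {...DEFAULT_PARAMS, ...partial};
}
```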
Test plan: A simple unit test was added to ensure that weights overrides
work as intended. `git grep "intervalDecay: "` reveals that there are no
other explicit parameter constructions in the codebase. All existing
unit tests pass.
The `timelineCred.js` file is a bit of a beast. One way to start
slimming it down is to pull the parameters into their own file. This is
especially helpful as I'm planning a follow-on PR that will colocate the
default parameter values with their declaration.
The naming of everything in the `/timeline/` subdirectory is a bit
wonky: it reflects that at the time of creation, "Timeline" designated
an experimental version of SourceCred. Now, it is becoming canonical,
but the cumbersome naming persists. I haven't made any effort to tackle
the name debt here.
Test plan: `yarn test` passes; since this is merely a code
reorganization, this gives me great confidence that the change is
correct. I also added a few small tests to the new module. Although the
behavior in question is already tested, I think setting up test files
liberally is a good practice, as the existence of the test file invites
the creation of more tests.
Now that we're adding support for the Discourse plugin, we'll start
having >1 plugin present in the frontend again. As such, we should
provide clear grouping of types in the frontend so that it's possible to
distinguish between a GitHub user and a Discourse user. This commit does
just that, by resurrecting code that we used when the GitHub and Git
plugins co-existed in the frontend.
Test plan: Launch the frontend and observe that node types in the filter
selection dropdown are grouped by the name of their plugin. Also,
clicking on the name of a plugin should filter to all nodes from that
plugin.
Previously, the `sourcecred scores` command assumed that all users are
GitHub users, and assigned users an id based on their GitHub login.
Now, the command returns information on all users, regardless of which
plugin provided them. As such, we need to identify users differently.
Instead of a string id, they now have an array of address parts. That
array contains all of the parts of their corresponding node address.
For example, the GitHub user `@Beanow` would correspond to the address
array `["sourcecred", "github", "USERLIKE", "USER", "Beanow"]`
As a general convention, the first two components of any node's address
contain information about the plugin that owns that node. The first
component is the owner of the plugin, and the second is the name of the
plugin. Afterwards, the plugin may represent nodes in whatever manner it
sees fit.
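For reference, the example above expressed with the core address module (the require path is illustrative):

```js
const {NodeAddress} = require("sourcecred/src/core/graph"); // path is illustrative

const beanow = NodeAddress.fromParts([
  "sourcecred", // plugin owner
  "github",     // plugin name
  "USERLIKE",   // plugin-specific structure follows
  "USER",
  "Beanow",
]);
console.log(NodeAddress.toParts(beanow));
// -> ["sourcecred", "github", "USERLIKE", "USER", "Beanow"]
```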
Thanks to @Beanow and @vsoch for some feedback and discussion on this
design.
Test plan: Snapshots have been updated. `yarn test` passes.
Now, instead of always defaulting to GitHub users, the UI shows all
user-typed nodes. This will make SourceCred work non-hackily when there
is e.g. just a Discourse plugin in scope.
I also fixed an issue where it was loading the GitHub declaration in a
hardcoded way, instead of properly getting it from the TimelineCred's
plugin array.
Test plan: Manual UI inspection.
This is a convenience method that extracts cred for all the user-typed
nodes. It's basically an abstraction over calling `credSortedNodes` with
the right set of prefixes.
I forsee using it in at least two places (score retrieval in the CLI and
score display in the frontend) so I decided to make it a method.
Test plan: A very simple unit test was added. (It's a very simple
wrapper function.)
This lets us filter by a group of prefixes simultaneously, which enables
e.g. seeing all user node types at once.
I also tweaked the API to make it a bit more convenient: you can now
pass no arguments and get all nodes in sorted order.
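Usage now looks roughly like this (the prefix constants are assumed to come from the plugins; a sketch, not exact code):

```js
// All nodes, sorted by total cred:
const allNodes = timelineCred.credSortedNodes();
// Only nodes matching any of the given prefixes (e.g. all user types):
const userNodes = timelineCred.credSortedNodes([
  githubUserPrefix,
  discourseUserPrefix,
]);
```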
Test plan: Unit tests updated.
The PluginDeclaration has all of the information we need to configure
TimelineCred: it knows all the node and edge types, as well as which
node types are user (or scoring) node types.
Therefore, we can replace the ad-hoc config object with a simple array
of plugin declarations. Since the plugins will be saved as part of the
TimelineCred, the UI can configure itself to show information only for
plugins that are actually in scope.
Test plan: `yarn test` passes, and the prototype still works. Snapshots
updated.
When a post or topic is deleted, Discourse fetch will give status 410.
As with 404 and 403, we should just ignore the post and move on.
I took the opportunity to slightly refactor the fetch error handling
while I was there.
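The handling is roughly along these lines (a sketch of the idea, not the exact fetcher code):

```js
// Treat "gone" (410) like the other benign statuses and skip the item.
async function fetchJson(fetchImpl, url) {
  const response = await fetchImpl(url);
  if ([403, 404, 410].includes(response.status)) {
    return null; // deleted or inaccessible content: ignore and move on
  }
  if (!response.ok) {
    throw new Error(`Discourse fetch failed: ${response.status} for ${url}`);
  }
  return response.json();
}
```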
Test plan: Previously, doing a load on the SourceCred discourse instance
would fail due to a deleted topic. Now, it doesn't.
This modifies the pluginDeclaration so that it can specify user node
types. This will allow us to replace the TimelineCredConfig type with a
plugin collection instead.
It's expected that the user types will also be present in the node
types, although this isn't validated anywhere at present.
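The extended declaration looks roughly like this (a sketch; the real type may differ in detail, and the address/type imports are assumed to come from the core modules):

```js
// @flow
// (NodeAddressT, EdgeAddressT, NodeType, EdgeType come from core modules.)
export type PluginDeclaration = {|
  +name: string,
  +nodePrefix: NodeAddressT,
  +edgePrefix: EdgeAddressT,
  +nodeTypes: $ReadOnlyArray<NodeType>,
  +edgeTypes: $ReadOnlyArray<EdgeType>,
  // New: node types that represent users (expected to be a subset of nodeTypes).
  +userTypes: $ReadOnlyArray<NodeType>,
|};
```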
Test plan: `yarn flow`.
This updates the cred computation logic so that we can have multiple
"scoring node types".
Context: Currently, we designate a single node type (GitHub users) as
the scoring node type, and normalize so that all users have 1000 score
in total.
This commit updates the pipeline to admit using more than one prefix for
scoring, meaning that we could have GitHub users, Discourse users, and
more, and still have all users sum to 1000 score.
We will still need to update the frontend so that it will have a user
pane which aggregates across all users.
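The normalization itself is simple; conceptually it is something like the following sketch (`hasPrefix` here stands in for the address prefix check):

```js
// Scale scores so that nodes matching *any* scoring prefix sum to 1000.
function normalizeScores(nodes, scoringPrefixes, hasPrefix) {
  const isScoring = (n) =>
    scoringPrefixes.some((prefix) => hasPrefix(n.address, prefix));
  const total = nodes.filter(isScoring).reduce((sum, n) => sum + n.score, 0);
  const scale = total > 0 ? 1000 / total : 0;
  return nodes.map((n) => ({...n, score: n.score * scale}));
}
```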
Test plan: Unit tests updated. `yarn test` passes.
Summary:
This adds `MDM6Qm90NDY0NDczMjE=` (`@allcontributors`) to the blacklist
to enable loading the `aragon/aragon` repository. See #1362 and #996 for
context.
Test Plan:
Running `node ./bin/sourcecred.js load aragon/aragon` on a clean cache
now completes successfully.
wchargin-branch: blacklist-allcontributors
Summary:
This was doing exactly the wrong thing, attempting to update snapshots
whenever the Discourse API token was _not_ present.
Test Plan:
Running `env -u DISCOURSE_TEST_API_KEY ./scripts/update_snapshots.sh`
now successfully updates non-Discourse snapshots, rather than emitting
an error, “Please set the DISCOURSE_TEST_API_KEY environment variable.”.
wchargin-branch: update-snapshots-discourse
Summary:
Generated with `./scripts/update_snapshots.sh` (with #1360 patched in).
This fixes failures introduced in #1358.
Test Plan:
Running `yarn test --full` now passes. Inspecting the diff (after piping
the old and new snapshots to `jq -S .`) shows that this includes only
additions, which seems appropriate given the precipitating change.
wchargin-branch: fix-1358-failures
This changes how TimelineCred filtering works. Instead of using the
`filterTimelineCred` module, which includes all nodes matching
`filterPrefixes`, we now take all nodes matching `scorePrefixes` and
additionally the top `k` nodes for every other type.
This ensures that we will have the top comments, pull requests, issues,
etc in the UI, without needing to take every single comment or PR or
issue.
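The selection rule is roughly the following (a sketch with illustrative names, not the actual implementation):

```js
// Keep every node matching a scoring prefix, plus the top `k` nodes (by cred)
// of every other type.
function selectNodes(nodesByType, scorePrefixes, hasPrefix, k) {
  const selected = [];
  for (const [type, nodes] of nodesByType) {
    const isScoringType = scorePrefixes.some((p) => hasPrefix(type.prefix, p));
    const sorted = [...nodes].sort((a, b) => b.cred - a.cred);
    selected.push(...(isScoringType ? sorted : sorted.slice(0, k)));
  }
  return selected;
}
```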
Concurrently, the UI is updated so that every type is included in the
filter dropdown.
CHANGELOG has been updated, since this is user-facing.
Test plan: `yarn test` passes, snapshots are updated, and I also tested
the UI manually.
TimelineCred computation is implemented as follows:
- Compute Distribution
- Filter it down to specified node types
- Wrap the filtered results into a TimelineCred
I want to change how the filtering works. The new filtering logic will
depend on logic we've already implemented in TimelineCred; therefore
filtering should be done on the TimelineCred object and not separately.
Specifically, I want to be able to filter down to the highest-scored
nodes by type (dependent on the type).
As a first step, I've refactored the interface to TimelineCred so that
the filtering is an implementation detail, i.e. the TimelineCred
constructor doesn't expect objects defined in `filterTimelineCred`.
Test plan: `yarn test` passes after a snapshot update.
This modifies the TimelineCred serialization so that it includes the
CredConfig in the JSON. This means that it's easier to coordinate which
plugins and types are in scope, as the data itself can contain that
information.
Rather than define a new hand-rolled serializer, I just passed the
config directly through for stringification. Unit tests verify that this
still works (round-trip serialization is tested). As an added sanity
check, I generated a new small `cred.json`, and inspected the file via
`cat` to ensure that it's still legible text, and isn't interpreted as a
binary file due to the `NUL` bytes in node addresses.
Every client that previously depended on the `DEFAULT_CRED_CONFIG` now
properly gets its cred configuration from the JSON.
Test plan: Unit tests for serialization already exist. Generated a fresh
`cred.json` file and tested the frontend with it. Also,
`yarn test --full` passes.
Blacklist more problematic quasar interactions
Summary:
Context: <https://github.com/sourcecred/sourcecred/issues/1256#issuecomment-526252852>
Without also blacklisting the reaction, we hit an invariant violation in
the relational view (reactions are expected to have exactly one author).
Test Plan:
Running `node ./bin/sourcecred.js load quasarframework/quasar-cli` now
completes successfully (in about 2 minutes 40 seconds). It does emit a
warning:
```
Issue[MDU6SXNzdWUzNDg0NjUzNDg=].reactions: unexpected null value
```
…because one of the reactions was blacklisted. But the relational view
handles this correctly, it seems: timeline cred is still computed and
renders without obvious error.
wchargin-branch: blacklist-more-quasar
Summary:
The format of GitHub’s GraphQL object IDs is explicitly opaque, and so
we must not introspect them in any way that would influence our results.
But it seems reasonable to introspect these IDs solely for diagnostic
purposes, enabling us to proactively detect GitHub’s contract violations
while we still have useful information about the root cause.
This commit adds an optional `guessTypename` option to the Mirror
constructor, which accepts a function that attempts to guess an object’s
typename based on its ID. If the guess differs from what the server
claims, we continue on as before, but emit a console warning to help
diagnose the issue more quickly.
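The heuristic is roughly as follows (a sketch; the IDs are officially opaque, so this is best-effort guessing for diagnostics only):

```js
// GitHub's legacy object IDs often base64-decode to strings like
// "012:Organization43093820", from which a typename can be guessed.
function guessTypename(id) {
  const decoded = Buffer.from(id, "base64").toString("utf8");
  const match = /^\d+:([A-Za-z]+?)\d*$/.exec(decoded);
  return match ? match[1] : null;
}

// guessTypename("MDEyOk9yZ2FuaXphdGlvbjQzMDkzODIw") === "Organization"
```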
Resolves #1336. See that issue for details.
Test Plan:
Unit tests for `mirror.js` updated, retaining full coverage. To test
manually, revert #1335, then load `quasarframework/quasar-cli`. Note
that it emits the following warning before failing:
> Warning: when setting Reaction["MDg6UmVhY3Rpb24zNDUxNjA2MQ=="].user:
> object "MDEyOk9yZ2FuaXphdGlvbjQzMDkzODIw" looks like it should have
> type "Organization", but the server claims that it has type "User"
Unit tests for the GitHub typename guesser added as well.
Running `yarn test --full` passes.
wchargin-branch: mirror-guess-typenames
Summary:
Upgrading past a security fix in that package. Generated by running
`yarn add eslint@^6.2.2 babel-eslint@^10.0.3`: `eslint` to update the
problematic transitive dependency, and `babel-eslint` to avoid
<https://github.com/eslint/eslint/issues/12117>.
Test Plan:
Running `yarn lint` yields no false positives, and does complain on true
positives. Running `yarn list --pattern eslint-utils` lists only v1.4.2.
wchargin-branch: eslint-utils-1.4.2
Summary:
The current implementation of `NullUtil.filterList` uses an `any`-cast.
This is fine as long as the definition is actually typesafe; we should
take at least a little care to ensure that it is. This commit adds a
typesafe version, commented out but still typechecked, and refines the
type around the `any`-cast to make the cast slightly more robust.
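The typesafe formulation is along these lines (kept as a reference; the shipped version retains the narrower `any`-cast):

```js
// @flow
// Reference (typesafe) version of filterList:
export function filterList<T>(xs: $ReadOnlyArray<?T>): Array<T> {
  const result = [];
  for (const x of xs) {
    if (x != null) {
      result.push(x);
    }
  }
  return result;
}
```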
Test Plan:
Note that changing `$ReadOnlyArray<?T>` to `$ReadOnlyArray<?T | number>`
in the declaration of `filterList` caused no Flow error prior to this
commit, but now causes one.
wchargin-branch: filter-list-typecheck
PR #1325 introduced a failing snapshot test, which was promptly caught
by @wchargin. This commit fixes it by running
`./scripts/update_snapshots.sh`. Also, I bumped the project JSON version
number, which also should have happened in #1325.
Test plan: `yarn test --full` passes.
This commit modifies `cli/load` to appropriately load a Discourse key
from the environment, if it is available.
The mechanics are basically the same as with the GitHub token.
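The lookup is essentially this (a sketch; the actual option wiring in `cli/load.js` may differ):

```js
// Read the Discourse API key from the environment, if present.
function discourseKey() {
  const key = process.env.SOURCECRED_DISCOURSE_KEY;
  return key == null || key === "" ? null : key;
}
```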
Test plan: Unit tests added. `yarn test` passes.
This commit modifies the `Project` type so that it allows settings for a
Discourse server, and ensures that `api/load` will appropriately load
the server in question, and include it in the output graph.
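Concretely, the `Project` type now carries something roughly like this (a sketch; the exact field names may differ, and `RepoId` is assumed to be imported from the core repo-id module):

```js
// @flow
export type Project = {|
  +id: string,
  +repoIds: $ReadOnlyArray<RepoId>,
  // Null when the project has no Discourse data source.
  +discourseServer: {|+serverUrl: string, +apiUsername: string|} | null,
|};
```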
Putting the full Discourse declaration directly into the Project type is
an unsustainable development practice—in general, adding plugins should
not require changing core data types. However, at the moment I'm punting
on polishing the plugin system, in favor of adding the Discourse plugin
quickly, so I just put it into Project alongside the repo ids.
In the future, I expect to refactor the plugins around a much cleaner
interface; it's just not a priority as yet. (Tracking: #1120.)
This commit also makes the GitHub token optional in `api/load`, since
now it's very plausible that a user will want to only load a Discourse
server, and therefore not require a GitHub token.
As of this commit, it's still impossible to load Discourse projects, as
the CLI always sets a null Discourse server; and in any case, the
frontend would not properly display the project in question, as any
Discourse types would get filtered out.
Test plan: Mocking unit tests have been added to `api/load.test.js` to
ensure that the Discourse graph is loaded and merged correctly.
This adds a new method called `filter` to the `NullUtil` module.
`filter` enables you to filter all the null-like values out of an array
in a convenient typesafe way. (It's really just a wrapper around
`Array.filter((x) => x != null)` with a type signature.)
Test plan: Unit tests added (for both functionality and type safety).
This is the analogue to `github/loadGraph`, but for Discourse. It
basically pipes together the mechanisms for loading Discourse data and
creating a Discourse graph from them, resulting in a single endpoint for
consumption in the API.
In contrast to github, the method is called `loadDiscourse` and not
`loadGraph`, which seemed more appropriate to me. I haven't changed
the corresponding GitHub method's name. (I'm currently knowingly letting
conceptual debt accumulate around the plugin interface; I expect to do a
full refactor within the next few months.)
Test plan: This is the kind of "pipe together tested APIs involving IO"
code which I have decided not to write explicit tests for. However, it
is still protected by Flow, and I have a branch (`discourse-plugin`)
which uses this code to do a full Discourse load.
This adds rate limiting to the Discourse fetch logic, so that we can
actually load nontrivial servers without getting a 529 failure.
We could have used retries; I thought it was more polite to actually limit
the rate at which we make requests. However, to avoid seeing 529s in
practice, I left a bit of a buffer: we make only 55 requests per minute,
although 60 would be allowed.
If we want to improve Discourse loading time, we could boost up to the
full 60 request/min, but add in retries. (Or we could switch to retries
entirely.)
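The throttling amounts to spacing out requests; a minimal sketch of the idea (the real fetcher wiring differs):

```js
const MAX_REQUESTS_PER_MINUTE = 55;
const MIN_INTERVAL_MS = Math.ceil(60000 / MAX_REQUESTS_PER_MINUTE);

// Returns a wrapper that delays calls so that at most ~55 run per minute.
function makeLimiter() {
  let nextAllowed = 0;
  return async function limited(fn) {
    const now = Date.now();
    const wait = Math.max(0, nextAllowed - now);
    nextAllowed = Math.max(now, nextAllowed) + MIN_INTERVAL_MS;
    if (wait > 0) {
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
    return fn();
  };
}

// Usage sketch:
// const limit = makeLimiter();
// const response = await limit(() => fetch(topicUrl));
```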
Test plan: This logic is untested; however, my full `discourse-plugin`
branch uses it to do full Discourse loads without issue.
Adding a Docker container recipe and README instructions for running SourceCred
Signed-off-by: Vanessa Sochat <vsochat@stanford.edu>
Test plan: @decentralion verified that the commands work on a fresh setup prior to merging.