Summary:
Generated with `./scripts/update_snapshots.sh` (with #1360 patched in).
This fixes failures introduced in #1358.
Test Plan:
Running `yarn test --full` now passes. Inspecting the diff (after piping
the old and new snapshots to `jq -S .`) shows that this includes only
additions, which seems appropriate given the precipitating change.
wchargin-branch: fix-1358-failures
This changes how TimelineCred filtering works. Instead of using the
filterTimelineCred module, which includes all nodes matching
filterPrefixes, we now take all nodes matching scorePrefixes and
additionally the top `k` nodes for every other type.
This ensures that we will have the top comments, pull requests, issues,
etc. in the UI, without needing to take every single comment, PR, or
issue.
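As a rough sketch of the selection rule (the names `scorePrefixes`, `k`, `scoreOf`, and `typeOf` here are illustrative, not the actual TimelineCred API):

```js
// Illustrative sketch only: keep every node matching a score prefix, and
// the top `k` highest-scored nodes of each remaining type.
function selectNodes(nodes, scorePrefixes, k, scoreOf, typeOf) {
  const selected = [];
  const byType = new Map(); // type -> candidate nodes of that type
  for (const node of nodes) {
    if (scorePrefixes.some((prefix) => node.address.startsWith(prefix))) {
      selected.push(node); // score nodes (e.g. users) are always kept
    } else {
      const bucket = byType.get(typeOf(node)) || [];
      bucket.push(node);
      byType.set(typeOf(node), bucket);
    }
  }
  for (const bucket of byType.values()) {
    bucket.sort((a, b) => scoreOf(b) - scoreOf(a));
    selected.push(...bucket.slice(0, k)); // top-k for every other type
  }
  return selected;
}
```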
Concurrently, the UI is updated so that every type is included in the
filter dropdown.
CHANGELOG has been updated, since this is user-facing.
Test plan: `yarn test` passes, snapshots are updated, and I also tested
the UI manually.
TimelineCred computation is implemented as follows:
- Compute Distribution
- Filter it down to specified node types
- Wrap the filtered results into a TimelineCred
I want to change how the filtering works. The new filtering logic will
depend on logic we've already implemented in TimelineCred; therefore
filtering should be done on the TimelineCred object and not separately.
Specifically, I want to be able to filter down to the highest-scored
nodes by type (dependent on the type).
As a first step, I've refactored the interface to TimelineCred so that
the filtering is an implementation detail, i.e. the TimelineCred
constructor doesn't expect objects defined in `filterTimelineCred`.
Test plan: `yarn test` passes after a snapshot update.
This modifies the TimelineCred serialization so that it includes the
CredConfig in the JSON. This means that it's easier to coordinate which
plugins and types are in scope, as the data itself can contain that
information.
Rather than define a new hand-rolled serializer, I just passed the
config directly through for stringification. Unit tests verify that this
still works (round-trip serialization is tested). As an added sanity
check, I generated a new small `cred.json`, and inspected the file via
`cat` to ensure that it's still legible text, and isn't interpreted as a
binary file due to the `NUL` bytes in node addresses.
Every client that previously depended on the `DEFAULT_CRED_CONFIG` now
properly gets its cred configuration from the JSON.
Test plan: Unit tests for serialization already exist. Generated a fresh
`cred.json` file and tested the frontend with it. Also,
`yarn test --full` passes.
Blacklist more problematic quasar interactions
Summary:
Context: <https://github.com/sourcecred/sourcecred/issues/1256#issuecomment-526252852>
Without also blacklisting the reaction, we hit an invariant violation in
the relational view (reactions are expected to have exactly one author).
Test Plan:
Running `node ./bin/sourcecred.js load quasarframework/quasar-cli` now
completes successfully (in about 2 minutes 40 seconds). It does emit a
warning:
```
Issue[MDU6SXNzdWUzNDg0NjUzNDg=].reactions: unexpected null value
```
…because one of the reactions was blacklisted. But the relational view
handles this correctly, it seems: timeline cred is still computed and
renders without obvious error.
wchargin-branch: blacklist-more-quasar
Summary:
The format of GitHub’s GraphQL object IDs is explicitly opaque, and so
we must not introspect them in any way that would influence our results.
But it seems reasonable to introspect these IDs solely for diagnostic
purposes, enabling us to proactively detect GitHub’s contract violations
while we still have useful information about the root cause.
This commit adds an optional `guessTypename` option to the Mirror
constructor, which accepts a function that attempts to guess an object’s
typename based on its ID. If the guess differs from what the server
claims, we continue on as before, but emit a console warning to help
diagnose the issue more quickly.
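For intuition, a minimal sketch of such a guesser (illustrative only; the real function lives in the GitHub plugin and may differ): GitHub's IDs base64-decode to strings like `012:Organization43093820`, from which a probable typename can be read off.

```js
// Illustrative sketch, not the shipped implementation. Returns a guessed
// typename, or null if the decoded ID doesn't match the expected pattern.
function guessTypename(id) {
  const decoded = Buffer.from(id, "base64").toString("utf8");
  // e.g. "012:Organization43093820" -> "Organization"
  const match = /^\d+:([A-Za-z]+)\d+$/.exec(decoded);
  return match == null ? null : match[1];
}

// guessTypename("MDEyOk9yZ2FuaXphdGlvbjQzMDkzODIw") === "Organization"
```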
Resolves #1336. See that issue for details.
Test Plan:
Unit tests for `mirror.js` updated, retaining full coverage. To test
manually, revert #1335, then load `quasarframework/quasar-cli`. Note
that it emits the following warning before failing:
> Warning: when setting Reaction["MDg6UmVhY3Rpb24zNDUxNjA2MQ=="].user:
> object "MDEyOk9yZ2FuaXphdGlvbjQzMDkzODIw" looks like it should have
> type "Organization", but the server claims that it has type "User"
Unit tests for the GitHub typename guesser added as well.
Running `yarn test --full` passes.
wchargin-branch: mirror-guess-typenames
Summary:
Upgrading past a security fix in `eslint-utils`. Generated by running
`yarn add eslint@^6.2.2 babel-eslint@^10.0.3`: `eslint` to update the
problematic transitive dependency, and `babel-eslint` to avoid
<https://github.com/eslint/eslint/issues/12117>.
Test Plan:
Running `yarn lint` yields no false positives, and does complain on true
positives. Running `yarn list --pattern eslint-utils` lists only v1.4.2.
wchargin-branch: eslint-utils-1.4.2
Summary:
The current implementation of `NullUtil.filterList` uses an `any`-cast.
This is fine as long as the definition is actually typesafe; we should
take at least a little care to ensure that it is. This commit adds a
typesafe version, commented out but still typechecked, and refines the
type around the `any`-cast to make the cast slightly more robust.
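For reference, a typesafe version along the lines of the commented-out one might look like this (a sketch; the shipped module keeps the narrower `any`-cast):

```js
// @flow
// Sketch of a fully typesafe filterList: no casts, and Flow can verify
// that every element pushed into `result` is non-null.
export function filterList<T>(xs: $ReadOnlyArray<?T>): Array<T> {
  const result = [];
  for (const x of xs) {
    if (x != null) {
      result.push(x);
    }
  }
  return result;
}
```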
Test Plan:
Note that changing `$ReadOnlyArray<?T>` to `$ReadOnlyArray<?T | number>`
in the declaration of `filterList` caused no Flow error prior to this
commit, but now causes one.
wchargin-branch: filter-list-typecheck
PR #1325 introduced a failing snapshot test, which was promptly caught
by @wchargin. This commit fixes it by running
`./scripts/update_snapshots.sh`. I also bumped the project JSON version
number, which should have happened in #1325 as well.
Test plan: `yarn test --full` passes.
This commit modifies `cli/load` to appropriately load a Discourse key
from the environment, if it is available.
The mechanics are basically the same as with the GitHub token.
Test plan: Unit tests added. `yarn test` passes.
This commit modifies the `Project` type so that it allows settings for a
Discourse server, and ensures that `api/load` will appropriately load
the server in question, and include it in the output graph.
Putting the full Discourse declaration directly into the Project type is
an unsustainable development practice—in general, adding plugins should
not require changing core data types. However, at the moment I'm punting
on polishing the plugin system, in favor of adding the Discourse plugin
quickly, so I just put it into Project alongside the repo ids.
In the future, I expect to refactor the plugins around a much cleaner
interface; it's just not a priority as yet. (Tracking: #1120.)
This commit also makes the GitHub token optional in `api/load`, since
now it's very plausible that a user will want to only load a Discourse
server, and therefore not require a GitHub token.
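For orientation, the resulting shape is roughly as follows (a sketch with simplified field names, not the exact Flow definitions in the codebase):

```js
// @flow
// Illustrative only: a Project may now carry Discourse settings alongside
// its GitHub repo ids, and the GitHub token is no longer mandatory.
type RepoId = {|+owner: string, +name: string|}; // simplified
type DiscourseServer = {|+serverUrl: string, +apiUsername: string|};
type Project = {|
  +id: string,
  +repoIds: $ReadOnlyArray<RepoId>,
  +discourseServer: DiscourseServer | null,
|};
```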
As of this commit, it's still impossible to load Discourse projects, as
the CLI always sets a null Discourse server; and in any case, the
frontend would not properly display the project in question, as any
Discourse types would get filtered out.
Test plan: Mocking unit tests have been added to `api/load.test.js` to
ensure that the Discourse graph is loaded and merged correctly.
This adds a new method called `filter` to the `NullUtil` module.
`filter` enables you to filter all the null-like values out of an array
in a convenient typesafe way. (It's really just a wrapper around
`Array.filter((x) => x != null)` with a type signature.)
Test plan: Unit tests added (for both functionality and type safety).
This is the analogue to `github/loadGraph`, but for Discourse. It
basically pipes together the mechanisms for loading Discourse data and
creating a Discourse graph from them, resulting in a single endpoint for
consumption in the API.
In contrast to GitHub, the method is called `loadDiscourse` and not
`loadGraph`, which seemed more appropriate to me. I haven't changed
the corresponding GitHub method's name. (I'm currently knowingly letting
conceptual debt accumulate around the plugin interface; I expect to do a
full refactor within the next few months.)
Test plan: This is the kind of "pipe together tested APIs involving IO"
code which I have decided not to write explicit tests for. However, it
is still protected by Flow, and I have a branch (`discourse-plugin`)
which uses this code to do a full Discourse load.
This adds rate limiting to the Discourse fetch logic, so that we
can actually load nontrivial servers without getting a 529 failure.
We could have used retries; I thought it was more polite to actually limit
the rate at which we make requests. However, to avoid seeing 529s in
practice, I left a bit of a buffer: we make only 55 requests per minute,
although 60 would be allowed.
If we want to improve Discourse loading time, we could boost up to the
full 60 requests/min, but add in retries. (Or we could switch to retries
entirely.)
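The shape of the limiter is roughly as follows (a minimal sketch, not the actual fetcher code; the constant 55 is the request budget described above):

```js
// Sketch: space requests evenly so that we never exceed ~55 per minute.
const MIN_MS_BETWEEN_REQUESTS = Math.ceil((60 * 1000) / 55);

function rateLimited(fetchImpl) {
  let nextAllowedAt = 0;
  return async function limitedFetch(url, options) {
    const now = Date.now();
    const waitMs = Math.max(0, nextAllowedAt - now);
    nextAllowedAt = Math.max(now, nextAllowedAt) + MIN_MS_BETWEEN_REQUESTS;
    if (waitMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
    return fetchImpl(url, options);
  };
}
```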
Test plan: This logic is untested; however, my full discourse-plugin
branch uses it to do full Discourse loads without issue.
Add a Docker container recipe and README instructions for running SourceCred
Signed-off-by: Vanessa Sochat <vsochat@stanford.edu>
Test plan: @decentralion verified that the commands work on a fresh setup prior to merging.
Summary:
Generated by manually deleting the three `lodash` paragraphs from the
lockfile and then re-running `yarn`.
Test Plan:
Prior to this commit, running `yarn audit` noted 3011 high-severity
vulnerabilities; now, it notes none. Running `yarn test --full` still
passes.
wchargin-branch: security-upgrade-lodash
Summary:
In #1194, we upgraded Prettier from 1.13.4 to 1.18.2, which takes us
past <https://github.com/prettier/prettier/pull/5647>, a change first
released in Prettier 1.16.0. This commit fixes the uses of deprecated
code introduced as a result. It also upgrades the type definitions to
match, via `flow-typed install prettier@1.18.2`.
Addresses part of #1308.
Test Plan:
Prior to this commit, running `yarn unit` would print
```
console.warn node_modules/prettier/index.js:7934
{ parser: "babylon" } is deprecated; we now treat it as { parser: "babel" }.
```
in two test cases; it no longer prints any such warnings. Furthermore,
running `git grep 'parser.*babylon'` no longer finds any matches.
wchargin-branch: prettier-deprecations
Summary:
This dependency was added in #1249 without typedefs, and so is
implicitly `any`-typed.
Depends on #1309 to fix a bug that would otherwise be a true positive
type error.
Addresses part of #1308.
Generated with `flow-typed install deep-freeze@0.0.1`.
Test Plan:
Running `yarn flow` passes, but fails if you remove the `nodePrefix` or
`edgePrefix` attributes of the Discourse plugin declaration.
wchargin-branch: libdefs-deep-freeze
Summary:
A `PluginDeclaration` must have a `nodePrefix` and an `edgePrefix`, but
the Discourse plugin declaration was missing these. This was not caught
by Flow because `deep-freeze` was introduced in #1249 without type
definitions; see #1308.
Test Plan:
Apply the following patch:
```diff
diff --git a/src/plugins/discourse/declaration.js b/src/plugins/discourse/declaration.js
index 246a0a28..36ae5f13 100644
--- a/src/plugins/discourse/declaration.js
+++ b/src/plugins/discourse/declaration.js
@@ -1,6 +1,6 @@
// @flow
-import deepFreeze from "deep-freeze";
+declare function deepFreeze<T>(x: T): T;
import type {PluginDeclaration} from "../../analysis/pluginDeclaration";
import type {NodeType, EdgeType} from "../../analysis/types";
import {NodeAddress, EdgeAddress} from "../../core/graph";
```
Note that, with this patch, `yarn flow` fails before this change but
passes after it. Running `yarn unit` still passes.
wchargin-branch: discourse-plugin-prefixes
Summary:
Generated by running `flow-typed install --skip --overwrite` and
reverting a minimal set of libdefs such that the change does not
introduce any Flow errors (except Prettier, which is covered by #1307).
Addresses parts of #1308.
Changes:
- `chalk`: upgraded v1.x.x to v2.x.x
- `flow-bin`: no-op; explicit Flow window widening
- `isomorphic-fetch`: no-op; formatting change
- `jest`: updates for Flow v0.104.x (explicit inexact objects), and
also some functional additions
- `object-assign`: no-op; explicit Flow window widening
- `rimraf`: added new at v2.x.x
Test Plan:
Flow passes, by construction.
wchargin-branch: libdefs-clean
Summary:
These can be updated cleanly after applying the SourceCred-specific
patch. I’ve modified the comment on that patch to be clear that it *is*
SourceCred-specific—after updating, I spent a while trying to find why
it was deleted from upstream, before eventually realizing that it never
existed upstream anyway.
Generated by running `flow-typed install express@4.16.3 --overwrite` and
then manually inserting the three “SourceCred-specific hack” comment
blocks.
Addresses part of #1308.
Test Plan:
Running `yarn flow` still passes (but warns if the hacks are removed).
wchargin-branch: libdefs-express
Summary:
These can be updated cleanly now that an upstream pull request has been
merged: <https://github.com/flow-typed/flow-typed/pull/3522>
Generated by running `flow-typed install enzyme@3.3.0 --overwrite`.
Addresses part of #1308.
Test Plan:
Running `yarn flow` still passes.
wchargin-branch: libdefs-enzyme
Summary:
All links in SourceCred must use the `Link` component, providing either
an external URL `href={…}` or an internal route `to={…}`. Any uses of a
raw `<a>` element for internal routes will incur 404s when the
application is hosted on a non-root path, as is currently the case on
the production website.
The change to `FileUploader` is not strictly necessary, as the link has
no styled text and uses a `data:` URL, but there’s no reason not to.
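Illustrative usage (the import path and route names are examples, not necessarily the real ones):

```js
import React from "react";
import Link from "../webutil/Link"; // import path is illustrative

// Internal routes go through the router via `to`, so the hosting path is
// prepended automatically; external URLs use `href`.
const legacyLink = <Link to="/prototype/">(legacy)</Link>;
const externalLink = <Link href="https://discourse.sourcecred.io/">forum</Link>;
```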
Fixes #1304.
Test Plan:
Build the static site:
```
scripts/build_static_site.sh --target cred --project sourcecred/example-github
```
Then run `python3 -m http.server` from the repository root directory—not
the `cred/` subdirectory—and navigate to the timeline cred view:
<http://localhost:8000/cred/timeline/sourcecred/example-github/>
Observe that the “(legacy)” link now has the correct styling and
correctly navigates to the legacy mode page when clicked: prior to this
change, it would navigate to a URL without the proper `/cred/` path
prefix, yielding a 404. On the legacy page, verify that the “timeline
mode” link has the same properties.
Then, visit <http://localhost:8000/cred/test/FileUploader/> and verify
that the inspection test still passes.
Added a regression test to catch further such errors. Note that
reverting the code changes in this commit causes the test to fail, and
that running it with `--verbose` prints the problematic files.
wchargin-branch: fix-bad-routing-404s
Summary:
This console error fires on a production page load of the “prototype”
link from the homepage, and does not seem to reflect an actual error
condition.
Test Plan:
Run `yarn start`, navigate to `/timeline/sourcecred/example-github/`,
and observe that the console error has disappeared.
wchargin-branch: defaultloader-console-error
Summary:
When inserting a “like” action with `INSERT OR IGNORE` semantics, we
also learn whether the action had any effect. We can use this bit to
avoid a separate query checking whether the “like” already exists.
As mentioned here:
<https://github.com/sourcecred/sourcecred/pull/1298#discussion_r314994911>
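The pattern looks roughly like this (a sketch using `better-sqlite3`; the table and column names are illustrative):

```js
const Database = require("better-sqlite3");
const db = new Database("discourse.db");
db.exec(
  "CREATE TABLE IF NOT EXISTS likes (" +
    "post_id INTEGER, username TEXT, timestamp_ms INTEGER, " +
    "PRIMARY KEY (post_id, username))"
);

// `run()` on an INSERT OR IGNORE reports how many rows were actually
// inserted, so we learn whether the like was new without a second SELECT.
const addLikeStmt = db.prepare(
  "INSERT OR IGNORE INTO likes (post_id, username, timestamp_ms) VALUES (?, ?, ?)"
);
function addLike(like) {
  const info = addLikeStmt.run(like.postId, like.username, like.timestampMs);
  return {changed: info.changes > 0};
}
```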
Test Plan:
Running `yarn test` passes as is, and fails if you change `addLike` to
always return either `changed: true` or `changed: false`.
wchargin-branch: discourse-likes-one-query
Summary:
Calling `db.prepare(sql)` parses the text in `sql` and compiles it to a
prepared statement. This takes time, both for the parsing and allocation
itself and for the context switch from JavaScript to C (SQLite).
Prepared statements are designed to be invoked multiple times with
different bound values. This commit factors prepared statement creation
out of loops so that each call to `update` prepares only a constant
number of statements.
In doing so, we naturally factor out some light JS abstractions over the
raw SQL: `addTopic((topic: Topic))`, rather than `addTopicStmt.run(…)`.
In principle, these could be factored out of `update` entirely to
properties set on the class at initialization time, but, as described in
a comment on the GraphQL mirror, we defer this optimization for now as
it introduces additional complexity.
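Schematically, the change is from re-preparing inside a loop to preparing once and wrapping the statement in a small helper (a sketch: the SQL and names are illustrative, assuming `db` is a `better-sqlite3` handle and `topics` is the freshly fetched data):

```js
// Before: one prepare() per topic, inside the update loop.
for (const topic of topics) {
  db.prepare("REPLACE INTO topics (id, title) VALUES (?, ?)").run(
    topic.id,
    topic.title
  );
}

// After: prepare once per update() call, then reuse the statement via a
// small JS wrapper.
const addTopicStmt = db.prepare(
  "REPLACE INTO topics (id, title) VALUES (?, ?)"
);
function addTopic(topic) {
  addTopicStmt.run(topic.id, topic.title);
}
for (const topic of topics) {
  addTopic(topic);
}
```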
Test Plan:
Running `yarn test --full` passes.
wchargin-branch: discourse-sql-cse
This commit changes the Discourse default weights around, most
significantly moving many weights (e.g. LIKES) that have a 0 backward
weight to have a small positive backward weight instead, like 1/16. In
practice, this mitigates an issue where users with few outbound edges
act as "cred sinks" because the cred gets stuck in a loop between the
user and content they've authored.
Test plan: In local experimentation, I've found the new weights produce
more reasonable-seeming cred attribution.
I've written the Discourse plugin with distinct edge types for post and
topic authorship; this allows us to have more precise control over how
cred flows (and mitigates the need for #968). However, I gave the two
types the same name, which is confusing in the weight config UI. Now
their names are properly distinct.
Test plan: It's a simple string change. In (unpublished) commits with a
full Discourse integration, the new strings show up nicely in the UI.
The previous code incorrectly constructed a Discourse post url based on
the post's id, rather than its index within the containing topic. This
is now fixed.
Test plan: There isn't actually a snapshot diff, because the post with
id 2 is also the second post in its thread. I'm not too worried about
this, though: this kind of code changes infrequently, and it's pretty
obvious when it's wrong.
The Discourse mirror class now keeps an up-to-date record of all of the
likes within an instance. It does this by iterating over every user in
the history, and requesting their likes. If at any point we hit a like
we've already seen, we move on to the next user. In the future, we can
improve this so we only query users we haven't checked in a while, or
users who were recently active.
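The update loop is roughly as follows (a sketch; method names like `likesByUser` and `addLike` mirror the surrounding commits, but the exact signatures are illustrative):

```js
// Sketch: page through each user's likes, newest first, and stop for that
// user as soon as we hit a like that's already in the mirror.
async function updateLikes(mirror, fetcher) {
  for (const username of mirror.users()) {
    let offset = 0;
    for (;;) {
      const likes = await fetcher.likesByUser(username, offset);
      if (likes.length === 0) {
        break; // no more likes for this user
      }
      let sawKnownLike = false;
      for (const like of likes) {
        const {changed} = mirror.addLike(like);
        if (!changed) {
          sawKnownLike = true; // already stored, so older likes are too
          break;
        }
      }
      if (sawKnownLike) {
        break;
      }
      offset += likes.length;
    }
  }
}
```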
Test plan: Tests verify that we correctly store all the likes, including
after partial updates, and that we don't issue unnecessary queries.
This is a minor change to the Discourse mirror so that it supports a
query to get all users from the server. It will be convenient for a
follow-on change which makes `update` search for every user's likes.
I also modified createGraph so that it uses the new method, which
results in code that is cleaner and slightly more efficient.
Test plan: Unit tests updated.
For the Discourse plugin, we really want to be able to add a full record
of all of the users' liked posts as edges in the graph. It's a really
high-signal way to move cred, and it also gives individual users a lot
of agency and a way to engage.
However: we need an API to get this data. Initial searches of the API
docs were unpromising; first, we would need to query potentially every
post to get its likes individually (which makes it very expensive to
find the likes on old posts), and second, the likes did not come with
timestamp information. For a while, I thought we were at an impasse.
I then went fishing in the Discourse implementation for a solution (yay
open source!). Much of the API is undocumented, since it's whatever
they happen to add to run Discourse. And it turns out there's a
`user_actions` API ([source]) which can provide all of a user's actions
in order, and having your content liked by someone else is considered an
action. Best of all, these actions come with timestamps.
The upshot is that instead of querying every post to get its likes, we
can query every user to get likes. Iterating over all users can still
be slow, but it's far better than iterating over all posts; plus we can
implement caching so that we only infrequently check in on inactive
users.
I've added a `likesByUser` method to the Discourse fetch interface that
provides this information. I've also added a snapshot test for it (and
updated all of the snapshots). I also rolled in a slight refactor to
error handling in the fetcher.
The mirror doesn't yet use this information (will come later).
[source]: 82e07cb0f4/app/controllers/user_actions_controller.rb (L3)
Test plan: `yarn test` passes. Snapshots look good.
This commit adds the logic needed for creating a contribution graph
based on the Discourse data. We first have a declaration with
specifications for the node and edge types in the plugin. We also have a
`createGraph` module which creates a conformant graph from the Mirror
data. The graph creation is thoroughly tested.
Test plan: Inspect unit tests, run `yarn test`. I also have (yet
unpublished) code which loads the graph into the UI, and it appears
fine.
This is a quick fixup so that the coming createGraph module can be
properly tested.
Shout out to @Beanow for anticipating this need in a [review comment].
[review comment]: https://github.com/sourcecred/sourcecred/pull/1266#discussion_r314305108
Test plan: Trivial refactor; run `yarn test`.
The mirror wraps a SQLite database which will store all of the data we
download from Discourse.
On a call to `update`, it downloads new data from the server and stores
it. Then, when it is asked for information like the topics and posts, it
can just pull from its local copy. This means that we don't need to
re-download the content every time we load a Discourse instance, which
makes the load more performant, more robust to network failures, etc.
Thanks to @wchargin, whose work on the GraphQL mirror for GitHub (#622)
inspired this mirror.
Test plan: I've written unit tests that use a mock fetcher to validate
the update logic. I've also used this to do a full load of the real
SourceCred Discourse instance, and to create a corresponding graph
(using subsequent commits).
Progress towards #865.
The `DiscourseFetcher` class abstracts over fetching from the Discourse
API, and post-processing and filtering the result into a form that's
convenient for us.
Testing is a bit tricky because the Discourse API keys are sensitive
(they are admin keys) and so I'm reluctant to commit them, even for our
test instance. As a workaround, I've added a shell script which
downloads some data from the SourceCred test instance, and saves it with
a filename which is an encoding of the actual endpoint. Then, in
testing, we can use a mocked fetch which actually hits the snapshots
directory, and thus validate the processing logic on "real" data from
the server. We also test that the fetch headers are set correctly, and
that we handle non-200 error codes appropriately.
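Conceptually, the test fetcher looks something like this (a sketch; the actual snapshot layout and filename encoding may differ):

```js
const fs = require("fs");
const path = require("path");

// Sketch: instead of hitting the network, read a snapshot file whose name
// encodes the endpoint that the shell script originally fetched.
function snapshotFetch(url) {
  const snapshotDir = path.join(__dirname, "snapshots");
  const filename = encodeURIComponent(url);
  const body = fs.readFileSync(path.join(snapshotDir, filename), "utf8");
  return Promise.resolve({
    ok: true,
    status: 200,
    json: async () => JSON.parse(body),
  });
}
```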
Test plan: In addition to the included tests, I have an end-to-end test
which actually uses this fetcher to fully populate the mirror and then
generate a valid SourceCred graph.
This builds on API investigations
[here](https://github.com/sourcecred/sourcecred/issues/865#issuecomment-478026449),
and is general progress towards #865. Thanks to @erlend-sh, without whom
we wouldn't have a test instance.
Summary:
In ES6, the [`try` statement grammar][1] requires a catch parameter; the
parameter is only optional in the latest draft of ECMAScript, which is
of course not yet ratified as any actual standard.
Even though we don’t officially pledge to support Node 8, this is
currently the only breakage, and it’s easy enough to fix.
[1]: https://www.ecma-international.org/ecma-262/6.0/#sec-try-statement
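Concretely, the difference is between these two forms (the function names are placeholders):

```js
// Optional catch binding: valid in newer engines, a syntax error on Node 8.
try {
  doSomething();
} catch {
  recover();
}

// ES6-compatible form used instead: the parameter is required even if unused.
try {
  doSomething();
} catch (_) {
  recover();
}
```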
Test Plan:
Running `yarn start` on Node v8.11.4 no longer raises a syntax error.
wchargin-branch: catch-parameter
Summary:
Introduced in #1277.
Test Plan:
Run `yarn start` and visit <http://localhost:8080/test/FileUploader/>.
Conduct the test plan as specified on that page.
wchargin-branch: fileuploader-target
I'm mostly motivated by wanting to get Greenkeeper lockfile
auto-updating working (see #1269), although this is also a first step
towards making SourceCred usable from npm (#1232).
For now, see this as making sure we claim the `sourcecred` package name
on npm (see: https://www.npmjs.com/package/sourcecred).
I also fixed the license spec so that it's valid SPDX.
Summary:
To elaborate a bit: The repository-level `.gitignore` file is for
artifacts that are generated _by the code/build of that project_. This
includes `node_modules/`, `bin/`, `build/`, etc. These should be
necessary for all users of the project.
The user-level `.gitignore_global` file is for files that _your system_
generates. These are swap files (`.swp`, `.swo`, `.swa` for Vim), file
system metadata (`.DS_Store` for macOS, `Thumbs.db` for Windows), trash
directories, etc.
(See `man gitignore` for details about the two files. Take a look at
[the `.gitignore` for Git itself][git-gitignore] as an example.)
[git-gitignore]: https://github.com/git/git/blob/master/.gitignore
It doesn’t make sense to put the latter category of patterns into the
project’s `.gitignore`. You can’t accommodate every programming
environment under the sun. The file would be hundreds of lines.
By removing these patterns from the `.gitignore`, we help teach users
about how to configure `.gitignore_global` to set up their own
environment properly, once and for all.
This reverts commit 816c954f3d.
Test Plan:
The `.gitignore` now only contains patterns specific to SourceCred.
wchargin-branch: gitignore-project-only