Skip doing any checking at all if none of the tables reachable from the root
table have been modified. This can happen when a table's version was bumped
only due to insertions, changes to unrelated backlinks, or deletion of rows in
linked tables which are not linked to.
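A minimal sketch of the early-out, assuming the change calculation already knows which tables are reachable from the root table and which tables actually had rows modified (the names below are illustrative, not the actual API):

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_set>
#include <vector>

// Returns true if any table reachable from the root table is in the set of
// tables that actually had rows modified (as opposed to merely having their
// version bumped). If this returns false the deep check can be skipped.
bool any_reachable_table_modified(std::vector<size_t> const& reachable_tables,
                                  std::unordered_set<size_t> const& modified_tables)
{
    return std::any_of(reachable_tables.begin(), reachable_tables.end(),
                       [&](size_t table_ndx) { return modified_tables.count(table_ndx) != 0; });
}
```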
Add explicit cycle checking rather than relying on the maximum traversal depth
to break cycles, as the worst case was O(N^16) when the cycle involved a
LinkList of size N.
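A sketch of the cycle check using placeholder types rather than the real ones: tables already on the current traversal path are skipped instead of being recursed into until the depth limit is reached.

```cpp
#include <vector>

struct TableNode {
    bool dirty = false;                    // this table had relevant modifications
    std::vector<TableNode*> linked_tables; // tables reachable via link/LinkList columns
};

// Depth-first walk over the link graph which refuses to re-enter a table that
// is already on the current path, so cycles no longer multiply the work done.
bool check_table(TableNode& table, std::vector<TableNode*>& path)
{
    for (auto* visited : path) {
        if (visited == &table)
            return false; // already being checked higher up the stack
    }
    if (table.dirty)
        return true;

    path.push_back(&table);
    bool modified = false;
    for (auto* target : table.linked_tables) {
        if (check_table(*target, path)) {
            modified = true;
            break;
        }
    }
    path.pop_back();
    return modified;
}
```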
Track which rows have already been confirmed to be unmodified so that they are
not checked again.
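For example, the checker could keep a per-table set of rows which an earlier traversal already proved to be unmodified (the structure below is an assumption, not the actual bookkeeping):

```cpp
#include <cstddef>
#include <unordered_map>
#include <unordered_set>

// Rows which have already been fully checked and found to be unmodified.
// Rows reachable over multiple link paths are then only checked once.
struct ConfirmedUnmodified {
    std::unordered_map<size_t, std::unordered_set<size_t>> rows; // table -> rows

    bool contains(size_t table_ndx, size_t row_ndx) const
    {
        auto it = rows.find(table_ndx);
        return it != rows.end() && it->second.count(row_ndx) != 0;
    }

    void add(size_t table_ndx, size_t row_ndx)
    {
        rows[table_ndx].insert(row_ndx);
    }
};
```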
Cache the link information for each of the relevant tables, as reading it from
the table schema each time can get somewhat expensive.
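The cached data could look roughly like the following (field names are assumptions): the outgoing link columns and their target tables are gathered once per change calculation instead of being read from the schema for every row visited.

```cpp
#include <cstddef>
#include <vector>

// Per-table link metadata gathered once up front.
struct OutgoingLink {
    size_t col_ndx;          // column holding the Link or LinkList
    size_t target_table_ndx; // table the column links into
    bool is_list;            // LinkList rather than a single Link
};

struct RelatedTable {
    size_t table_ndx;                // table these links originate from
    std::vector<OutgoingLink> links; // built by scanning the schema once
};

// The row check then iterates a std::vector<RelatedTable> rather than asking
// the table schema for column types and link targets on every visit.
```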
Switch to a chunked vector-of-vectors, which makes insertions in the middle of
large sets much faster, and cache the begin/end/count of each chunk to make
lookups much more cache-friendly.