We don't track insertions and deletions for tables that are merely linked to by
the tables actually being observed (for performance reasons, since we don't
need that information), but the check for this was missing in one place.
Ordinarily that would cause only a slowdown rather than a crash, but
deletions.add_shifted() can overflow size_t if the passed-in index represents a
newly inserted row and the check guarding against that did not work because
insertions were not being tracked for the table.
The only remotely realistic way to actually overflow size_t is to have
previously cleared the table: the table-clear instruction does not include the
table's old size, so the clear is recorded by simply marking {0, SIZE_T_MAX} as
deleted.
Fixes #3537.
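A minimal model of the overflow, assuming a simplified index set; SimpleIndexSet
and its add_shifted() are illustrative stand-ins for the real IndexSet, and
SIZE_MAX here plays the role of SIZE_T_MAX above:

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Minimal model of an index set of sorted, disjoint [begin, end) ranges.
    // add_shifted() takes an index in post-deletion space and shifts it past
    // each earlier deleted range to recover the pre-deletion index.
    struct SimpleIndexSet {
        std::vector<std::pair<size_t, size_t>> ranges;

        void add_shifted(size_t index)
        {
            for (auto const& [begin, end] : ranges) {
                if (begin > index)
                    break;
                // After a table clear, ranges holds {0, SIZE_MAX}, so this
                // addition wraps size_t if `index` is a newly inserted row
                // that the (non-working) insertion check failed to exclude.
                index += end - begin;
            }
            ranges.emplace_back(index, index + 1); // real impl keeps ranges sorted/merged
        }
    };

    // State after a table clear, then the overflowing call:
    //   SimpleIndexSet deletions{{{0, SIZE_MAX}}};
    //   deletions.add_shifted(5); // 5 + (SIZE_MAX - 0) wraps around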
Skip the change check entirely if none of the tables reachable from the root
table have been modified, which can happen when the table version was bumped
due to insertions, unrelated backlink updates, or the deletion of rows in
linked tables that nothing links to.
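A sketch of the early-out, assuming per-table modification info has already
been extracted from the transaction log; needs_deep_check and the container
types are hypothetical:

    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    using TableKey = size_t;
    using RowSet = std::vector<size_t>; // modified row indices for one table

    // If no reachable table has any tracked modification, the deep per-row
    // check can be skipped entirely, even when table version counters were
    // bumped for reasons that cannot affect the observed objects.
    bool needs_deep_check(std::vector<TableKey> const& reachable_tables,
                          std::unordered_map<TableKey, RowSet> const& modified_rows)
    {
        for (TableKey table : reachable_tables) {
            auto it = modified_rows.find(table);
            if (it != modified_rows.end() && !it->second.empty())
                return true;
        }
        return false;
    }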
Add cycle checking rather than relying on the maximum traversal depth to handle
cycles, as the worst case was O(N^16) when a cycle involved a LinkList of size
N.
Track which rows have been confirmed to be unmodified so that repeated
traversals can skip them (illustrated together with the cycle check in the
sketch below).
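A condensed sketch of both optimizations, assuming hypothetical helpers
row_is_directly_modified and for_each_linked_row; the real checker is more
careful about when a negative result may be cached:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <unordered_set>
    #include <utility>

    struct DeepChecker {
        static constexpr size_t max_depth = 16;
        using RowRef = std::pair<size_t, size_t>; // (table index, row index)

        // Stand-ins for the real traversal helpers.
        std::function<bool(RowRef)> row_is_directly_modified;
        std::function<void(RowRef, std::function<void(RowRef)> const&)> for_each_linked_row;

        std::array<RowRef, max_depth> m_current_path;
        std::unordered_set<uint64_t> m_not_modified; // rows confirmed unmodified

        bool row_did_change(RowRef row, size_t depth = 0)
        {
            // Key packing assumes 32-bit indices; fine for a sketch.
            uint64_t key = (uint64_t(row.first) << 32) | uint64_t(row.second);
            if (m_not_modified.count(key))
                return false;

            // Cycle check: revisiting a row already on the current path can
            // never produce new information, so cut the recursion off here
            // instead of looping until max_depth is hit (the old worst case
            // was O(N^16) for a cycle through a LinkList of size N).
            for (size_t i = 0; i < depth; ++i)
                if (m_current_path[i] == row)
                    return false;

            if (depth >= max_depth)
                return false;
            if (row_is_directly_modified(row))
                return true;

            m_current_path[depth] = row;
            bool changed = false;
            for_each_linked_row(row, [&](RowRef dst) {
                changed = changed || row_did_change(dst, depth + 1);
            });
            if (!changed)
                m_not_modified.insert(key); // skip this row in later traversals
            return changed;
        }
    };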
Cache the link information for each of the relevant tables, as repeatedly
inspecting the table schema can get somewhat expensive.
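A sketch of what the cached per-table link information could look like; all
names here are illustrative rather than the actual types:

    #include <cstddef>
    #include <vector>

    // Computed once per notification run from the schema, then reused for
    // every row visited by the deep change check, instead of querying
    // column counts, column types, and link targets for each row.
    struct OutgoingLink {
        size_t origin_col;   // column in the origin table holding the link
        size_t target_table; // table the link points into
        bool is_list;        // Link vs. LinkList column
    };

    struct RelatedTable {
        size_t table_ndx;
        std::vector<OutgoingLink> links; // all link columns in this table
    };

    // Hypothetical usage:
    //   std::vector<RelatedTable> related = build_related_tables(root_table);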
If there are multiple Realm instances for a single file on a single thread
(possible when caching is disabled), we need to deliver the results to the
appropriate SharedGroup for each notifier rather than delivering them all to
the first one.
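A hypothetical sketch of the routing; Notifier and SharedGroup are stand-ins
for the real types:

    #include <unordered_map>
    #include <vector>

    struct SharedGroup {};
    struct Notifier {
        SharedGroup* owning_group;
        void deliver_to(SharedGroup&) { /* hand results over via this group */ }
    };

    void deliver_all(std::vector<Notifier*> const& notifiers)
    {
        // Group the notifiers by the SharedGroup each is actually attached
        // to, instead of handing everything to whichever instance is
        // processed first.
        std::unordered_map<SharedGroup*, std::vector<Notifier*>> by_group;
        for (Notifier* n : notifiers)
            by_group[n->owning_group].push_back(n);

        for (auto& [group, group_notifiers] : by_group)
            for (Notifier* n : group_notifiers)
                n->deliver_to(*group);
    }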
Even if the new TableView has the same rows as the old one, we still need to
hand it over to the destination thread so that the outside version of the
destination thread's TV is bumped; otherwise the query would be rerun there.
Switch to a chunked vector-of-vectors, which makes mid-insertions into large
sets much faster, and cache the begin/end/count of each chunk to make lookups
much more cache-friendly.
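A sketch of the chunked structure, simplified to store individual sorted values
rather than ranges (so the cached count is just the chunk's length); the names
and chunk size are illustrative:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct ChunkedVector {
        struct Chunk {
            std::vector<size_t> data; // sorted values
            size_t begin;             // cached data.front()
            size_t end;               // cached data.back()
        };
        static constexpr size_t max_chunk_size = 1024;
        std::vector<Chunk> chunks;

        void insert(size_t value)
        {
            // Locate the target chunk using only the cached bounds, so the
            // search never touches the chunk bodies (cache-friendly).
            auto it = std::lower_bound(chunks.begin(), chunks.end(), value,
                                       [](Chunk const& c, size_t v) { return c.end < v; });
            if (it == chunks.end()) {
                chunks.push_back({{value}, value, value});
                return;
            }
            auto& data = it->data;
            data.insert(std::lower_bound(data.begin(), data.end(), value), value);
            it->begin = data.front();
            it->end = data.back();

            // Split oversized chunks so a mid-insertion moves at most
            // max_chunk_size elements rather than the whole set.
            if (data.size() > max_chunk_size) {
                Chunk second;
                second.data.assign(data.begin() + data.size() / 2, data.end());
                second.begin = second.data.front();
                second.end = second.data.back();
                data.resize(data.size() / 2);
                it->end = data.back();
                chunks.insert(it + 1, std::move(second));
            }
        }

        bool contains(size_t value) const
        {
            auto it = std::lower_bound(chunks.begin(), chunks.end(), value,
                                       [](Chunk const& c, size_t v) { return c.end < v; });
            return it != chunks.end() && value >= it->begin &&
                   std::binary_search(it->data.begin(), it->data.end(), value);
        }
    };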