These two notify functions are very similar. There appear to be just
enough differences that parameterizing them away may not improve things.
For now, reduce some of the cosmetic differences so that the material
differences are more obvious.
Use named returns so that the caller has a better idea of what these
bools mean.
Return early to reduce scope and to make it more obvious which values
are returned in which cases. This also reduces the number of conditional
expressions in each case.
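A sketch of the shape, with invented names rather than the real notify internals:

    package cache

    // Illustrative only: named returns document what the two bools mean,
    // and early returns make each case's result visible at its return site.
    func shouldNotify(prev, next uint64, err error) (changed bool, retry bool) {
        if err != nil {
            return false, true
        }
        if next <= prev {
            return false, false
        }
        return true, false
    }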
This change moves all the typeEntry lookups to the first step in the exported methods,
and makes the unexported internals accept the typeEntry struct.
This change is primarily intended to make it easier to extract the container of caches
from the Cache type.
It may incidentally reduce locking in fetch, but that was not a goal.
Fixes #6521
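A minimal sketch of the new shape, with assumed, simplified stand-ins for the real structs:

    package cache

    import "fmt"

    // Assumed, simplified types; the real structs carry more fields.
    type Request interface{}

    type typeEntry struct{ Name string }

    type Cache struct {
        types map[string]typeEntry
    }

    // The exported method resolves the typeEntry exactly once; unexported
    // internals take it as an argument and never re-read the shared map.
    func (c *Cache) Get(t string, r Request) (interface{}, error) {
        tEntry, ok := c.types[t]
        if !ok {
            return nil, fmt.Errorf("unknown type in cache: %s", t)
        }
        return c.getWithIndex(tEntry, r)
    }

    func (c *Cache) getWithIndex(tEntry typeEntry, r Request) (interface{}, error) {
        // ... fetch using tEntry; no further c.types lookups (or locking) here ...
        return nil, nil
    }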
Ensure that initial failures to fetch an agent cache entry using the
notify API, where the underlying RPC returns a synthetic index of 1,
recover correctly once those RPCs resume working.
The bug: Cache.notifyBlockingQuery used to incorrectly "fix" the
index for the next query from 0 to 1 for all queries, when it should
not have done so for queries that errored.
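A hedged sketch of the corrected loop; names approximate notifyBlockingQuery, which also handles delivery, backoff, and cancellation:

    package cache

    type fetchResult struct {
        Index uint64
        Value interface{}
    }

    func notifyLoop(fetch func(minIndex uint64) (fetchResult, error)) {
        var minIndex uint64
        for {
            result, err := fetch(minIndex)

            // Only "fix" a zero index when the fetch succeeded. Doing it
            // unconditionally latched onto the synthetic index of 1 returned
            // by the failing RPC, so the watch never recovered once the RPCs
            // started working again.
            if err == nil {
                if result.Index < 1 {
                    result.Index = 1
                }
                minIndex = result.Index
            }
            // ... deliver result or err to the subscriber; back off on err ...
        }
    }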
Also fixed some things that made debugging difficult:
- config entry read/list endpoints send back QueryMeta headers
- xds event loops don't swallow the cache notification errors
None of these changes should have side effects or change behavior:
- Use bytes.Buffer's String() instead of a conversion
- Use time.Since and time.Until where fitting
- Drop unnecessary returns and assignments
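The mechanical cleanups, in miniature:

    package main

    import (
        "bytes"
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        deadline := start.Add(time.Minute)

        var buf bytes.Buffer
        buf.WriteString("hello")

        s := buf.String()            // instead of string(buf.Bytes())
        age := time.Since(start)     // instead of time.Now().Sub(start)
        wait := time.Until(deadline) // instead of deadline.Sub(time.Now())
        fmt.Println(s, age, wait)
    }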
* Fix race condition during a cache get
Check whether the entry we pulled out of the cache while holding the lock had Fetching set.
If it did, we should use the existing Waiter instead of calling fetch. This is better than
just calling fetch because fetch re-gets the entry from the entries map, and the previous
fetch may have finished by then; reusing the Waiter prevents erroneously starting a new
fetch because we just missed the last update.
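A sketch of this first fix, using assumed, simplified types rather than the actual cache structs:

    package cache

    import "sync"

    type cacheEntry struct {
        Value    interface{}
        Index    uint64
        Fetching bool
        Waiter   chan struct{} // closed when the in-flight fetch completes
    }

    type Cache struct {
        entriesLock sync.RWMutex
        entries     map[string]cacheEntry
    }

    // getFromCache returns a hit, or the Waiter of an in-flight fetch to
    // wait on instead of starting a duplicate fetch.
    func (c *Cache) getFromCache(key string, minIndex uint64) (interface{}, <-chan struct{}, bool) {
        c.entriesLock.RLock()
        defer c.entriesLock.RUnlock()

        entry, ok := c.entries[key]
        switch {
        case ok && entry.Index > minIndex:
            return entry.Value, nil, true // fresh enough: cache hit
        case ok && entry.Fetching:
            // A fetch is already running; reuse its Waiter. Calling fetch
            // instead would re-read entries, and if the running fetch had
            // just finished we would erroneously start a new one.
            return nil, entry.Waiter, false
        }
        return nil, nil, false // caller must call fetch
    }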
* Fix race condition fully
The first commit still allowed for the following scenario:
• No entry existed when checked in getWithIndex while holding the read lock
• By the time we reached fetch, it had been created and the fetch had finished
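Continuing the simplified types from the sketch above, the full fix has fetch re-check under the write lock, so an entry created and completed in that window is noticed:

    // fetch re-checks the entry while holding the write lock.
    func (c *Cache) fetch(key string, minIndex uint64) <-chan struct{} {
        c.entriesLock.Lock()
        defer c.entriesLock.Unlock()

        entry, ok := c.entries[key]
        switch {
        case ok && entry.Fetching:
            return entry.Waiter // join the fetch that is already running
        case ok && entry.Index > minIndex:
            done := make(chan struct{})
            close(done) // already satisfied: the entry appeared while we raced
            return done
        }

        entry.Fetching = true
        entry.Waiter = make(chan struct{})
        c.entries[key] = entry
        // ... start the real fetch in a goroutine; close Waiter when done ...
        return entry.Waiter
    }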
* Always use ok when returning
* Add a comment mentioning the read from the entries map
* Use cacheHit consistently
* Support rate limiting and concurrency limiting of CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in the client (see the sketch after this list)
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs
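As promised above, a hedged sketch of the mechanism. This is not Consul's actual implementation; the limiter value and helper names are invented:

    package ca

    import (
        "errors"
        "math/rand"
        "time"

        "golang.org/x/time/rate"
    )

    var errRateLimited = errors.New("rate limit exceeded")

    // Server side: a token bucket caps signing throughput. Rejecting rather
    // than queueing lets callers back off instead of piling up.
    var csrLimiter = rate.NewLimiter(rate.Limit(50), 50) // illustrative limit

    func signCSR(csr string) (string, error) {
        if !csrLimiter.Allow() {
            return "", errRateLimited
        }
        // ... validate and sign the CSR ...
        return "cert-pem", nil
    }

    // Client side: jittered exponential backoff so a CA rotation doesn't
    // cause every proxy to stampede the servers at once.
    func signWithBackoff(csr string) (string, error) {
        backoff := time.Second
        for {
            cert, err := signCSR(csr)
            if !errors.Is(err, errRateLimited) {
                return cert, err
            }
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
            if backoff < 32*time.Second {
                backoff *= 2
            }
        }
    }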
Fixes #4969
This implements non-blocking request polling at the cache layer, which is currently only used for prepared queries. Additionally, this enables the proxycfg manager to poll prepared queries for use in Envoy proxy upstreams.
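A minimal sketch of such a polling loop, with invented names; the real cache wires this through its type registration:

    package cache

    import (
        "context"
        "time"
    )

    // pollLoop re-fetches on a fixed interval for types that cannot block,
    // notifying the watcher after every fetch, success or failure.
    func pollLoop(ctx context.Context, interval time.Duration, fetch func() error, notify func(error)) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            notify(fetch())
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
            }
        }
    }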
* Add State storage and a LastResult argument to the Cache so that cache.Types can safely store additional data that is eventually expired (see the sketch after this list).
* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the Root change jitter across blocking requests, and tests that concurrent fetches for different leaves interact nicely with rootsWatcher.
* Add multi-client and delayed rotation tests.
* Typos and cleanup error handling in roots watch
* Add a comment about how the FetchResult can be used, and change the CA leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
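The sketch referenced in the first item, with simplified types; the field names are illustrative, and the real leaf type tracks more:

    package cache

    import "time"

    type FetchResult struct {
        Value interface{}
        Index uint64
        State interface{} // type-private data threaded into the next Fetch
    }

    type fetchState struct {
        authorityKeyID string    // ID of the root that signed the cached leaf
        forceExpireAt  time.Time // jittered deadline to re-issue after rotation
    }

    // fetchLeaf sees the previous result's State, so it can detect a root
    // rotation without encoding that bookkeeping into the cached Value.
    func fetchLeaf(last *FetchResult, currentRootKeyID string) FetchResult {
        var state fetchState
        if last != nil {
            state, _ = last.State.(fetchState)
        }
        if state.authorityKeyID != currentRootKeyID {
            // Root rotated: schedule re-issue (with jitter in the real code).
            state.authorityKeyID = currentRootKeyID
        }
        // ... issue or reuse the leaf cert ...
        return FetchResult{State: state}
    }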
* Fixes #4480. Don't leave old errors around in the cache that can be hit in specific circumstances.
* Move the error reset to cover the extreme edge case of a Fetch that returns a nil Value and a nil err
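The shape of the fix, reusing the simplified Cache from the race-condition sketch earlier and assuming cacheEntry gains an Error field:

    // finishFetch records the latest outcome unconditionally: a nil err
    // clears any previous error even when the fetch returned a nil Value,
    // which is the edge case the second commit covers.
    func (c *Cache) finishFetch(key string, result FetchResult, err error) {
        c.entriesLock.Lock()
        defer c.entriesLock.Unlock()

        entry := c.entries[key]
        entry.Error = err
        if result.Value != nil {
            entry.Value = result.Value
            entry.Index = result.Index
        }
        c.entries[key] = entry
    }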
* Add cache types for catalog/services and health/services and basic test that caching works
* Support non-blocking cache types with Cache-Control semantics (see the sketch after this list).
* Update API docs to include caching info for every endpoint.
* Comment updates per PR feedback.
* Add note on caching to the 10,000 foot view on the architecture page to make the new data path more clear.
* Document prepared query staleness quirk and force all background requests to AllowStale so we can spread service discovery load across servers.
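The Cache-Control sketch referenced above: a hedged illustration of how a client's request header could map onto cache options, simplified relative to the real HTTP layer:

    package agent

    import (
        "strconv"
        "strings"
        "time"
    )

    // parseCacheControl extracts the two directives the cache layer cares
    // about from a client's Cache-Control request header.
    func parseCacheControl(h string) (maxAge time.Duration, mustRevalidate bool) {
        for _, part := range strings.Split(h, ",") {
            part = strings.TrimSpace(strings.ToLower(part))
            switch {
            case part == "must-revalidate":
                mustRevalidate = true
            case strings.HasPrefix(part, "max-age="):
                if secs, err := strconv.Atoi(strings.TrimPrefix(part, "max-age=")); err == nil {
                    maxAge = time.Duration(secs) * time.Second
                }
            }
        }
        return maxAge, mustRevalidate
    }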
In a real agent the `cache` instance is alive until the agent shuts down, so this is not a real leak in production. In our test suite, however, every TestAgent that is started and stopped leaks goroutines that never get cleaned up; they accumulate, consuming CPU and memory through subsequent tests in the `agent` package, which doesn't help our test flakiness.
This adds a Close method that doesn't invalidate or clean up the cache, and still allows concurrent blocking queries to run (for up to 10 minutes, which might still affect tests). But at least it doesn't maintain them forever with background refresh and an expiry watcher routine.
It would be nice to cancel any outstanding blocking requests as well when we close but that requires much more invasive surgery right into our RPC protocol since we don't have a way to cancel requests currently.
Unscientifically this seems to make tests pass a bit quicker and more reliably locally but I can't really be sure of that!
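A sketch of the shape of such a Close method, assuming (invented here) that the Cache carries a stop channel its background goroutines select on:

    package cache

    import "sync"

    // Minimal assumed struct; the real Cache holds much more.
    type Cache struct {
        stopCh   chan struct{} // closed by Close; background goroutines select on it
        stopOnce sync.Once     // makes Close safe to call more than once
    }

    // Close stops the expiry watcher and background refresh loops. It does
    // not invalidate entries, and in-flight blocking queries keep running
    // until they return on their own.
    func (c *Cache) Close() error {
        c.stopOnce.Do(func() { close(c.stopCh) })
        return nil
    }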
* Fix theoretical cache collision bug if/when we use more cache types with the same result type
* Generalized fix for blocking query handling when state store methods return a zero index (see the sketch at the end of this section)
* Refactor test retry to only affect CI
* Undo make file merge
* Add hint to error message returned to end-user requests if Connect is not enabled when they try to request cert
* Explicit error for Roots endpoint if connect is disabled
* Fix tests that were asserting old behaviour
A few other fixes are in here just to get a clean run locally; they are all also fixed in other PRs but shouldn't conflict.
This should be robust to timing between goroutines now.
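The generalized zero-index fix mentioned in the list above, as a sketch; the function name is invented:

    package consul

    // clampIndex guards the shared blocking-query path: Raft indexes start
    // at 1, so a zero index from a state store method would make the next
    // blocking query pass minIndex=0 and return immediately in a hot loop.
    func clampIndex(index uint64) uint64 {
        if index < 1 {
            return 1
        }
        return index
    }

Clamping once in the shared path means individual state store methods don't each need to remember this invariant.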