This patch removes pkg/util/mount completely, and replaces it with the
mount package now located at k8s.io/utils/mount. The code found at
k8s.io/utils/mount was moved there from pkg/util/mount, so the code is
identical; it is just no longer in-tree in k/k.
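For downstream consumers the change is effectively an import path swap. A
minimal sketch of what usage looks like against the new location (assuming
the mount.New constructor and Interface.List method, which this patch does
not change):
```go
package main

import (
	"fmt"

	// Previously: "k8s.io/kubernetes/pkg/util/mount"
	"k8s.io/utils/mount"
)

func main() {
	// mount.New returns the default Mounter for the platform; the empty
	// string means "use the default mount binary path".
	mounter := mount.New("")

	pts, err := mounter.List()
	if err != nil {
		fmt.Println("listing mount points:", err)
		return
	}
	fmt.Println("mount points found:", len(pts))
}
```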
Kubernetes-commit: 0c5c3d8bb97d18a2a25977e92b3f7a49074c2ecb
It might also be a fit for the webhook authorizer cache.
This cache has two primary advantages over the LRU cache used currently:
- Cache hits don't acquire an exclusive lock.
- More importantly, performance doesn't fall over when the access pattern
scans a key space larger than an arbitrary size (e.g. the LRU
capacity).
The downside of using an expiring cache here is that it doesn't have a
maximum size, so it's susceptible to DoS when the input is user
controlled. That is not the case for successful authentications, and
successful authentications have a natural expiry, so it might be a good
fit here.
It has a few differences (see the sketch after the list below) compared to:
3d7318f29d/staging/src/k8s.io/client-go/tools/cache/expiration_cache.go
- Expiration is not entirely lazy so keys that are never accessed again
are still released from the cache.
- It does not acquire an exclusive lock on cache hits.
- It supports per entry ttls specified on Set.
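A minimal sketch of the shape of such a cache (illustrative only; names and
fields are assumptions, not the actual implementation): Set takes a
per-entry TTL, Get takes only a read lock, and a background sweep releases
keys that are never accessed again.
```go
package expiring

import (
	"sync"
	"time"
)

type entry struct {
	value  interface{}
	expiry time.Time
}

// Cache is an illustrative expiring cache: read-locked Gets, per-entry
// TTLs on Set, and a periodic sweep so unused keys are still released.
type Cache struct {
	mu      sync.RWMutex
	entries map[string]entry
}

// New starts a background sweep that removes expired entries even if
// they are never read again.
func New(sweepInterval time.Duration) *Cache {
	c := &Cache{entries: map[string]entry{}}
	go func() {
		for range time.Tick(sweepInterval) {
			now := time.Now()
			c.mu.Lock()
			for k, e := range c.entries {
				if now.After(e.expiry) {
					delete(c.entries, k)
				}
			}
			c.mu.Unlock()
		}
	}()
	return c
}

// Get never takes the exclusive lock; expired entries are treated as misses.
func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	e, ok := c.entries[key]
	c.mu.RUnlock()
	if !ok || time.Now().After(e.expiry) {
		return nil, false
	}
	return e.value, true
}

// Set stores the value with a per-entry TTL.
func (c *Cache) Set(key string, value interface{}, ttl time.Duration) {
	c.mu.Lock()
	c.entries[key] = entry{value: value, expiry: time.Now().Add(ttl)}
	c.mu.Unlock()
}
```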
The expiring cache (without striping) lands somewhere in between the
simple cache and the striped cache in the very contrived contention test
where every iteration acquires a write lock:
```
$ benchstat simple.log expiring.log
name old time/op new time/op delta
Cache-12 2.74µs ± 2% 2.02µs ± 3% -26.37% (p=0.000 n=9+9)
name old alloc/op new alloc/op delta
Cache-12 182B ± 0% 107B ± 4% -41.21% (p=0.000 n=8+9)
name old allocs/op new allocs/op delta
Cache-12 5.00 ± 0% 2.00 ± 0% -60.00% (p=0.000 n=10+10)
$ benchstat striped.log expiring.log
name old time/op new time/op delta
Cache-12 1.58µs ± 5% 2.02µs ± 3% +27.34% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
Cache-12 288B ± 0% 107B ± 4% -62.85% (p=0.000 n=10+9)
name old allocs/op new allocs/op delta
Cache-12 9.00 ± 0% 2.00 ± 0% -77.78% (p=0.000 n=10+10)
$ benchstat simple.log striped.log expiring.log
name \ time/op simple.log striped.log expiring.log
Cache-12 2.74µs ± 2% 1.58µs ± 5% 2.02µs ± 3%
name \ alloc/op simple.log striped.log expiring.log
Cache-12 182B ± 0% 288B ± 0% 107B ± 4%
name \ allocs/op simple.log striped.log expiring.log
Cache-12 5.00 ± 0% 9.00 ± 0% 2.00 ± 0%
```
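The contrived access pattern is roughly this shape (a hypothetical
benchmark sketch reusing the Cache type from the sketch above, not the
actual benchmark code):
```go
package expiring

import (
	"strconv"
	"testing"
	"time"
)

// BenchmarkCache exercises the contrived worst case described above:
// every iteration performs a Set, so all goroutines contend on the
// cache's write path.
func BenchmarkCache(b *testing.B) {
	cache := New(time.Minute) // swap in the implementation under test

	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			key := strconv.Itoa(i % 1000)
			cache.Set(key, struct{}{}, time.Minute) // write lock every iteration
			cache.Get(key)
			i++
		}
	})
}
```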
I also naively replaced the LRU cache with the expiring cache in the
more realistic CachedTokenAuthenticator benchmarks:
https://gist.github.com/mikedanese/41192b6eb62106c0758a4f4885bdad53
For token counts that fit in the LRU, the expiring cache does better
because it does not require acquiring an exclusive lock for cache hits.
For token counts that exceed the size of the LRU, the LRU has a massive
performance drop-off: it is around 5x slower (with lookups taking
1 millisecond and throttled to at most 40 lookups in flight).
```
$ benchstat before.log after.log
name old time/op new time/op delta
CachedTokenAuthenticator/tokens=100_threads=256-12 3.60µs ±22% 1.08µs ± 4% -69.91% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=500_threads=256-12 3.94µs ±19% 1.20µs ± 3% -69.57% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12 3.07µs ± 6% 1.17µs ± 1% -61.87% (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12 3.16µs ±17% 1.38µs ± 1% -56.23% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12 15.0µs ± 1% 2.9µs ± 3% -80.71% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
CachedTokenAuthenticator/tokens=100_threads=256-12 337B ± 1% 300B ± 0% -11.06% (p=0.000 n=10+8)
CachedTokenAuthenticator/tokens=500_threads=256-12 307B ± 1% 304B ± 0% -0.96% (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12 337B ± 1% 304B ± 0% -9.79% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12 343B ± 1% 276B ± 0% -19.58% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12 493B ± 0% 334B ± 0% -32.12% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
CachedTokenAuthenticator/tokens=100_threads=256-12 13.0 ± 0% 11.0 ± 0% -15.38% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=500_threads=256-12 12.0 ± 0% 11.0 ± 0% -8.33% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12 13.0 ± 0% 11.0 ± 0% -15.38% (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12 13.0 ± 0% 10.0 ± 0% -23.08% (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12 17.0 ± 0% 12.0 ± 0% -29.41% (p=0.000 n=10+10)
```
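The "1 millisecond lookups, at most 40 in flight" setup can be simulated
with a fake backing authenticator along these lines (a sketch only; the
type and names are made up and not taken from the linked gist):
```go
package bench

import (
	"context"
	"time"
)

// slowAuthenticator simulates the backing token authenticator used in the
// benchmark: each lookup takes ~1ms and at most 40 lookups run concurrently.
type slowAuthenticator struct {
	inflight chan struct{} // capacity 40: throttles concurrent lookups
}

func newSlowAuthenticator() *slowAuthenticator {
	return &slowAuthenticator{inflight: make(chan struct{}, 40)}
}

func (a *slowAuthenticator) AuthenticateToken(ctx context.Context, token string) (bool, error) {
	a.inflight <- struct{}{}        // block if 40 lookups are already in flight
	defer func() { <-a.inflight }() // release the slot
	time.Sleep(1 * time.Millisecond)
	return true, nil
}
```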
Benchmarked with changes in #84423
Bugs: #83259, #83375
Kubernetes-commit: 9167711fd18511ffc9c90ee306c462be9fc7999b
Ginkgo 1.10.0 includes the relevant fix for dumping the full stack
(https://github.com/onsi/ginkgo/pull/590), so when using that release
we can simplify the logging unit test.
By changing the skipping, we can avoid the rather volatile util.go
entries. However, the fact that gomega is part of the stack trace still
needs to be fixed in Ginkgo.
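As an illustration of the kind of normalization such a test relies on
(not the actual change), volatile frames and line numbers can be filtered
out of a captured stack trace before comparing it against the expected
output:
```go
package logging

import (
	"regexp"
	"strings"
)

// normalizeStack drops frames that change with unrelated edits (helpers
// such as util.go, gomega internals) and masks line numbers so the unit
// test can compare against a stable expected value. Illustrative only.
func normalizeStack(stack string) string {
	lineRE := regexp.MustCompile(`:\d+`)
	var kept []string
	for _, line := range strings.Split(stack, "\n") {
		if strings.Contains(line, "util.go") || strings.Contains(line, "gomega") {
			continue // skip the volatile frames
		}
		kept = append(kept, lineRE.ReplaceAllString(line, ":###"))
	}
	return strings.Join(kept, "\n")
}
```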
Kubernetes-commit: 02ce619078b1a75e9fa258e101f81af899719e8e
47ffc4e Add test case for detecting data race
959d342 Prevent data race in SetOutput* methods
34123a4 Test with golang 1.12.x
bf4884f Fix the log duplication issue for --log-file
dc5546c Backfill integration tests for selecting log destinations
baef93d fix broken links
07b218b Add go modules files
b33ae69 Add flag to include file's dir in logs
7c58910 correct documentation
a4033db Code Hygene - glog to klog
941d47b Revert "Fix the log duplication issue for --log-file."
314f6c4 Update godoc for the default value of logtostderr
7eb35e2 Fix the log duplication issue for --log-file.
Kubernetes-commit: 9a2de95601641aa1077734c76fc24ebe7b6026db