sample-controller/go.mod
Mike Danese 326f8f2c88 Add an expiring cache for the caching token authenticator
And maybe the webhook authorizer cache.

This cache has two primary advantages over the LRU cache used currently:

- Cache hits don't acquire an exclusive lock (see the sketch below for
  why an LRU hit needs one).
- More importantly, performance doesn't fall over when the access pattern
  scans a key space larger than an arbitrary size (e.g. the LRU
  capacity).
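
To make the first point concrete, here is a minimal, hypothetical LRU
sketch (not the cache this change touches) showing why even a hit takes
an exclusive lock: the hit has to move the entry to the front of the
recency list, which mutates shared state.

```
package lru

import (
	"container/list"
	"sync"
)

// LRU is a minimal, hypothetical sketch (eviction on Set omitted) used
// only to show why a cache hit needs an exclusive lock: the hit mutates
// the shared recency list.
type LRU struct {
	mu    sync.Mutex
	order *list.List               // front = most recently used
	items map[string]*list.Element // element values are *entry
}

type entry struct {
	key   string
	value interface{}
}

func New() *LRU {
	return &LRU{order: list.New(), items: map[string]*list.Element{}}
}

func (c *LRU) Get(key string) (interface{}, bool) {
	c.mu.Lock() // exclusive, even though this is "just a read"
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el) // the mutation that forces the write lock
	return el.Value.(*entry).value, true
}
```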

The downside of using an expiring cache here is that it doesn't have a
maximum size, so it's susceptible to DoS when the input is user
controlled. That isn't the case for successful authentications, and
successful authentications have a natural expiry, so it might be a good
fit here.

It has a few differences (sketched after this list) compared to:

3d7318f29d/staging/src/k8s.io/client-go/tools/cache/expiration_cache.go

- Expiration is not entirely lazy so keys that are never accessed again
  are still released from the cache.
- It does not acquire an exclusive lock on cache hits.
- It supports per-entry TTLs specified on Set.
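
Roughly, the shape described by those differences, as an illustrative
sketch (names and details are hypothetical, not the implementation
being added here):

```
package expiring

import (
	"sync"
	"time"
)

type entry struct {
	value     interface{}
	expiresAt time.Time
}

// Cache is a hypothetical illustration: Gets take only a read lock,
// Sets carry a per-entry TTL, and a background sweep releases keys
// that are never read again.
type Cache struct {
	mu    sync.RWMutex
	items map[string]entry
}

func NewCache(sweepInterval time.Duration) *Cache {
	c := &Cache{items: map[string]entry{}}
	// Non-lazy expiration: periodically drop expired entries even if
	// they are never accessed again. The sweeper runs for the lifetime
	// of the process in this sketch.
	go func() {
		for range time.Tick(sweepInterval) {
			now := time.Now()
			c.mu.Lock()
			for k, e := range c.items {
				if now.After(e.expiresAt) {
					delete(c.items, k)
				}
			}
			c.mu.Unlock()
		}
	}()
	return c
}

// Get only takes the read lock, so concurrent hits don't serialize on
// an exclusive lock.
func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	e, ok := c.items[key]
	c.mu.RUnlock()
	if !ok || time.Now().After(e.expiresAt) {
		return nil, false
	}
	return e.value, true
}

// Set stores the value with its own TTL.
func (c *Cache) Set(key string, value interface{}, ttl time.Duration) {
	c.mu.Lock()
	c.items[key] = entry{value: value, expiresAt: time.Now().Add(ttl)}
	c.mu.Unlock()
}
```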

The expiring cache (without striping) lands somewhere in between the
simple cache and the striped cache in a very contrived contention test
where every iteration acquires a write lock:

```
$ benchstat simple.log expiring.log
name      old time/op    new time/op    delta
Cache-12    2.74µs ± 2%    2.02µs ± 3%  -26.37%  (p=0.000 n=9+9)
name      old alloc/op   new alloc/op   delta
Cache-12      182B ± 0%      107B ± 4%  -41.21%  (p=0.000 n=8+9)
name      old allocs/op  new allocs/op  delta
Cache-12      5.00 ± 0%      2.00 ± 0%  -60.00%  (p=0.000 n=10+10)

$ benchstat striped.log expiring.log
name      old time/op    new time/op    delta
Cache-12    1.58µs ± 5%    2.02µs ± 3%  +27.34%  (p=0.000 n=10+9)
name      old alloc/op   new alloc/op   delta
Cache-12      288B ± 0%      107B ± 4%  -62.85%  (p=0.000 n=10+9)
name      old allocs/op  new allocs/op  delta
Cache-12      9.00 ± 0%      2.00 ± 0%  -77.78%  (p=0.000 n=10+10)

$ benchstat simple.log striped.log expiring.log
name \ time/op    simple.log   striped.log  expiring.log
Cache-12          2.74µs ± 2%  1.58µs ± 5%   2.02µs ± 3%
name \ alloc/op   simple.log   striped.log  expiring.log
Cache-12            182B ± 0%    288B ± 0%     107B ± 4%
name \ allocs/op  simple.log   striped.log  expiring.log
Cache-12            5.00 ± 0%    9.00 ± 0%     2.00 ± 0%
```
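
For reference, that contrived contention test looks roughly like the
following sketch (built on the illustrative Cache above, not the
benchmark that produced the numbers): every iteration calls Set, so
every operation takes the write lock.

```
package expiring

import (
	"strconv"
	"sync/atomic"
	"testing"
	"time"
)

// BenchmarkCache exercises the write path on every iteration, so every
// operation contends for the cache's exclusive lock.
func BenchmarkCache(b *testing.B) {
	c := NewCache(time.Second)
	var n int64
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			i := atomic.AddInt64(&n, 1)
			c.Set("key-"+strconv.FormatInt(i, 10), i, time.Minute)
		}
	})
}
```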

I also naively replaced the LRU cache with the expiring cache in the
more realistic CachedTokenAuthenticator benchmarks:

https://gist.github.com/mikedanese/41192b6eb62106c0758a4f4885bdad53

For token counts that fit in the LRU, the expiring cache does better
because it does not acquire an exclusive lock for cache hits.

For token counts that exceed the size of the LRU, the LRU has a massive
performance drop-off and is around 5x slower (with lookups taking 1
millisecond and throttled to a max of 40 lookups in flight).

```
$ benchstat before.log after.log
name                                                  old time/op    new time/op    delta
CachedTokenAuthenticator/tokens=100_threads=256-12      3.60µs ±22%    1.08µs ± 4%  -69.91%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=500_threads=256-12      3.94µs ±19%    1.20µs ± 3%  -69.57%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12     3.07µs ± 6%    1.17µs ± 1%  -61.87%  (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12    3.16µs ±17%    1.38µs ± 1%  -56.23%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12    15.0µs ± 1%     2.9µs ± 3%  -80.71%  (p=0.000 n=10+10)

name                                                  old alloc/op   new alloc/op   delta
CachedTokenAuthenticator/tokens=100_threads=256-12        337B ± 1%      300B ± 0%  -11.06%  (p=0.000 n=10+8)
CachedTokenAuthenticator/tokens=500_threads=256-12        307B ± 1%      304B ± 0%   -0.96%  (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12       337B ± 1%      304B ± 0%   -9.79%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12      343B ± 1%      276B ± 0%  -19.58%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12      493B ± 0%      334B ± 0%  -32.12%  (p=0.000 n=10+10)

name                                                  old allocs/op  new allocs/op  delta
CachedTokenAuthenticator/tokens=100_threads=256-12        13.0 ± 0%      11.0 ± 0%  -15.38%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=500_threads=256-12        12.0 ± 0%      11.0 ± 0%   -8.33%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=2500_threads=256-12       13.0 ± 0%      11.0 ± 0%  -15.38%  (p=0.000 n=10+10)
CachedTokenAuthenticator/tokens=12500_threads=256-12      13.0 ± 0%      10.0 ± 0%  -23.08%  (p=0.000 n=9+10)
CachedTokenAuthenticator/tokens=62500_threads=256-12      17.0 ± 0%      12.0 ± 0%  -29.41%  (p=0.000 n=10+10)
```
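
For context, the slow path those benchmarks simulate can be sketched
like this (a hypothetical stand-in, not the real CachedTokenAuthenticator
code): lookups cost about a millisecond and at most 40 run in flight, so
once the working set exceeds the LRU capacity, misses dominate.

```
package tokencache

import (
	"context"
	"time"
)

// slowTokenLookup is a hypothetical stand-in for the backend behind the
// token cache in those benchmarks: each lookup costs about a
// millisecond and at most maxInFlight lookups run concurrently.
type slowTokenLookup struct {
	inflight chan struct{} // semaphore bounding concurrent lookups
}

func newSlowTokenLookup(maxInFlight int) *slowTokenLookup {
	return &slowTokenLookup{inflight: make(chan struct{}, maxInFlight)}
}

// Lookup simulates a throttled ~1ms remote token review.
func (s *slowTokenLookup) Lookup(ctx context.Context, token string) (string, error) {
	select {
	case s.inflight <- struct{}{}:
	case <-ctx.Done():
		return "", ctx.Err()
	}
	defer func() { <-s.inflight }()

	time.Sleep(time.Millisecond) // simulated lookup latency
	return "user-for-" + token, nil
}
```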

Benchmarked with changes in #84423

Bugs: #83259 #83375

Kubernetes-commit: 9167711fd18511ffc9c90ee306c462be9fc7999b
2019-10-26 12:19:07 -07:00

// This is a generated file. Do not edit directly.

module k8s.io/sample-controller

go 1.12

require (
	k8s.io/api v0.0.0
	k8s.io/apimachinery v0.0.0
	k8s.io/client-go v0.0.0
	k8s.io/code-generator v0.0.0
	k8s.io/klog v1.0.0
)

replace (
	golang.org/x/sys => golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a
	golang.org/x/tools => golang.org/x/tools v0.0.0-20190821162956-65e3620a7ae7
	k8s.io/api => ../api
	k8s.io/apimachinery => ../apimachinery
	k8s.io/client-go => ../client-go
	k8s.io/code-generator => ../code-generator
	k8s.io/sample-controller => ../sample-controller
)