Compare commits

...

60 Commits

Author SHA1 Message Date
Kubernetes Publisher
e3c0944ed2 Merge pull request #83436 from liggitt/automated-cherry-pick-of-#83261-upstream-release-1.13-1570075716
[1.13] Automated cherry pick of #83261: bump gopkg.in/yaml.v2 v2.2.4

Kubernetes-commit: 17c28f0e1c6733b02a62471c813b262df7681789
2019-10-04 10:48:56 +00:00
Jordan Liggitt
1fbb8791a9 bump gopkg.in/yaml.v2 v2.2.4
Kubernetes-commit: f39333c75ea93afb4f43f7a1d8c15dbcb7f41410
2019-10-02 14:46:08 -04:00
Kubernetes Publisher
f9f3619e62 Merge pull request #81546 from cblecker/1.13/go-1.11.13
Update golang/x/net dependency on release-1.13

Kubernetes-commit: 37d169313237cb4ceb2cc4bef300f2ae3053c1a2
2019-08-17 22:37:12 +00:00
Christoph Blecker
1f880e428b Update golang.org/x/net to b1cc14a
Kubernetes-commit: e795732a4c0f077bf9d0bd8fbf41992e390bbed5
2019-08-17 00:04:10 -07:00
Kubernetes Publisher
4c5e0077a4 sync: update godeps 2019-07-04 10:35:20 +00:00
Kubernetes Publisher
f1e98070f3 Merge pull request #79501 from nikhita/remove-bitbucket-01
[1.13] Replace bitbucket with github to fix godep error

Kubernetes-commit: bd6da4fe2b07f7681802f28de264ee7eda5cef5d
2019-06-29 00:55:59 +00:00
Nikhita Raghunath
717a3ec7b3 Replace bitbucket with github
This commit has the following changes:

- Replace `bitbucket.org/ww/goautoneg` with `github.com/munnerz/goautoneg`.
- Replace `bitbucket.org/bertimus9/systemstat` with `github.com/nikhita/systemstat`.
- Bump kube-openapi so that its dependency on `bitbucket.org/ww/goautoneg`
moves to `github.com/munnerz/goautoneg`.
- Regenerate `swagger.json` from the above change.
- Update `BUILD` files.

Bitbucket is replaced with GitHub because:

Atlassian finally pulled the plug on their 1.0 API and now forces everyone
to use 2.0: https://developer.atlassian.com/cloud/bitbucket/deprecation-notice-v1-apis/

This leads to an error like:

```
godep: error downloading dep (bitbucket.org/ww/goautoneg): https://api.bitbucket.org/1.0/repositories/ww/goautoneg: 410 Gone
```

This was fixed in upstream go in golang/tools@13ba8ad.

To fix this in k/k:

1) Bump our vendored version of
https://github.com/kubernetes/kubernetes/blob/release-1.13/vendor/golang.org/x/tools/go/vcs/vcs.go#L676.
However, this bump brings in _lots_ of changes.

2) We can entirely remove our dependency on bitbucket.

The second point is better because:

1) godep itself vendors in an older version: https://github.com/tools/godep/blob/master/vendor/golang.org/x/tools/go/vcs/vcs.go#L667.
This means that anyone who installs godep directly, without forking it,
will not be able to use it with Kubernetes if we stick to bitbucket.

2) Bumping `golang/x/tools` requires running `godep restore`, which doesn't
work because it uses the 1.0 API, leading to a catch-22.
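
For reference, the swap is source-compatible: `github.com/munnerz/goautoneg` mirrors the Bitbucket package, so only the import path changes. A minimal sketch, assuming the package's usual `Negotiate` API:

```go
package main

import (
	"fmt"

	"github.com/munnerz/goautoneg" // previously bitbucket.org/ww/goautoneg
)

func main() {
	// Negotiate picks the best content type for an HTTP Accept header.
	accept := "application/json;q=0.9, application/yaml"
	best := goautoneg.Negotiate(accept, []string{"application/json", "application/yaml"})
	fmt.Println(best) // application/yaml (implicit q=1.0 beats q=0.9)
}
```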

Kubernetes-commit: 409df0aa2e5a555454909eab3c4f492461c21f3b
2019-06-28 15:43:19 +05:30
Kubernetes Publisher
9593044ffe Merge pull request #74102 from caesarxuchao/automated-cherry-pick-of-#73443-#73713-#73805-#74000-upstream-release-1.13
Automated cherry pick of #73443: update json-patch to pick up bug fixes

Kubernetes-commit: de4225fa13bfb50581f80e6af63b326a3c1028b1
2019-02-21 22:07:23 +00:00
Chao Xu
a5274af388 Importing latest json-patch.
Kubernetes-commit: f80a5504d88b9029a4323a7c6bd31e034badc315
2019-02-04 09:47:54 -08:00
Chao Xu
cd99871cca update json-patch to pick up bug fixes
Kubernetes-commit: f0a495cff09087e38f39ac2dd4864b38e14da7be
2019-01-28 17:42:01 -08:00
Kubernetes Publisher
945b0edca8 Merge remote-tracking branch 'origin/master' into release-1.13
Kubernetes-commit: 016b73bae049309a13d1422b5fbd27e519bc3cca
2018-11-21 19:37:49 +00:00
Christoph Blecker
4ad346dca1 Update github.com/json-iterator/go to 1.1.4
Kubernetes-commit: c7d39519279937693e654149eb6b67af46836135
2018-11-20 18:13:01 -08:00
Kubernetes Publisher
8ddebc5d89 Merge remote-tracking branch 'origin/master' into release-1.13
Kubernetes-commit: 23dc5401f4e9b985860aeae9657bba1b28c74ff8
2018-11-17 11:36:02 +00:00
David Eads
bf7334cd9d generated
Kubernetes-commit: 8f7edec615fb9cd722b7f8310dab3efa25351b7c
2018-11-16 08:38:57 -05:00
Kubernetes Publisher
28705fc220 Merge remote-tracking branch 'origin/master' into release-1.13
Kubernetes-commit: 03aacded1e0e8e9ebf2a84039f02433bb7b38bd0
2018-11-12 11:36:58 +00:00
Davanum Srinivas
794c636ab1 Update all the staging Godeps.json
Change-Id: I64b30c68a606b4f5c095a66496a1e48c4d62ea88

Kubernetes-commit: 68ce375d0039738df5a2a837122215f3224f1fde
2018-11-09 16:41:26 -05:00
Davanum Srinivas
58e97b0bc2 Move from glog to klog
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
  * github.com/kubernetes/repo-infra
  * k8s.io/gengo/
  * k8s.io/kube-openapi/
  * github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicit InitFlags in their init() methods

Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135

Kubernetes-commit: 954996e231074dc7429f7be1256a579bedd8344c
2018-11-09 13:49:10 -05:00
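
A minimal sketch of the explicit flag registration mentioned above; unlike glog, klog does not register its flags in an init() function, so callers do it themselves:

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, etc. on flag.CommandLine
	flag.Parse()
	defer klog.Flush()

	klog.Info("hello from klog")
	klog.V(4).Info("only printed when run with -v=4")
}
```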
Kubernetes Publisher
b9d013a749 Merge remote-tracking branch 'origin/master' into release-1.13
Kubernetes-commit: 11facc81ac19cba95cc85f7da1635adafec20bb8
2018-11-08 17:02:10 -08:00
Kubernetes Publisher
48776fd1f2 Merge pull request #70598 from dims/switch-from-sigs.k8s.io/yaml-to-ghodss/yaml
Switch to sigs.k8s.io/yaml from ghodss/yaml

Kubernetes-commit: f212b9db236344d3121879e609d53b79f9f106f9
2018-11-09 03:55:51 +00:00
Kubernetes Publisher
0b7a8b29d6 Merge pull request #70718 from cblecker/godep-round-a-million
Fork godep to fix inconsistent abbreviation size

Kubernetes-commit: e998d6c2bc83385d98186a87e95a0f947e121ec1
2018-11-09 03:55:27 +00:00
Daniel Smith
cbb9197ba2 update generated deps
Kubernetes-commit: dcb10d81d18f4e8a58496ef61b62247ae93bbaef
2018-11-08 11:01:41 +00:00
Christoph Blecker
b8ae83903e Update godeps
Kubernetes-commit: d15da2c586ba27df895c22486b1b527852c6363d
2018-11-06 16:23:59 -08:00
Wenjia Zhang
d36cff0c2c update staging godeps for golang.org/x/net/... to release-branch.go1.10
Kubernetes-commit: adf155ee9f9dfa023069282ec195f9eb8d1ce0fe
2018-11-06 15:49:50 -08:00
Davanum Srinivas
0b5e85fef2 Switch to sigs.k8s.io/yaml from ghodss/yaml
Change-Id: Ic72b5131bf441d159012d67a6a3d87088d0e6d31

Kubernetes-commit: 43f523d405b012fa8d90dd95b667f520e036f6bc
2018-11-02 16:41:57 -04:00
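
A minimal sketch, assuming sigs.k8s.io/yaml keeps the ghodss/yaml API it was forked from: values round-trip through JSON, so `json` struct tags apply:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml" // previously github.com/ghodss/yaml
)

type Person struct {
	Name string `json:"name"` // json tags, not yaml tags
	Age  int    `json:"age"`
}

func main() {
	data, err := yaml.Marshal(Person{Name: "John", Age: 30})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", data)

	var p Person
	if err := yaml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	fmt.Println(p.Name, p.Age)
}
```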
Kubernetes Publisher
dcc8aa9d76 Merge pull request #70031 from nrfox/requeue-on-error
Sample Controller: requeue work item on syncHandler error

Kubernetes-commit: b7b0aae4358065b98a3db04411bec65a11eb166e
2018-10-27 10:54:09 +00:00
Kubernetes Publisher
7ee81ff4ae Merge pull request #69636 from p0lyn0mial/sample_controller_factory_start
fixes the way the informers are started in sample controller pkg

Kubernetes-commit: 8e7e226422aabc62f84a0e151c3cd73fa1da70d4
2018-10-26 15:32:11 +00:00
nrfox
94e9a85e76 Requeue work item on syncHandler error
Kubernetes-commit: 8f86654b8e23bbf960a286712df35c003f7389f8
2018-10-19 13:54:31 -04:00
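
The requeue pattern this commit introduces (visible in the controller.go diff below) can be sketched standalone against client-go's workqueue; the key and sync function here are hypothetical:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func sync(key string) error { return fmt.Errorf("transient failure for %s", key) }

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("default/example-foo")

	key, shutdown := queue.Get()
	if shutdown {
		return
	}
	defer queue.Done(key)

	if err := sync(key.(string)); err != nil {
		// Transient error: put the key back with rate-limited backoff.
		queue.AddRateLimited(key)
		fmt.Printf("error syncing %q, requeuing: %v\n", key, err)
		return
	}
	// Success: clear the key's retry history.
	queue.Forget(key)
}
```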
Kubernetes Publisher
fe5d3e0825 Merge pull request #67547 from pbarker/audit-api
dynamic audit configuration api

Kubernetes-commit: 0652e098d03197aa4cc0a53440f62e425bf992c5
2018-10-18 01:54:19 +00:00
Kubernetes Publisher
f1200b1b60 Merge pull request #69269 from miguelbernadi/fix-golint-issues-68026
Fix undocumented golint errors

Kubernetes-commit: 31438712d6602ed16f0fd8311782178cec0bb774
2018-10-17 05:52:13 +00:00
Kubernetes Publisher
83a48d571f Merge pull request #69330 from vaikas-google/json-patch
Add support for JSON patch in fake client

Kubernetes-commit: 2f8b585d9c9815886e5563ed5ac75cd0bc94a8e8
2018-10-16 01:52:43 +00:00
Patrick Barker
eee7e4e176 adds dynamic audit api generated
Kubernetes-commit: b8e1250487f51bd27bd75f4bfb45c8635d6344ed
2018-10-16 00:20:30 +00:00
Kubernetes Publisher
e8109b7ca8 Merge pull request #69627 from dims/updating-ghodss-yaml-to-latest-version-2
Updating ghodss/yaml and gopkg.in/yaml.v2 to latest version 2

Kubernetes-commit: 3348f9ae23d6502218c6600bcee8d05e00ce5ee3
2018-10-13 01:29:34 +00:00
Kubernetes Publisher
76b83c1674 Merge pull request #69322 from jpbetz/etcd-client-3.3.9
Update etcd client to 3.3 for 1.13

Kubernetes-commit: a8c7a3fd5e707243af68b10a8a581c2c59248222
2018-10-11 07:15:43 +00:00
p0lyn0mial
3cc2958b99 fixes the way the informers are started in sample controller pkg
Kubernetes-commit: e55ca64dbd3b141f8a373d129b1a619db0e672f0
2018-10-10 20:54:36 +02:00
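
The fix itself is in the main.go diff further down; as a standalone illustration, a minimal sketch with a fake clientset showing that the factory's Start is non-blocking and needs no extra goroutine:

```go
package main

import (
	"time"

	kubeinformers "k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	stopCh := make(chan struct{})
	defer close(stopCh)

	client := fake.NewSimpleClientset()
	factory := kubeinformers.NewSharedInformerFactory(client, 30*time.Second)
	factory.Apps().V1().Deployments().Informer() // register an informer

	factory.Start(stopCh) // non-blocking; each informer runs in its own goroutine
	factory.WaitForCacheSync(stopCh)
}
```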
Davanum Srinivas
724b71bd9d Updating ghodss/yaml and gopkg.in/yaml.v2 to latest version
Change-Id: I1f1a10b68a2d3e796724c6ac26f0ed3260153588

Kubernetes-commit: 6364af128b0ca50e66501519f333e696a26801d9
2018-10-10 10:14:20 -04:00
Kubernetes Publisher
77db795851 sync: update godeps 2018-10-09 09:01:07 +00:00
Miguel Bernabeu
7bf6cd7d56 Fix golint errors when generating informer code
Kubernetes-commit: acf78cd6133de6faea9221d8c53b02ca6009b0bb
2018-10-03 21:39:26 +02:00
Ville Aikas
69ec0e4f12 Add support for JSON patch in fake client
Kubernetes-commit: a363b153851326ece7f81f4c1ae0a1ab8700a209
2018-10-02 13:30:52 +00:00
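
A hedged sketch of what the change enables, assuming sample-controller's generated fake clientset and Foo type: the fake object tracker can now evaluate RFC 6902 JSON patches, not just merge patches:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
	"k8s.io/sample-controller/pkg/client/clientset/versioned/fake"
)

func main() {
	foo := &samplev1alpha1.Foo{ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"}}
	client := fake.NewSimpleClientset(foo)

	patch := []byte(`[{"op": "replace", "path": "/spec/deploymentName", "value": "patched"}]`)
	patched, err := client.SamplecontrollerV1alpha1().Foos("default").
		Patch("example", types.JSONPatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(patched.Spec.DeploymentName) // patched
}
```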
Joe Betz
bef5795b2d Update etcd client to 3.3.9
Kubernetes-commit: 4263c752115c3796ee5715c7de4cbc2e237809d3
2018-10-01 16:53:57 -07:00
Kubernetes Publisher
b8a1e69103 Merge pull request #68709 from krzyzacy/fix-sample-test
fix patch compare in sample-controller test

Kubernetes-commit: 1b2298cb75a1375083a275560c71f8a7479c43d0
2018-09-26 07:57:48 +00:00
Kubernetes Publisher
4f47c578d9 Merge pull request #68245 from jingyih/remove_tagName_in_goDoc
*: Remove comment tags in GoDoc

Kubernetes-commit: a67689dfcab0ed547e1d060c414eae7c81629cc9
2018-09-25 15:47:45 +00:00
Kubernetes Publisher
d71bd50edb Merge pull request #68238 from justinsb/update_reflect2_to_101
Update reflect2 to 1.0.1 (memory utilization fix)

Kubernetes-commit: a94ea824eb59e92188f166c302d7995ba9002667
2018-09-25 15:47:15 +00:00
Sen Lu
f98e21072d fix patch compare in test
Kubernetes-commit: 6e40cd846c1d68fed1315e37823ba600e9c761f2
2018-09-14 22:48:49 -07:00
Jingyi Hu
6d9191098e *: Remove comment tags in GoDoc
Add a blank line between the comment tags and the package name in doc.go, so
that comment tags such as '+k8s:deepcopy-gen=package' do not show up
in GoDoc.

Kubernetes-commit: 61117761cd4a1b2e6ad9ff2d7eb915f3d2739dc6
2018-09-04 14:08:32 -07:00
Justin Santa Barbara
30a25c44dc Update reflect2 to 1.0.1 (memory utilization fix)
Picking up https://github.com/modern-go/reflect2/pull/2, which lazily
initializes a map of all types that we don't use in k8s, saving
memory & initialization time.

Kubernetes-commit: 970e4da4c6636b835175dc79a7492d22dc11ba33
2018-09-04 13:13:00 -04:00
Kubernetes Publisher
af8f21f6ee Merge pull request #67359 from mikedanese/reloadtoken
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

client: periodically reload InClusterConfig token

/sig auth
/sig api-machinery

```release-note
NONE
```

Kubernetes-commit: 7b6647a418c660f2c87f183f706b297f1cb573ca
2018-09-02 08:53:46 +00:00
Kubernetes Publisher
9bec8a6a53 Merge pull request #64097 from damemi/hpa-metrics-specificity
Automatic merge from submit-queue (batch tested with PRs 67894, 64097). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

HPA metrics specificity improvements

**What this PR does / why we need it**:
Improves available specificity for HPA metrics by adding metric selector fields for metrics of Pods and Objects.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Implements this KEP: https://github.com/kubernetes/community/pull/2055

**Special notes for your reviewer**:
Need to add/update tests?

**Release note**:

```release-note
Introduces autoscaling/v2beta2 and custom_metrics/v1beta2, which implement metric selectors for Object and Pods metrics, as well as allowing AverageValue targets on Objects, similar to External metrics.
```

/assign @DirectXMan12

Kubernetes-commit: fdb5707194d56cbbd0da11c5be3a2a5653e714c9
2018-08-28 00:29:15 +00:00
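
For illustration only, a hedged sketch of the v2beta2 shapes the release note describes (the metric name and labels are made up): Pods metrics gain a label selector on the metric identifier, and AverageValue targets become available on Object metrics:

```go
package main

import (
	"fmt"

	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A Pods metric that only counts series labeled verb=GET.
	spec := autoscalingv2beta2.MetricSpec{
		Type: autoscalingv2beta2.PodsMetricSourceType,
		Pods: &autoscalingv2beta2.PodsMetricSource{
			Metric: autoscalingv2beta2.MetricIdentifier{
				Name:     "http_requests",
				Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"verb": "GET"}},
			},
			Target: autoscalingv2beta2.MetricTarget{
				Type:         autoscalingv2beta2.AverageValueMetricType,
				AverageValue: resource.NewQuantity(100, resource.DecimalSI),
			},
		},
	}
	fmt.Printf("%+v\n", spec)
}
```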
Kubernetes Publisher
a7ac24a6d0 Merge pull request #66971 from tnozicka/informer-watcher
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

#50102 Task 2: Add UntilWithSync

**What this PR does / why we need it**:
This is a split off from https://github.com/kubernetes/kubernetes/pull/50102 to go in smaller pieces.

Introduces UntilWithSync based on informer.

**Needs https://github.com/kubernetes/kubernetes/pull/66906 first**
/hold

**Release note**:
```release-note
NONE
```

/priority important-soon
/kind bug
(the kind/bug label refers to the main PR this is split from)

Kubernetes-commit: c4f355a2ad9692f5459541d4e4d94fcbc5f7d946
2018-08-23 16:44:54 +00:00
Kubernetes Publisher
be98dc6210 Merge pull request #67686 from imroc/update-sample-controller-readme
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

update sample-controller README

Kubernetes-commit: 405eecad6dde9ab24b770fe583da51b9fd7c6941
2018-08-22 12:50:00 +00:00
roc
5962ba9f56 update sample-controller README
a reminder that CustomResourceSubresources is beta in v1.11 and enabled by default

Kubernetes-commit: ac741a8412d3c73ba39224f2f2e849f9976acc5a
2018-08-22 13:42:47 +08:00
Kubernetes Publisher
8605b8ceff Merge pull request #67596 from nikhita/add-apimachinery-label-owners
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Add sig/api-machinery label to apimachinery OWNERS files

Inspired by https://github.com/kubernetes/kubernetes/pull/67548. List of OWNERS files taken from https://github.com/kubernetes/community/blob/master/sig-api-machinery/README.md#subprojects.

**Release note**:

```release-note
NONE
```

Kubernetes-commit: 8f4ab6fe7635983443ebef7fde5b9c8861bef5bb
2018-08-21 00:23:11 +00:00
Nikhita Raghunath
bc340e9483 Add sig/api-machinery label to apimachinery OWNERS files
Kubernetes-commit: 6e47ba1fded3dc9932bd62affb673d321089760f
2018-08-20 18:46:47 +05:30
Kubernetes Publisher
acc9f193ca Merge pull request #65779 from cblecker/mergo-update
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

update github.com/imdario/mergo to v0.3.5

**What this PR does / why we need it**:
Updates the github.com/imdario/mergo library to v0.3.5. We were pinned because of a functionality change in the dependency; however, a new function with functionality similar to the old one has since been introduced.

There are apparently some Debian packaging issues (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=878254) because of this. I'm still not clear what those are, unless they are forcing the library to update, as opposed to using our `vendor/`.

That said, this will allow for some of those vendor conflicts to resolve for anyone else who is using client-go, so that's at least worthwhile.
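
For context, a hedged sketch of the v0.3.x-era API, assuming `mergo.WithOverride` is the option that superseded the old always-overwrite behavior:

```go
package main

import (
	"fmt"

	"github.com/imdario/mergo"
)

type serverConfig struct {
	Host string
	Port int
}

func main() {
	dst := serverConfig{Host: "localhost"}
	src := serverConfig{Host: "example.com", Port: 8080}

	// Plain Merge only fills zero-valued fields; WithOverride lets src win.
	if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", dst) // {Host:example.com Port:8080}
}
```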

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
fixes #27543, fixes https://github.com/kubernetes/client-go/issues/431

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```

Kubernetes-commit: 6b4135267911b6c10ed536308d29d2a7c453eef6
2018-08-18 02:39:08 +00:00
Kubernetes Publisher
3e2e91bf31 Merge pull request #66906 from tnozicka/rename-until
Automatic merge from submit-queue (batch tested with PRs 67071, 66906, 66722, 67276, 67039). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

#50102 Task 1: Move apimachinery/pkg/watch.Until into client-go/tools/watch.UntilWithoutRetry

**What this PR does / why we need it**:
This is a split off from https://github.com/kubernetes/kubernetes/pull/50102 to go in smaller pieces.

Moves `apimachinery/pkg/watch.Until` into `client-go/tools/watch.UntilWithoutRetry` and adds context so it is cancelable.

**Release note**:
```release-note
NONE
```

**Dev release note**:
```dev-release-note
`apimachinery/pkg/watch.Until` has been moved to `client-go/tools/watch.UntilWithoutRetry`.
While switching please consider using the new `client-go/tools/watch.UntilWithSync` or `client-go/tools/watch.Until`.
```

/cc @smarterclayton @kubernetes/sig-api-machinery-pr-reviews
/milestone v1.12
/priority important-soon
/kind bug
(the kind/bug label refers to the main PR this is split from)

Kubernetes-commit: b6f0aed056ab94fef0b6f54e1ca1d66a5fc228b3
2018-08-16 15:23:25 +00:00
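
A minimal sketch of the relocated helper, assuming the post-move signature `watchtools.UntilWithoutRetry(ctx, watcher, conditions...)`; the fake watcher stands in for a real API watch:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/watch"
	watchtools "k8s.io/client-go/tools/watch"
)

func main() {
	w := watch.NewFake()
	go w.Add(nil) // deliver one Added event; a real watcher sends objects

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel() // the context makes the wait cancelable

	ev, err := watchtools.UntilWithoutRetry(ctx, w, func(e watch.Event) (bool, error) {
		return e.Type == watch.Added, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("saw event:", ev.Type)
}
```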
Kubernetes Publisher
b24841bd57 Merge pull request #67178 from cblecker/cfssl
Automatic merge from submit-queue (batch tested with PRs 66602, 67178, 67207, 67125, 66332). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Vendor cfssl/cfssljson utilities

**What this PR does / why we need it**:
Vendors the `cfssl` and `cfssljson` tools. Updates `kube::util::ensure-cfssl` to use them.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
fixes #66995, fixes #60070

**Special notes for your reviewer**:
1. Add cfssl/cfssljson to the required bins for saving
2. Manually cloned/checked out the new dependencies to my gopath. `godep restore` doesn't pull them down because they aren't required or already in the `Godeps.json`. Used @BenTheElder's list here: https://github.com/kubernetes/kubernetes/issues/66995#issuecomment-410594532
3. `hack/godep-save.sh` to add the packages and dependencies to godep
4. Fixed two bugs when building:
  a. `golang.org/x/crypto` needed to be updated
  b. `github.com/cloudflare/cfssl` needed to be updated to 56268a613a so we can vendor their fork of `crypto/tls`, as we discard their modified vendored stdlib.
5. Update staging godeps
6. Update the `kube::util::ensure-cfssl` to install from vendor

**Release note**:
```release-note
NONE
```

Kubernetes-commit: 818e632c1fde5fb01bc8ccf9b9ee6201f33a28b4
2018-08-16 15:22:34 +00:00
Mike Danese
70ded76448 reload token file for InClusterConfig every 5 minutes
Kubernetes-commit: 287f6a564fb8c264f281056011f4a66f197b18f4
2018-08-13 16:47:17 -07:00
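
Nothing changes for consumers; a minimal sketch of the standard in-cluster flow (only meaningful inside a pod), where the bearer token behind the returned config is now re-read every 5 minutes:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // reads the service-account token file
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", client != nil)
}
```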
Tomas Nozicka
a4f7d75f32 Update Godeps
Kubernetes-commit: f6836df5dd104bde62d80be411582c5d08dcaa65
2018-08-07 20:15:14 +02:00
Tomas Nozicka
fb123e6c66 Update Godeps
Kubernetes-commit: 41f5c3dbf55c5b02a3067789b0b835c6eb6f3bd3
2018-08-06 14:49:19 +02:00
Christoph Blecker
72a1e10055 Update github.com/imdario/mergo to v0.3.5
Kubernetes-commit: 12b2e2c2b53ab987e956673bc778e040af22e304
2018-07-03 11:15:43 -07:00
Mike Dame
056e0354fd Generate files and modifications for autoscaling/v2beta2 and custom_metrics/v1beta2
Kubernetes-commit: 77d7f9cfa2b489b75b47fa1e1de5660139f83034
2018-06-28 14:35:50 -04:00
893 changed files with 37201 additions and 84722 deletions

Godeps/Godeps.json (generated, 564 lines changed): file diff suppressed because it is too large

OWNERS (2 lines changed)

@@ -7,3 +7,5 @@ reviewers:
 - sttts
 - munnerz
 - nikhita
+labels:
+- sig/api-machinery

README.md

@@ -105,9 +105,8 @@ $ kubectl create -f artifacts/examples/crd-validation.yaml
 ## Subresources
 
-Custom Resources support `/status` and `/scale` subresources as an
-[alpha feature](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#subresources) in v1.10.
-Enable this feature using the `CustomResourceSubresources` feature gate on the [kube-apiserver](https://kubernetes.io/docs/admin/kube-apiserver):
+Custom Resources support `/status` and `/scale` subresources as a [beta feature](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#subresources) in v1.11 and is enabled by default.
+This feature is [alpha](https://v1-10.docs.kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#subresources) in v1.10 and to enable it you need to set the `CustomResourceSubresources` feature gate on the [kube-apiserver](https://kubernetes.io/docs/admin/kube-apiserver):
 
 ```sh
 --feature-gates=CustomResourceSubresources=true
 ```

controller.go

@@ -20,7 +20,6 @@ import (
 	"fmt"
 	"time"
 
-	"github.com/golang/glog"
 	appsv1 "k8s.io/api/apps/v1"
 	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/api/errors"
@@ -37,6 +36,7 @@ import (
 	"k8s.io/client-go/tools/cache"
 	"k8s.io/client-go/tools/record"
 	"k8s.io/client-go/util/workqueue"
+	"k8s.io/klog"
 
 	samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
 	clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
@@ -96,9 +96,9 @@ func NewController(
 	// Add sample-controller types to the default Kubernetes Scheme so Events can be
 	// logged for sample-controller types.
 	utilruntime.Must(samplescheme.AddToScheme(scheme.Scheme))
-	glog.V(4).Info("Creating event broadcaster")
+	klog.V(4).Info("Creating event broadcaster")
 	eventBroadcaster := record.NewBroadcaster()
-	eventBroadcaster.StartLogging(glog.Infof)
+	eventBroadcaster.StartLogging(klog.Infof)
 	eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeclientset.CoreV1().Events("")})
 	recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerAgentName})
@@ -113,7 +113,7 @@ func NewController(
 		recorder:      recorder,
 	}
 
-	glog.Info("Setting up event handlers")
+	klog.Info("Setting up event handlers")
 	// Set up an event handler for when Foo resources change
 	fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
 		AddFunc: controller.enqueueFoo,
@@ -154,23 +154,23 @@ func (c *Controller) Run(threadiness int, stopCh <-chan struct{}) error {
 	defer c.workqueue.ShutDown()
 
 	// Start the informer factories to begin populating the informer caches
-	glog.Info("Starting Foo controller")
+	klog.Info("Starting Foo controller")
 
 	// Wait for the caches to be synced before starting workers
-	glog.Info("Waiting for informer caches to sync")
+	klog.Info("Waiting for informer caches to sync")
 	if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
 		return fmt.Errorf("failed to wait for caches to sync")
 	}
 
-	glog.Info("Starting workers")
+	klog.Info("Starting workers")
 	// Launch two workers to process Foo resources
 	for i := 0; i < threadiness; i++ {
 		go wait.Until(c.runWorker, time.Second, stopCh)
 	}
 
-	glog.Info("Started workers")
+	klog.Info("Started workers")
 	<-stopCh
-	glog.Info("Shutting down workers")
+	klog.Info("Shutting down workers")
 
 	return nil
 }
@@ -219,12 +219,14 @@ func (c *Controller) processNextWorkItem() bool {
 		// Run the syncHandler, passing it the namespace/name string of the
 		// Foo resource to be synced.
 		if err := c.syncHandler(key); err != nil {
-			return fmt.Errorf("error syncing '%s': %s", key, err.Error())
+			// Put the item back on the workqueue to handle any transient errors.
+			c.workqueue.AddRateLimited(key)
+			return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
 		}
 		// Finally, if no error occurs we Forget this item so it does not
 		// get queued again until another change happens.
 		c.workqueue.Forget(obj)
-		glog.Infof("Successfully synced '%s'", key)
+		klog.Infof("Successfully synced '%s'", key)
 		return nil
 	}(obj)
@@ -295,7 +297,7 @@ func (c *Controller) syncHandler(key string) error {
 	// number does not equal the current desired replicas on the Deployment, we
 	// should update the Deployment resource.
 	if foo.Spec.Replicas != nil && *foo.Spec.Replicas != *deployment.Spec.Replicas {
-		glog.V(4).Infof("Foo %s replicas: %d, deployment replicas: %d", name, *foo.Spec.Replicas, *deployment.Spec.Replicas)
+		klog.V(4).Infof("Foo %s replicas: %d, deployment replicas: %d", name, *foo.Spec.Replicas, *deployment.Spec.Replicas)
 		deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Update(newDeployment(foo))
 	}
@@ -363,9 +365,9 @@ func (c *Controller) handleObject(obj interface{}) {
 			runtime.HandleError(fmt.Errorf("error decoding object tombstone, invalid type"))
 			return
 		}
-		glog.V(4).Infof("Recovered deleted object '%s' from tombstone", object.GetName())
+		klog.V(4).Infof("Recovered deleted object '%s' from tombstone", object.GetName())
 	}
-	glog.V(4).Infof("Processing object: %s", object.GetName())
+	klog.V(4).Infof("Processing object: %s", object.GetName())
 	if ownerRef := metav1.GetControllerOf(object); ownerRef != nil {
 		// If this object is not owned by a Foo, we should not do anything more
 		// with it.
@@ -375,7 +377,7 @@ func (c *Controller) handleObject(obj interface{}) {
 		foo, err := c.foosLister.Foos(object.GetNamespace()).Get(ownerRef.Name)
 		if err != nil {
-			glog.V(4).Infof("ignoring orphaned object '%s' of foo '%s'", object.GetSelfLink(), ownerRef.Name)
+			klog.V(4).Infof("ignoring orphaned object '%s' of foo '%s'", object.GetSelfLink(), ownerRef.Name)
 			return
 		}

controller_test.go

@@ -198,7 +198,7 @@ func checkAction(expected, actual core.Action, t *testing.T) {
 		expPatch := e.GetPatch()
 		patch := a.GetPatch()
 
-		if !reflect.DeepEqual(expPatch, expPatch) {
+		if !reflect.DeepEqual(expPatch, patch) {
 			t.Errorf("Action %s %s has wrong patch\nDiff:\n %s",
 				a.GetVerb(), a.GetResource().Resource, diff.ObjectGoPrintDiff(expPatch, patch))
 		}

main.go (16 lines changed)

@@ -20,10 +20,10 @@ import (
 	"flag"
 	"time"
 
-	"github.com/golang/glog"
 	kubeinformers "k8s.io/client-go/informers"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/tools/clientcmd"
+	"k8s.io/klog"
 
 	// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
 	// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
@@ -45,17 +45,17 @@ func main() {
 	cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
 	if err != nil {
-		glog.Fatalf("Error building kubeconfig: %s", err.Error())
+		klog.Fatalf("Error building kubeconfig: %s", err.Error())
 	}
 
 	kubeClient, err := kubernetes.NewForConfig(cfg)
 	if err != nil {
-		glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
+		klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
 	}
 
 	exampleClient, err := clientset.NewForConfig(cfg)
 	if err != nil {
-		glog.Fatalf("Error building example clientset: %s", err.Error())
+		klog.Fatalf("Error building example clientset: %s", err.Error())
 	}
 
 	kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
@@ -65,11 +65,13 @@ func main() {
 		kubeInformerFactory.Apps().V1().Deployments(),
 		exampleInformerFactory.Samplecontroller().V1alpha1().Foos())
 
-	go kubeInformerFactory.Start(stopCh)
-	go exampleInformerFactory.Start(stopCh)
+	// notice that there is no need to run Start methods in a separate goroutine. (i.e. go kubeInformerFactory.Start(stopCh)
+	// Start method is non-blocking and runs all registered informers in a dedicated goroutine.
+	kubeInformerFactory.Start(stopCh)
+	exampleInformerFactory.Start(stopCh)
 
 	if err = controller.Run(2, stopCh); err != nil {
-		glog.Fatalf("Error running controller: %s", err.Error())
+		klog.Fatalf("Error running controller: %s", err.Error())
 	}
 }

pkg/apis/samplecontroller/v1alpha1/doc.go

@@ -15,7 +15,7 @@ limitations under the License.
 */
 
 // +k8s:deepcopy-gen=package
-// +groupName=samplecontroller.k8s.io
 
 // Package v1alpha1 is the v1alpha1 version of the API.
+// +groupName=samplecontroller.k8s.io
 package v1alpha1

pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1/fake/fake_foo.go

@@ -131,7 +131,7 @@ func (c *FakeFoos) DeleteCollection(options *v1.DeleteOptions, listOptions v1.Li
 // Patch applies the patch and returns the patched foo.
 func (c *FakeFoos) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1alpha1.Foo, err error) {
 	obj, err := c.Fake.
-		Invokes(testing.NewPatchSubresourceAction(foosResource, c.ns, name, data, subresources...), &v1alpha1.Foo{})
+		Invokes(testing.NewPatchSubresourceAction(foosResource, c.ns, name, pt, data, subresources...), &v1alpha1.Foo{})
 
 	if obj == nil {
 		return nil, err

pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1/foo.go

@@ -19,6 +19,8 @@ limitations under the License.
 package v1alpha1
 
 import (
+	"time"
+
 	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	types "k8s.io/apimachinery/pkg/types"
 	watch "k8s.io/apimachinery/pkg/watch"
@@ -76,11 +78,16 @@ func (c *foos) Get(name string, options v1.GetOptions) (result *v1alpha1.Foo, er
 // List takes label and field selectors, and returns the list of Foos that match those selectors.
 func (c *foos) List(opts v1.ListOptions) (result *v1alpha1.FooList, err error) {
+	var timeout time.Duration
+	if opts.TimeoutSeconds != nil {
+		timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
+	}
 	result = &v1alpha1.FooList{}
 	err = c.client.Get().
 		Namespace(c.ns).
 		Resource("foos").
 		VersionedParams(&opts, scheme.ParameterCodec).
+		Timeout(timeout).
 		Do().
 		Into(result)
 	return
@@ -88,11 +95,16 @@ func (c *foos) List(opts v1.ListOptions) (result *v1alpha1.FooList, err error) {
 // Watch returns a watch.Interface that watches the requested foos.
 func (c *foos) Watch(opts v1.ListOptions) (watch.Interface, error) {
+	var timeout time.Duration
+	if opts.TimeoutSeconds != nil {
+		timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
+	}
 	opts.Watch = true
 	return c.client.Get().
 		Namespace(c.ns).
 		Resource("foos").
 		VersionedParams(&opts, scheme.ParameterCodec).
+		Timeout(timeout).
 		Watch()
 }
@@ -150,10 +162,15 @@ func (c *foos) Delete(name string, options *v1.DeleteOptions) error {
 // DeleteCollection deletes a collection of objects.
 func (c *foos) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error {
+	var timeout time.Duration
+	if listOptions.TimeoutSeconds != nil {
+		timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second
+	}
 	return c.client.Delete().
 		Namespace(c.ns).
 		Resource("foos").
 		VersionedParams(&listOptions, scheme.ParameterCodec).
+		Timeout(timeout).
 		Body(options).
 		Do().
 		Error()

pkg/client/informers/externalversions/internalinterfaces/factory_interfaces.go

@@ -27,6 +27,7 @@ import (
 	versioned "k8s.io/sample-controller/pkg/client/clientset/versioned"
 )
 
+// NewInformerFunc takes versioned.Interface and time.Duration to return a SharedIndexInformer.
 type NewInformerFunc func(versioned.Interface, time.Duration) cache.SharedIndexInformer
 
 // SharedInformerFactory a small interface to allow for adding an informer without an import cycle
@@ -35,4 +36,5 @@ type SharedInformerFactory interface {
 	InformerFor(obj runtime.Object, newFunc NewInformerFunc) cache.SharedIndexInformer
 }
 
+// TweakListOptionsFunc is a function that transforms a v1.ListOptions.
 type TweakListOptionsFunc func(*v1.ListOptions)

vendor/github.com/evanphx/json-patch/.travis.yml (generated, vendored, new file, 16 lines)

@@ -0,0 +1,16 @@
language: go
go:
- 1.8
- 1.7
install:
- if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
- go get github.com/jessevdk/go-flags
script:
- go get
- go test -cover ./...
notifications:
email: false

vendor/github.com/evanphx/json-patch/LICENSE (generated, vendored, new file, 25 lines)

@@ -0,0 +1,25 @@
Copyright (c) 2014, Evan Phoenix
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the Evan Phoenix nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/evanphx/json-patch/README.md (generated, vendored, new file, 297 lines)

@@ -0,0 +1,297 @@
# JSON-Patch
`jsonpatch` is a library which provides functionality for both applying
[RFC6902 JSON patches](http://tools.ietf.org/html/rfc6902) against documents, as
well as for calculating & applying [RFC7396 JSON merge patches](https://tools.ietf.org/html/rfc7396).
[![GoDoc](https://godoc.org/github.com/evanphx/json-patch?status.svg)](http://godoc.org/github.com/evanphx/json-patch)
[![Build Status](https://travis-ci.org/evanphx/json-patch.svg?branch=master)](https://travis-ci.org/evanphx/json-patch)
[![Report Card](https://goreportcard.com/badge/github.com/evanphx/json-patch)](https://goreportcard.com/report/github.com/evanphx/json-patch)
# Get It!
**Latest and greatest**:
```bash
go get -u github.com/evanphx/json-patch
```
**Stable Versions**:
* Version 4: `go get -u gopkg.in/evanphx/json-patch.v4`
(previous versions below `v3` are unavailable)
# Use It!
* [Create and apply a merge patch](#create-and-apply-a-merge-patch)
* [Create and apply a JSON Patch](#create-and-apply-a-json-patch)
* [Comparing JSON documents](#comparing-json-documents)
* [Combine merge patches](#combine-merge-patches)
# Configuration
* There is a global configuration variable `jsonpatch.SupportNegativeIndices`.
This defaults to `true` and enables the non-standard practice of allowing
negative indices to mean indices starting at the end of an array. This
functionality can be disabled by setting `jsonpatch.SupportNegativeIndices =
false`.
* There is a global configuration variable `jsonpatch.AccumulatedCopySizeLimit`,
which limits the total size increase in bytes caused by "copy" operations in a
patch. It defaults to 0, which means there is no limit.
## Create and apply a merge patch
Given both an original JSON document and a modified JSON document, you can create
a [Merge Patch](https://tools.ietf.org/html/rfc7396) document.
It can describe the changes needed to convert from the original to the
modified JSON document.
Once you have a merge patch, you can apply it to other JSON documents using the
`jsonpatch.MergePatch(document, patch)` function.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
// Let's create a merge patch from these two documents...
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
target := []byte(`{"name": "Jane", "age": 24}`)
patch, err := jsonpatch.CreateMergePatch(original, target)
if err != nil {
panic(err)
}
// Now lets apply the patch against a different JSON document...
alternative := []byte(`{"name": "Tina", "age": 28, "height": 3.75}`)
modifiedAlternative, err := jsonpatch.MergePatch(alternative, patch)
fmt.Printf("patch document: %s\n", patch)
fmt.Printf("updated alternative doc: %s\n", modifiedAlternative)
}
```
When run, you get the following output:
```bash
$ go run main.go
patch document: {"height":null,"name":"Jane"}
updated alternative doc: {"age":28,"name":"Jane"}
```
## Create and apply a JSON Patch
You can create patch objects using `DecodePatch([]byte)`, which can then
be applied against JSON documents.
The following is an example of creating a patch from two operations, and
applying it against a JSON document.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
patchJSON := []byte(`[
{"op": "replace", "path": "/name", "value": "Jane"},
{"op": "remove", "path": "/height"}
]`)
patch, err := jsonpatch.DecodePatch(patchJSON)
if err != nil {
panic(err)
}
modified, err := patch.Apply(original)
if err != nil {
panic(err)
}
fmt.Printf("Original document: %s\n", original)
fmt.Printf("Modified document: %s\n", modified)
}
```
When run, you get the following output:
```bash
$ go run main.go
Original document: {"name": "John", "age": 24, "height": 3.21}
Modified document: {"age":24,"name":"Jane"}
```
## Comparing JSON documents
Due to potential whitespace and ordering differences, one cannot simply compare
JSON strings or byte-arrays directly.
As such, you can instead use `jsonpatch.Equal(document1, document2)` to
determine if two JSON documents are _structurally_ equal. This ignores
whitespace differences, and key-value ordering.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
similar := []byte(`
{
"age": 24,
"height": 3.21,
"name": "John"
}
`)
different := []byte(`{"name": "Jane", "age": 20, "height": 3.37}`)
if jsonpatch.Equal(original, similar) {
fmt.Println(`"original" is structurally equal to "similar"`)
}
if !jsonpatch.Equal(original, different) {
fmt.Println(`"original" is _not_ structurally equal to "similar"`)
}
}
```
When run, you get the following output:
```bash
$ go run main.go
"original" is structurally equal to "similar"
"original" is _not_ structurally equal to "similar"
```
## Combine merge patches
Given two JSON merge patch documents, it is possible to combine them into a
single merge patch which can describe both sets of changes.
The resulting merge patch can be used such that applying it yields a
document structurally similar to the result of merging each merge patch into
the document in succession.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
nameAndHeight := []byte(`{"height":null,"name":"Jane"}`)
ageAndEyes := []byte(`{"age":4.23,"eyes":"blue"}`)
// Let's combine these merge patch documents...
combinedPatch, err := jsonpatch.MergeMergePatches(nameAndHeight, ageAndEyes)
if err != nil {
panic(err)
}
// Apply each patch individually against the original document
withoutCombinedPatch, err := jsonpatch.MergePatch(original, nameAndHeight)
if err != nil {
panic(err)
}
withoutCombinedPatch, err = jsonpatch.MergePatch(withoutCombinedPatch, ageAndEyes)
if err != nil {
panic(err)
}
// Apply the combined patch against the original document
withCombinedPatch, err := jsonpatch.MergePatch(original, combinedPatch)
if err != nil {
panic(err)
}
// Do both result in the same thing? They should!
if jsonpatch.Equal(withCombinedPatch, withoutCombinedPatch) {
fmt.Println("Both JSON documents are structurally the same!")
}
fmt.Printf("combined merge patch: %s", combinedPatch)
}
```
When run, you get the following output:
```bash
$ go run main.go
Both JSON documents are structurally the same!
combined merge patch: {"age":4.23,"eyes":"blue","height":null,"name":"Jane"}
```
# CLI for comparing JSON documents
You can install the command-line program `json-patch`.
This program takes multiple JSON patch documents as arguments
and is fed a JSON document from `stdin`. It will apply the patch(es) against
the document and output the modified doc.
**patch.1.json**
```json
[
{"op": "replace", "path": "/name", "value": "Jane"},
{"op": "remove", "path": "/height"}
]
```
**patch.2.json**
```json
[
{"op": "add", "path": "/address", "value": "123 Main St"},
{"op": "replace", "path": "/age", "value": "21"}
]
```
**document.json**
```json
{
"name": "John",
"age": 24,
"height": 3.21
}
```
You can then run:
```bash
$ go install github.com/evanphx/json-patch/cmd/json-patch
$ cat document.json | json-patch -p patch.1.json -p patch.2.json
{"address":"123 Main St","age":"21","name":"Jane"}
```
# Help It!
Contributions are welcomed! Leave [an issue](https://github.com/evanphx/json-patch/issues)
or [create a PR](https://github.com/evanphx/json-patch/compare).
Before creating a pull request, we'd ask that you make sure tests are passing
and that you have added new tests when applicable.
Contributors can run tests using:
```bash
go test -cover ./...
```
Builds for pull requests are tested automatically
using [TravisCI](https://travis-ci.org/evanphx/json-patch).

vendor/github.com/evanphx/json-patch/errors.go (generated, vendored, new file, 38 lines)

@@ -0,0 +1,38 @@
package jsonpatch
import "fmt"
// AccumulatedCopySizeError is an error type returned when the accumulated size
// increase caused by copy operations in a patch operation has exceeded the
// limit.
type AccumulatedCopySizeError struct {
limit int64
accumulated int64
}
// NewAccumulatedCopySizeError returns an AccumulatedCopySizeError.
func NewAccumulatedCopySizeError(l, a int64) *AccumulatedCopySizeError {
return &AccumulatedCopySizeError{limit: l, accumulated: a}
}
// Error implements the error interface.
func (a *AccumulatedCopySizeError) Error() string {
return fmt.Sprintf("Unable to complete the copy, the accumulated size increase of copy is %d, exceeding the limit %d", a.accumulated, a.limit)
}
// ArraySizeError is an error type returned when the array size has exceeded
// the limit.
type ArraySizeError struct {
limit int
size int
}
// NewArraySizeError returns an ArraySizeError.
func NewArraySizeError(l, s int) *ArraySizeError {
return &ArraySizeError{limit: l, size: s}
}
// Error implements the error interface.
func (a *ArraySizeError) Error() string {
return fmt.Sprintf("Unable to create array of size %d, limit is %d", a.size, a.limit)
}

vendor/github.com/evanphx/json-patch/merge.go (generated, vendored, new file, 383 lines)

@@ -0,0 +1,383 @@
package jsonpatch
import (
"bytes"
"encoding/json"
"fmt"
"reflect"
)
func merge(cur, patch *lazyNode, mergeMerge bool) *lazyNode {
curDoc, err := cur.intoDoc()
if err != nil {
pruneNulls(patch)
return patch
}
patchDoc, err := patch.intoDoc()
if err != nil {
return patch
}
mergeDocs(curDoc, patchDoc, mergeMerge)
return cur
}
func mergeDocs(doc, patch *partialDoc, mergeMerge bool) {
for k, v := range *patch {
if v == nil {
if mergeMerge {
(*doc)[k] = nil
} else {
delete(*doc, k)
}
} else {
cur, ok := (*doc)[k]
if !ok || cur == nil {
pruneNulls(v)
(*doc)[k] = v
} else {
(*doc)[k] = merge(cur, v, mergeMerge)
}
}
}
}
func pruneNulls(n *lazyNode) {
sub, err := n.intoDoc()
if err == nil {
pruneDocNulls(sub)
} else {
ary, err := n.intoAry()
if err == nil {
pruneAryNulls(ary)
}
}
}
func pruneDocNulls(doc *partialDoc) *partialDoc {
for k, v := range *doc {
if v == nil {
delete(*doc, k)
} else {
pruneNulls(v)
}
}
return doc
}
func pruneAryNulls(ary *partialArray) *partialArray {
newAry := []*lazyNode{}
for _, v := range *ary {
if v != nil {
pruneNulls(v)
newAry = append(newAry, v)
}
}
*ary = newAry
return ary
}
var errBadJSONDoc = fmt.Errorf("Invalid JSON Document")
var errBadJSONPatch = fmt.Errorf("Invalid JSON Patch")
var errBadMergeTypes = fmt.Errorf("Mismatched JSON Documents")
// MergeMergePatches merges two merge patches together, such that
// applying this resulting merged merge patch to a document yields the same
// as merging each merge patch to the document in succession.
func MergeMergePatches(patch1Data, patch2Data []byte) ([]byte, error) {
return doMergePatch(patch1Data, patch2Data, true)
}
// MergePatch merges the patchData into the docData.
func MergePatch(docData, patchData []byte) ([]byte, error) {
return doMergePatch(docData, patchData, false)
}
func doMergePatch(docData, patchData []byte, mergeMerge bool) ([]byte, error) {
doc := &partialDoc{}
docErr := json.Unmarshal(docData, doc)
patch := &partialDoc{}
patchErr := json.Unmarshal(patchData, patch)
if _, ok := docErr.(*json.SyntaxError); ok {
return nil, errBadJSONDoc
}
if _, ok := patchErr.(*json.SyntaxError); ok {
return nil, errBadJSONPatch
}
if docErr == nil && *doc == nil {
return nil, errBadJSONDoc
}
if patchErr == nil && *patch == nil {
return nil, errBadJSONPatch
}
if docErr != nil || patchErr != nil {
// Not an error, just not a doc, so we turn straight into the patch
if patchErr == nil {
if mergeMerge {
doc = patch
} else {
doc = pruneDocNulls(patch)
}
} else {
patchAry := &partialArray{}
patchErr = json.Unmarshal(patchData, patchAry)
if patchErr != nil {
return nil, errBadJSONPatch
}
pruneAryNulls(patchAry)
out, patchErr := json.Marshal(patchAry)
if patchErr != nil {
return nil, errBadJSONPatch
}
return out, nil
}
} else {
mergeDocs(doc, patch, mergeMerge)
}
return json.Marshal(doc)
}
// resemblesJSONArray indicates whether the byte-slice "appears" to be
// a JSON array or not.
// False-positives are possible, as this function does not check the internal
// structure of the array. It only checks that the outer syntax is present and
// correct.
func resemblesJSONArray(input []byte) bool {
input = bytes.TrimSpace(input)
hasPrefix := bytes.HasPrefix(input, []byte("["))
hasSuffix := bytes.HasSuffix(input, []byte("]"))
return hasPrefix && hasSuffix
}
// CreateMergePatch will return a merge patch document capable of converting
// the original document(s) to the modified document(s).
// The parameters can be bytes of either two JSON Documents, or two arrays of
// JSON documents.
// The merge patch returned follows the specification defined at http://tools.ietf.org/html/draft-ietf-appsawg-json-merge-patch-07
func CreateMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalResemblesArray := resemblesJSONArray(originalJSON)
modifiedResemblesArray := resemblesJSONArray(modifiedJSON)
// Do both byte-slices seem like JSON arrays?
if originalResemblesArray && modifiedResemblesArray {
return createArrayMergePatch(originalJSON, modifiedJSON)
}
// Are both byte-slices not arrays? Then they are likely JSON objects...
if !originalResemblesArray && !modifiedResemblesArray {
return createObjectMergePatch(originalJSON, modifiedJSON)
}
// None of the above? Then return an error because of mismatched types.
return nil, errBadMergeTypes
}
// createObjectMergePatch will return a merge-patch document capable of
// converting the original document to the modified document.
func createObjectMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalDoc := map[string]interface{}{}
modifiedDoc := map[string]interface{}{}
err := json.Unmarshal(originalJSON, &originalDoc)
if err != nil {
return nil, errBadJSONDoc
}
err = json.Unmarshal(modifiedJSON, &modifiedDoc)
if err != nil {
return nil, errBadJSONDoc
}
dest, err := getDiff(originalDoc, modifiedDoc)
if err != nil {
return nil, err
}
return json.Marshal(dest)
}
// createArrayMergePatch will return an array of merge-patch documents capable
// of converting the original document to the modified document for each
// pair of JSON documents provided in the arrays.
// Arrays of mismatched sizes will result in an error.
func createArrayMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalDocs := []json.RawMessage{}
modifiedDocs := []json.RawMessage{}
err := json.Unmarshal(originalJSON, &originalDocs)
if err != nil {
return nil, errBadJSONDoc
}
err = json.Unmarshal(modifiedJSON, &modifiedDocs)
if err != nil {
return nil, errBadJSONDoc
}
total := len(originalDocs)
if len(modifiedDocs) != total {
return nil, errBadJSONDoc
}
result := []json.RawMessage{}
for i := 0; i < len(originalDocs); i++ {
original := originalDocs[i]
modified := modifiedDocs[i]
patch, err := createObjectMergePatch(original, modified)
if err != nil {
return nil, err
}
result = append(result, json.RawMessage(patch))
}
return json.Marshal(result)
}
// Returns true if the array matches (must be json types).
// As is idiomatic for go, an empty array is not the same as a nil array.
func matchesArray(a, b []interface{}) bool {
if len(a) != len(b) {
return false
}
if (a == nil && b != nil) || (a != nil && b == nil) {
return false
}
for i := range a {
if !matchesValue(a[i], b[i]) {
return false
}
}
return true
}
// Returns true if the values match (must be JSON types).
// The types of the values must match, otherwise it will always return false.
// If two map[string]interface{} are given, all elements must match.
func matchesValue(av, bv interface{}) bool {
if reflect.TypeOf(av) != reflect.TypeOf(bv) {
return false
}
switch at := av.(type) {
case string:
bt := bv.(string)
if bt == at {
return true
}
case float64:
bt := bv.(float64)
if bt == at {
return true
}
case bool:
bt := bv.(bool)
if bt == at {
return true
}
case nil:
// Both nil, fine.
return true
case map[string]interface{}:
bt := bv.(map[string]interface{})
for key := range at {
if !matchesValue(at[key], bt[key]) {
return false
}
}
for key := range bt {
if !matchesValue(at[key], bt[key]) {
return false
}
}
return true
case []interface{}:
bt := bv.([]interface{})
return matchesArray(at, bt)
}
return false
}
// getDiff returns the (recursive) difference between a and b as a map[string]interface{}.
func getDiff(a, b map[string]interface{}) (map[string]interface{}, error) {
into := map[string]interface{}{}
for key, bv := range b {
av, ok := a[key]
// value was added
if !ok {
into[key] = bv
continue
}
// If types have changed, replace completely
if reflect.TypeOf(av) != reflect.TypeOf(bv) {
into[key] = bv
continue
}
// Types are the same, compare values
switch at := av.(type) {
case map[string]interface{}:
bt := bv.(map[string]interface{})
dst := make(map[string]interface{}, len(bt))
dst, err := getDiff(at, bt)
if err != nil {
return nil, err
}
if len(dst) > 0 {
into[key] = dst
}
case string, float64, bool:
if !matchesValue(av, bv) {
into[key] = bv
}
case []interface{}:
bt := bv.([]interface{})
if !matchesArray(at, bt) {
into[key] = bv
}
case nil:
switch bv.(type) {
case nil:
// Both nil, fine.
default:
into[key] = bv
}
default:
panic(fmt.Sprintf("Unknown type:%T in key %s", av, key))
}
}
// Now add all deleted values as nil
for key := range a {
_, found := b[key]
if !found {
into[key] = nil
}
}
return into, nil
}

vendor/github.com/evanphx/json-patch/patch.go (generated, vendored, new file, 696 lines)

@@ -0,0 +1,696 @@
package jsonpatch
import (
"bytes"
"encoding/json"
"fmt"
"strconv"
"strings"
)
const (
eRaw = iota
eDoc
eAry
)
var (
// SupportNegativeIndices decides whether to support non-standard practice of
// allowing negative indices to mean indices starting at the end of an array.
// Default to true.
SupportNegativeIndices bool = true
// AccumulatedCopySizeLimit limits the total size increase in bytes caused by
// "copy" operations in a patch.
AccumulatedCopySizeLimit int64 = 0
)
type lazyNode struct {
raw *json.RawMessage
doc partialDoc
ary partialArray
which int
}
type operation map[string]*json.RawMessage
// Patch is an ordered collection of operations.
type Patch []operation
type partialDoc map[string]*lazyNode
type partialArray []*lazyNode
type container interface {
get(key string) (*lazyNode, error)
set(key string, val *lazyNode) error
add(key string, val *lazyNode) error
remove(key string) error
}
func newLazyNode(raw *json.RawMessage) *lazyNode {
return &lazyNode{raw: raw, doc: nil, ary: nil, which: eRaw}
}
func (n *lazyNode) MarshalJSON() ([]byte, error) {
switch n.which {
case eRaw:
return json.Marshal(n.raw)
case eDoc:
return json.Marshal(n.doc)
case eAry:
return json.Marshal(n.ary)
default:
return nil, fmt.Errorf("Unknown type")
}
}
func (n *lazyNode) UnmarshalJSON(data []byte) error {
dest := make(json.RawMessage, len(data))
copy(dest, data)
n.raw = &dest
n.which = eRaw
return nil
}
func deepCopy(src *lazyNode) (*lazyNode, int, error) {
if src == nil {
return nil, 0, nil
}
a, err := src.MarshalJSON()
if err != nil {
return nil, 0, err
}
sz := len(a)
ra := make(json.RawMessage, sz)
copy(ra, a)
return newLazyNode(&ra), sz, nil
}
func (n *lazyNode) intoDoc() (*partialDoc, error) {
if n.which == eDoc {
return &n.doc, nil
}
if n.raw == nil {
return nil, fmt.Errorf("Unable to unmarshal nil pointer as partial document")
}
err := json.Unmarshal(*n.raw, &n.doc)
if err != nil {
return nil, err
}
n.which = eDoc
return &n.doc, nil
}
func (n *lazyNode) intoAry() (*partialArray, error) {
if n.which == eAry {
return &n.ary, nil
}
if n.raw == nil {
return nil, fmt.Errorf("Unable to unmarshal nil pointer as partial array")
}
err := json.Unmarshal(*n.raw, &n.ary)
if err != nil {
return nil, err
}
n.which = eAry
return &n.ary, nil
}
func (n *lazyNode) compact() []byte {
buf := &bytes.Buffer{}
if n.raw == nil {
return nil
}
err := json.Compact(buf, *n.raw)
if err != nil {
return *n.raw
}
return buf.Bytes()
}
func (n *lazyNode) tryDoc() bool {
if n.raw == nil {
return false
}
err := json.Unmarshal(*n.raw, &n.doc)
if err != nil {
return false
}
n.which = eDoc
return true
}
func (n *lazyNode) tryAry() bool {
if n.raw == nil {
return false
}
err := json.Unmarshal(*n.raw, &n.ary)
if err != nil {
return false
}
n.which = eAry
return true
}
func (n *lazyNode) equal(o *lazyNode) bool {
if n.which == eRaw {
if !n.tryDoc() && !n.tryAry() {
if o.which != eRaw {
return false
}
return bytes.Equal(n.compact(), o.compact())
}
}
if n.which == eDoc {
if o.which == eRaw {
if !o.tryDoc() {
return false
}
}
if o.which != eDoc {
return false
}
for k, v := range n.doc {
ov, ok := o.doc[k]
if !ok {
return false
}
if v == nil && ov == nil {
continue
}
if !v.equal(ov) {
return false
}
}
return true
}
if o.which != eAry && !o.tryAry() {
return false
}
if len(n.ary) != len(o.ary) {
return false
}
for idx, val := range n.ary {
if !val.equal(o.ary[idx]) {
return false
}
}
return true
}
func (o operation) kind() string {
if obj, ok := o["op"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) path() string {
if obj, ok := o["path"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) from() string {
if obj, ok := o["from"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) value() *lazyNode {
if obj, ok := o["value"]; ok {
return newLazyNode(obj)
}
return nil
}
func isArray(buf []byte) bool {
Loop:
for _, c := range buf {
switch c {
case ' ':
case '\n':
case '\t':
continue
case '[':
return true
default:
break Loop
}
}
return false
}
func findObject(pd *container, path string) (container, string) {
doc := *pd
split := strings.Split(path, "/")
if len(split) < 2 {
return nil, ""
}
parts := split[1 : len(split)-1]
key := split[len(split)-1]
var err error
for _, part := range parts {
next, ok := doc.get(decodePatchKey(part))
if next == nil || ok != nil {
return nil, ""
}
if isArray(*next.raw) {
doc, err = next.intoAry()
if err != nil {
return nil, ""
}
} else {
doc, err = next.intoDoc()
if err != nil {
return nil, ""
}
}
}
return doc, decodePatchKey(key)
}
func (d *partialDoc) set(key string, val *lazyNode) error {
(*d)[key] = val
return nil
}
func (d *partialDoc) add(key string, val *lazyNode) error {
(*d)[key] = val
return nil
}
func (d *partialDoc) get(key string) (*lazyNode, error) {
return (*d)[key], nil
}
func (d *partialDoc) remove(key string) error {
_, ok := (*d)[key]
if !ok {
return fmt.Errorf("Unable to remove nonexistent key: %s", key)
}
delete(*d, key)
return nil
}
// set should only be used to implement the "replace" operation, so "key" must
// be an already existing index in "d".
func (d *partialArray) set(key string, val *lazyNode) error {
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
(*d)[idx] = val
return nil
}
func (d *partialArray) add(key string, val *lazyNode) error {
if key == "-" {
*d = append(*d, val)
return nil
}
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
sz := len(*d) + 1
ary := make([]*lazyNode, sz)
cur := *d
if idx >= len(ary) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if SupportNegativeIndices {
if idx < -len(ary) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if idx < 0 {
idx += len(ary)
}
}
copy(ary[0:idx], cur[0:idx])
ary[idx] = val
copy(ary[idx+1:], cur[idx:])
*d = ary
return nil
}
func (d *partialArray) get(key string) (*lazyNode, error) {
idx, err := strconv.Atoi(key)
if err != nil {
return nil, err
}
if idx >= len(*d) {
return nil, fmt.Errorf("Unable to access invalid index: %d", idx)
}
return (*d)[idx], nil
}
func (d *partialArray) remove(key string) error {
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
cur := *d
if idx >= len(cur) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if SupportNegativeIndices {
if idx < -len(cur) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if idx < 0 {
idx += len(cur)
}
}
ary := make([]*lazyNode, len(cur)-1)
copy(ary[0:idx], cur[0:idx])
copy(ary[idx:], cur[idx+1:])
*d = ary
return nil
}
func (p Patch) add(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch add operation does not apply: doc is missing path: \"%s\"", path)
}
return con.add(key, op.value())
}
func (p Patch) remove(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch remove operation does not apply: doc is missing path: \"%s\"", path)
}
return con.remove(key)
}
func (p Patch) replace(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch replace operation does not apply: doc is missing path: %s", path)
}
_, ok := con.get(key)
if ok != nil {
return fmt.Errorf("jsonpatch replace operation does not apply: doc is missing key: %s", path)
}
return con.set(key, op.value())
}
func (p Patch) move(doc *container, op operation) error {
from := op.from()
con, key := findObject(doc, from)
if con == nil {
return fmt.Errorf("jsonpatch move operation does not apply: doc is missing from path: %s", from)
}
val, err := con.get(key)
if err != nil {
return err
}
err = con.remove(key)
if err != nil {
return err
}
path := op.path()
con, key = findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch move operation does not apply: doc is missing destination path: %s", path)
}
return con.add(key, val)
}
func (p Patch) test(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch test operation does not apply: is missing path: %s", path)
}
val, err := con.get(key)
if err != nil {
return err
}
if val == nil {
if op.value().raw == nil {
return nil
}
return fmt.Errorf("Testing value %s failed", path)
} else if op.value() == nil {
return fmt.Errorf("Testing value %s failed", path)
}
if val.equal(op.value()) {
return nil
}
return fmt.Errorf("Testing value %s failed", path)
}
func (p Patch) copy(doc *container, op operation, accumulatedCopySize *int64) error {
from := op.from()
con, key := findObject(doc, from)
if con == nil {
return fmt.Errorf("jsonpatch copy operation does not apply: doc is missing from path: %s", from)
}
val, err := con.get(key)
if err != nil {
return err
}
path := op.path()
con, key = findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch copy operation does not apply: doc is missing destination path: %s", path)
}
valCopy, sz, err := deepCopy(val)
if err != nil {
return err
}
(*accumulatedCopySize) += int64(sz)
if AccumulatedCopySizeLimit > 0 && *accumulatedCopySize > AccumulatedCopySizeLimit {
return NewAccumulatedCopySizeError(AccumulatedCopySizeLimit, *accumulatedCopySize)
}
return con.add(key, valCopy)
}
// Equal reports whether two JSON documents are structurally equal.
func Equal(a, b []byte) bool {
ra := make(json.RawMessage, len(a))
copy(ra, a)
la := newLazyNode(&ra)
rb := make(json.RawMessage, len(b))
copy(rb, b)
lb := newLazyNode(&rb)
return la.equal(lb)
}
// DecodePatch decodes the passed JSON document as an RFC 6902 patch.
func DecodePatch(buf []byte) (Patch, error) {
var p Patch
err := json.Unmarshal(buf, &p)
if err != nil {
return nil, err
}
return p, nil
}
// Apply applies the patch to the given JSON document and returns the new
// document.
func (p Patch) Apply(doc []byte) ([]byte, error) {
return p.ApplyIndent(doc, "")
}
// ApplyIndent applies the patch to the given JSON document and returns the new
// document, indented with the given string.
func (p Patch) ApplyIndent(doc []byte, indent string) ([]byte, error) {
var pd container
if doc[0] == '[' {
pd = &partialArray{}
} else {
pd = &partialDoc{}
}
err := json.Unmarshal(doc, pd)
if err != nil {
return nil, err
}
err = nil
var accumulatedCopySize int64
for _, op := range p {
switch op.kind() {
case "add":
err = p.add(&pd, op)
case "remove":
err = p.remove(&pd, op)
case "replace":
err = p.replace(&pd, op)
case "move":
err = p.move(&pd, op)
case "test":
err = p.test(&pd, op)
case "copy":
err = p.copy(&pd, op, &accumulatedCopySize)
default:
err = fmt.Errorf("Unexpected kind: %s", op.kind())
}
if err != nil {
return nil, err
}
}
if indent != "" {
return json.MarshalIndent(pd, "", indent)
}
return json.Marshal(pd)
}
// From http://tools.ietf.org/html/rfc6901#section-4 :
//
// Evaluation of each reference token begins by decoding any escaped
// character sequence. This is performed by first transforming any
// occurrence of the sequence '~1' to '/', and then transforming any
// occurrence of the sequence '~0' to '~'.
var (
rfc6901Decoder = strings.NewReplacer("~1", "/", "~0", "~")
)
func decodePatchKey(k string) string {
return rfc6901Decoder.Replace(k)
}
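For orientation, here is a minimal sketch of driving this package end to end, assuming the upstream import path `github.com/evanphx/json-patch`; the document and patch literals are illustrative only, not taken from this diff. It exercises `DecodePatch`, `Apply`, and the RFC 6901 key decoding defined just above (`~1` → `/`, `~0` → `~`).

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// "/a~1b" addresses the literal key "a/b" once decodePatchKey
	// has applied the RFC 6901 transforms.
	doc := []byte(`{"a/b": "old", "n": [1, 2]}`)
	patch, err := jsonpatch.DecodePatch([]byte(`[
		{"op": "replace", "path": "/a~1b", "value": "new"},
		{"op": "add", "path": "/n/-", "value": 3}
	]`))
	if err != nil {
		panic(err)
	}
	out, err := patch.Apply(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"a/b":"new","n":[1,2,3]} (keys sorted by encoding/json)
}
```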


@@ -1,7 +0,0 @@
language: go
go:
- 1.3
- 1.4
script:
- go test
- go build


@@ -10,5 +10,6 @@
# Please keep the list sorted.
Sendgrid, Inc
Vastech SA (PTY) LTD
Walter Schulze <awalterschulze@gmail.com>


@@ -1,4 +1,5 @@
Anton Povarov <anton.povarov@gmail.com>
Brian Goff <cpuguy83@gmail.com>
Clayton Coleman <ccoleman@redhat.com>
Denis Smirnov <denis.smirnov.91@gmail.com>
DongYun Kang <ceram1000@gmail.com>
@@ -10,9 +11,12 @@ John Shahid <jvshahid@gmail.com>
John Tuley <john@tuley.org>
Laurent <laurent@adyoulike.com>
Patrick Lee <patrick@dropbox.com>
Roger Johansson <rogeralsing@gmail.com>
Sam Nguyen <sam.nguyen@sendgrid.com>
Sergio Arbeo <serabe@gmail.com>
Stephen J Day <stephen.day@docker.com>
Tamir Duberstein <tamird@gmail.com>
Todd Eisenberger <teisenberger@dropbox.com>
Tormod Erevik Lea <tormodlea@gmail.com>
Vyacheslav Kim <kane@sendgrid.com>
Walter Schulze <awalterschulze@gmail.com>


@@ -174,11 +174,11 @@ func sizeFixed32(x uint64) int {
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return p.EncodeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
func sizeZigzag64(x uint64) int {
return sizeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return sizeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
// EncodeZigzag32 writes a zigzag-encoded 32-bit integer


@@ -73,7 +73,6 @@ for a protocol buffer variable v:
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Getters are only generated for message and oneof fields.
- Enum types do not get an Enum method.
The simplest way to describe this is to see an example.


@@ -193,6 +193,7 @@ type Properties struct {
Default string // default value
HasDefault bool // whether an explicit default was provided
CustomType string
CastType string
StdTime bool
StdDuration bool
@@ -341,6 +342,8 @@ func (p *Properties) Parse(s string) {
p.OrigName = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "customtype="):
p.CustomType = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "casttype="):
p.CastType = strings.Split(f, "=")[1]
case f == "stdtime":
p.StdTime = true
case f == "stdduration":


@@ -522,6 +522,17 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
}
return nil
}
} else if len(props.CastType) > 0 {
if _, ok := v.Interface().(interface {
String() string
}); ok {
switch v.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
_, err := fmt.Fprintf(w, "%d", v.Interface())
return err
}
}
} else if props.StdTime {
t, ok := v.Interface().(time.Time)
if !ok {
@@ -531,9 +542,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
if err != nil {
return err
}
props.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), props)
props.StdTime = true
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), &propsCopy)
return err
} else if props.StdDuration {
d, ok := v.Interface().(time.Duration)
@@ -541,9 +552,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
return fmt.Errorf("stdtime is not time.Duration, but %T", v.Interface())
}
dproto := durationProto(d)
props.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), props)
props.StdDuration = true
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), &propsCopy)
return err
}
}


@@ -983,7 +983,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(uint64(x))
fv.SetUint(x)
return nil
}
case reflect.Uint64:

vendor/github.com/imdario/mergo/.gitignore generated vendored Normal file

@@ -0,0 +1,33 @@
#### joe made this: http://goel.io/joe
#### go ####
# Binaries for programs and plugins
*.exe
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/
#### vim ####
# Swap
[._]*.s[a-v][a-z]
[._]*.sw[a-p]
[._]s[a-v][a-z]
[._]sw[a-p]
# Session
Session.vim
# Temporary
.netrwhist
*~
# Auto-generated tag files
tags


@@ -1,2 +1,7 @@
language: go
install: go get -t
install:
- go get -t
- go get golang.org/x/tools/cmd/cover
- go get github.com/mattn/goveralls
script:
- $HOME/gopath/bin/goveralls -service=travis-ci -repotoken $COVERALLS_TOKEN

vendor/github.com/imdario/mergo/CODE_OF_CONDUCT.md generated vendored Normal file

@@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at i@dario.im. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@@ -2,14 +2,72 @@
A helper to merge structs and maps in Golang. Useful for configuration default values, avoiding messy if-statements.
Also a lovely [comune](http://en.wikipedia.org/wiki/Mergo) (municipality) in the Province of Ancona in the Italian region Marche.
![Mergo dall'alto](http://www.comune.mergo.an.it/Siti/Mergo/Immagini/Foto/mergo_dall_alto.jpg)
Also a lovely [comune](http://en.wikipedia.org/wiki/Mergo) (municipality) in the Province of Ancona in the Italian region of Marche.
## Status
It is ready for production use. It works fine although it may use more of testing. Here some projects in the wild using Mergo:
It is ready for production use. [It is used in several projects by Docker, Google, The Linux Foundation, VMWare, Shopify, etc](https://github.com/imdario/mergo#mergo-in-the-wild).
[![GoDoc][3]][4]
[![GoCard][5]][6]
[![Build Status][1]][2]
[![Coverage Status][7]][8]
[![Sourcegraph][9]][10]
[1]: https://travis-ci.org/imdario/mergo.png
[2]: https://travis-ci.org/imdario/mergo
[3]: https://godoc.org/github.com/imdario/mergo?status.svg
[4]: https://godoc.org/github.com/imdario/mergo
[5]: https://goreportcard.com/badge/imdario/mergo
[6]: https://goreportcard.com/report/github.com/imdario/mergo
[7]: https://coveralls.io/repos/github/imdario/mergo/badge.svg?branch=master
[8]: https://coveralls.io/github/imdario/mergo?branch=master
[9]: https://sourcegraph.com/github.com/imdario/mergo/-/badge.svg
[10]: https://sourcegraph.com/github.com/imdario/mergo?badge
### Latest release
[Release v0.3.4](https://github.com/imdario/mergo/releases/tag/v0.3.4).
### Important note
Please keep in mind that in [0.3.2](//github.com/imdario/mergo/releases/tag/0.3.2) Mergo changed `Merge()` and `Map()` signatures to support [transformers](#transformers). An optional/variadic argument has been added, so it won't break existing code.
If you were using Mergo **before** April 6th 2015, please check your project works as intended after updating your local copy with ```go get -u github.com/imdario/mergo```. I apologize for any issue caused by its previous behavior and any future bug that Mergo could cause (I hope it won't!) in existing projects after the change (release 0.2.0).
### Donations
If Mergo is useful to you, consider buying me a coffee, a beer or making a monthly donation so I can keep building great free software. :heart_eyes:
<a href='https://ko-fi.com/B0B58839' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://az743702.vo.msecnd.net/cdn/kofi1.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
[![Beerpay](https://beerpay.io/imdario/mergo/badge.svg)](https://beerpay.io/imdario/mergo)
[![Beerpay](https://beerpay.io/imdario/mergo/make-wish.svg)](https://beerpay.io/imdario/mergo)
<a href="https://liberapay.com/dario/donate"><img alt="Donate using Liberapay" src="https://liberapay.com/assets/widgets/donate.svg"></a>
### Mergo in the wild
- [moby/moby](https://github.com/moby/moby)
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
- [vmware/dispatch](https://github.com/vmware/dispatch)
- [Shopify/themekit](https://github.com/Shopify/themekit)
- [imdario/zas](https://github.com/imdario/zas)
- [matcornic/hermes](https://github.com/matcornic/hermes)
- [OpenBazaar/openbazaar-go](https://github.com/OpenBazaar/openbazaar-go)
- [kataras/iris](https://github.com/kataras/iris)
- [michaelsauter/crane](https://github.com/michaelsauter/crane)
- [go-task/task](https://github.com/go-task/task)
- [sensu/uchiwa](https://github.com/sensu/uchiwa)
- [ory/hydra](https://github.com/ory/hydra)
- [sisatech/vcli](https://github.com/sisatech/vcli)
- [dairycart/dairycart](https://github.com/dairycart/dairycart)
- [projectcalico/felix](https://github.com/projectcalico/felix)
- [resin-os/balena](https://github.com/resin-os/balena)
- [go-kivik/kivik](https://github.com/go-kivik/kivik)
- [Telefonica/govice](https://github.com/Telefonica/govice)
- [supergiant/supergiant](https://github.com/supergiant/supergiant)
- [SergeyTsalkov/brooce](https://github.com/SergeyTsalkov/brooce)
- [soniah/dnsmadeeasy](https://github.com/soniah/dnsmadeeasy)
- [ohsu-comp-bio/funnel](https://github.com/ohsu-comp-bio/funnel)
- [EagerIO/Stout](https://github.com/EagerIO/Stout)
- [lynndylanhurley/defsynth-api](https://github.com/lynndylanhurley/defsynth-api)
- [russross/canvasassignments](https://github.com/russross/canvasassignments)
@@ -17,12 +75,17 @@ It is ready for production use. It works fine although it may use more of testin
- [casualjim/exeggutor](https://github.com/casualjim/exeggutor)
- [divshot/gitling](https://github.com/divshot/gitling)
- [RWJMurphy/gorl](https://github.com/RWJMurphy/gorl)
[![Build Status][1]][2]
[![GoDoc](https://godoc.org/github.com/imdario/mergo?status.svg)](https://godoc.org/github.com/imdario/mergo)
[1]: https://travis-ci.org/imdario/mergo.png
[2]: https://travis-ci.org/imdario/mergo
- [andrerocker/deploy42](https://github.com/andrerocker/deploy42)
- [elwinar/rambler](https://github.com/elwinar/rambler)
- [tmaiaroto/gopartman](https://github.com/tmaiaroto/gopartman)
- [jfbus/impressionist](https://github.com/jfbus/impressionist)
- [Jmeyering/zealot](https://github.com/Jmeyering/zealot)
- [godep-migrator/rigger-host](https://github.com/godep-migrator/rigger-host)
- [Dronevery/MultiwaySwitch-Go](https://github.com/Dronevery/MultiwaySwitch-Go)
- [thoas/picfit](https://github.com/thoas/picfit)
- [mantasmatelis/whooplist-server](https://github.com/mantasmatelis/whooplist-server)
- [jnuthong/item_search](https://github.com/jnuthong/item_search)
- [bukalapak/snowboard](https://github.com/bukalapak/snowboard)
## Installation
@@ -35,25 +98,116 @@ It is ready for production use. It works fine although it may use more of testin
## Usage
You can only merge same-type structs with exported fields initialized as zero value of their type and same-types maps. Mergo won't merge unexported (private) fields but will do recursively any exported one. Also maps will be merged recursively except for structs inside maps (because they are not addressable using Go reflection).
You can only merge same-type structs with exported fields initialized as zero value of their type and same-types maps. Mergo won't merge unexported (private) fields but will do recursively any exported one. It won't merge empty structs value as [they are not considered zero values](https://golang.org/ref/spec#The_zero_value) either. Also maps will be merged recursively except for structs inside maps (because they are not addressable using Go reflection).
if err := mergo.Merge(&dst, src); err != nil {
// ...
}
```go
if err := mergo.Merge(&dst, src); err != nil {
// ...
}
```
Additionally, you can map a map[string]interface{} to a struct (and otherwise, from struct to map), following the same restrictions as in Merge(). Keys are capitalized to find each corresponding exported field.
Also, you can merge overwriting values using the transformer `WithOverride`.
if err := mergo.Map(&dst, srcMap); err != nil {
// ...
}
```go
if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
// ...
}
```
Warning: if you map a struct to map, it won't do it recursively. Don't expect Mergo to map struct members of your struct as map[string]interface{}. They will be just assigned as values.
Additionally, you can map a `map[string]interface{}` to a struct (and otherwise, from struct to map), following the same restrictions as in `Merge()`. Keys are capitalized to find each corresponding exported field.
```go
if err := mergo.Map(&dst, srcMap); err != nil {
// ...
}
```
Warning: if you map a struct to a map, it won't do so recursively. Don't expect Mergo to map struct members of your struct as `map[string]interface{}`. They will just be assigned as values, as the sketch below illustrates.
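A hedged sketch of that caveat, with hypothetical `Outer`/`Inner` types that are not from the README: going struct → map, field names are stored with a lower-cased first letter (per `changeInitialCase` in `map.go` below), and the nested struct lands in the map as a struct value, not as a nested `map[string]interface{}`.

```go
package main

import (
	"fmt"

	"github.com/imdario/mergo"
)

type Inner struct{ X int }

type Outer struct {
	Name  string
	Inner Inner
}

func main() {
	dst := map[string]interface{}{}
	// Struct -> map: Inner is assigned as an Inner value under the
	// lower-cased key "inner"; it is not flattened into a nested map.
	if err := mergo.Map(&dst, Outer{Name: "a", Inner: Inner{X: 1}}); err != nil {
		panic(err)
	}
	fmt.Printf("%#v\n", dst["inner"]) // main.Inner{X:1}, not map[string]interface{}
}
```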
More information and examples in [godoc documentation](http://godoc.org/github.com/imdario/mergo).
### Nice example
```go
package main
import (
"fmt"
"github.com/imdario/mergo"
)
type Foo struct {
A string
B int64
}
func main() {
src := Foo{
A: "one",
B: 2,
}
dest := Foo{
A: "two",
}
mergo.Merge(&dest, src)
fmt.Println(dest)
// Will print
// {two 2}
}
```
Note: if tests are failing due to a missing package, please execute:
go get gopkg.in/yaml.v1
go get gopkg.in/yaml.v2
### Transformers
Transformers allow you to merge specific types differently from the default behavior. In other words, you can customize how some types are merged. For example, `time.Time` is a struct; it doesn't have a zero value, but `IsZero` can return true because it has fields with zero values. How can we merge a non-zero `time.Time`?
```go
package main
import (
"fmt"
"github.com/imdario/mergo"
"reflect"
"time"
)
type timeTransformer struct {
}
func (t timeTransformer) Transformer(typ reflect.Type) func(dst, src reflect.Value) error {
if typ == reflect.TypeOf(time.Time{}) {
return func(dst, src reflect.Value) error {
if dst.CanSet() {
isZero := dst.MethodByName("IsZero")
result := isZero.Call([]reflect.Value{})
if result[0].Bool() {
dst.Set(src)
}
}
return nil
}
}
return nil
}
type Snapshot struct {
Time time.Time
// ...
}
func main() {
src := Snapshot{time.Now()}
dest := Snapshot{}
mergo.Merge(&dest, src, mergo.WithTransformers(timeTransformer{}))
fmt.Println(dest)
// Will print
// { 2018-01-12 01:15:00 +0000 UTC m=+0.000000001 }
}
```
## Contact me


@@ -31,7 +31,8 @@ func isExported(field reflect.StructField) bool {
// Traverses recursively both values, assigning src's fields values to dst.
// The map argument tracks comparisons that have already been seen, which allows
// short circuiting on recursive types.
func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err error) {
func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (err error) {
overwrite := config.Overwrite
if dst.CanAddr() {
addr := dst.UnsafeAddr()
h := 17 * addr
@@ -57,10 +58,17 @@ func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err
}
fieldName := field.Name
fieldName = changeInitialCase(fieldName, unicode.ToLower)
if v, ok := dstMap[fieldName]; !ok || isEmptyValue(reflect.ValueOf(v)) {
if v, ok := dstMap[fieldName]; !ok || (isEmptyValue(reflect.ValueOf(v)) || overwrite) {
dstMap[fieldName] = src.Field(i).Interface()
}
}
case reflect.Ptr:
if dst.IsNil() {
v := reflect.New(dst.Type().Elem())
dst.Set(v)
}
dst = dst.Elem()
fallthrough
case reflect.Struct:
srcMap := src.Interface().(map[string]interface{})
for key := range srcMap {
@@ -85,21 +93,24 @@ func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err
srcKind = reflect.Ptr
}
}
if !srcElement.IsValid() {
continue
}
if srcKind == dstKind {
if err = deepMerge(dstElement, srcElement, visited, depth+1); err != nil {
if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
return
}
} else if dstKind == reflect.Interface && dstElement.Kind() == reflect.Interface {
if err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
return
}
} else if srcKind == reflect.Map {
if err = deepMap(dstElement, srcElement, visited, depth+1, config); err != nil {
return
}
} else {
if srcKind == reflect.Map {
if err = deepMap(dstElement, srcElement, visited, depth+1); err != nil {
return
}
} else {
return fmt.Errorf("type mismatch on %s field: found %v, expected %v", fieldName, srcKind, dstKind)
}
return fmt.Errorf("type mismatch on %s field: found %v, expected %v", fieldName, srcKind, dstKind)
}
}
}
@@ -117,18 +128,35 @@ func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err
// doesn't apply if dst is a map.
// This is separated method from Merge because it is cleaner and it keeps sane
// semantics: merging equal types, mapping different (restricted) types.
func Map(dst, src interface{}) error {
func Map(dst, src interface{}, opts ...func(*Config)) error {
return _map(dst, src, opts...)
}
// MapWithOverwrite will do the same as Map except that non-empty dst attributes will be overridden by
// non-empty src attribute values.
// Deprecated: Use Map(…) with WithOverride
func MapWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
return _map(dst, src, append(opts, WithOverride)...)
}
func _map(dst, src interface{}, opts ...func(*Config)) error {
var (
vDst, vSrc reflect.Value
err error
)
config := &Config{}
for _, opt := range opts {
opt(config)
}
if vDst, vSrc, err = resolveValues(dst, src); err != nil {
return err
}
// To be friction-less, we redirect equal-type arguments
// to deepMerge. Only because arguments can be anything.
if vSrc.Kind() == vDst.Kind() {
return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0)
return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
}
switch vSrc.Kind() {
case reflect.Struct:
@@ -142,5 +170,5 @@ func Map(dst, src interface{}) error {
default:
return ErrNotSupported
}
return deepMap(vDst, vSrc, make(map[uintptr]*visit), 0)
return deepMap(vDst, vSrc, make(map[uintptr]*visit), 0, config)
}


@@ -12,10 +12,34 @@ import (
"reflect"
)
func hasExportedField(dst reflect.Value) (exported bool) {
for i, n := 0, dst.NumField(); i < n; i++ {
field := dst.Type().Field(i)
if field.Anonymous && dst.Field(i).Kind() == reflect.Struct {
exported = exported || hasExportedField(dst.Field(i))
} else {
exported = exported || len(field.PkgPath) == 0
}
}
return
}
type Config struct {
Overwrite bool
AppendSlice bool
Transformers Transformers
}
type Transformers interface {
Transformer(reflect.Type) func(dst, src reflect.Value) error
}
// Traverses recursively both values, assigning src's fields values to dst.
// The map argument tracks comparisons that have already been seen, which allows
// short circuiting on recursive types.
func deepMerge(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err error) {
func deepMerge(dst, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (err error) {
overwrite := config.Overwrite
if !src.IsValid() {
return
}
@@ -32,68 +56,190 @@ func deepMerge(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (e
// Remember, remember...
visited[h] = &visit{addr, typ, seen}
}
if config.Transformers != nil && !isEmptyValue(dst) {
if fn := config.Transformers.Transformer(dst.Type()); fn != nil {
err = fn(dst, src)
return
}
}
switch dst.Kind() {
case reflect.Struct:
for i, n := 0, dst.NumField(); i < n; i++ {
if err = deepMerge(dst.Field(i), src.Field(i), visited, depth+1); err != nil {
return
if hasExportedField(dst) {
for i, n := 0, dst.NumField(); i < n; i++ {
if err = deepMerge(dst.Field(i), src.Field(i), visited, depth+1, config); err != nil {
return
}
}
} else {
if dst.CanSet() && !isEmptyValue(src) && (overwrite || isEmptyValue(dst)) {
dst.Set(src)
}
}
case reflect.Map:
if dst.IsNil() && !src.IsNil() {
dst.Set(reflect.MakeMap(dst.Type()))
}
for _, key := range src.MapKeys() {
srcElement := src.MapIndex(key)
if !srcElement.IsValid() {
continue
}
dstElement := dst.MapIndex(key)
switch reflect.TypeOf(srcElement.Interface()).Kind() {
case reflect.Struct:
switch srcElement.Kind() {
case reflect.Chan, reflect.Func, reflect.Map, reflect.Interface, reflect.Slice:
if srcElement.IsNil() {
continue
}
fallthrough
case reflect.Map:
if err = deepMerge(dstElement, srcElement, visited, depth+1); err != nil {
return
default:
if !srcElement.CanInterface() {
continue
}
switch reflect.TypeOf(srcElement.Interface()).Kind() {
case reflect.Struct:
fallthrough
case reflect.Ptr:
fallthrough
case reflect.Map:
srcMapElm := srcElement
dstMapElm := dstElement
if srcMapElm.CanInterface() {
srcMapElm = reflect.ValueOf(srcMapElm.Interface())
if dstMapElm.IsValid() {
dstMapElm = reflect.ValueOf(dstMapElm.Interface())
}
}
if err = deepMerge(dstMapElm, srcMapElm, visited, depth+1, config); err != nil {
return
}
case reflect.Slice:
srcSlice := reflect.ValueOf(srcElement.Interface())
var dstSlice reflect.Value
if !dstElement.IsValid() || dstElement.IsNil() {
dstSlice = reflect.MakeSlice(srcSlice.Type(), 0, srcSlice.Len())
} else {
dstSlice = reflect.ValueOf(dstElement.Interface())
}
if !isEmptyValue(src) && (overwrite || isEmptyValue(dst)) && !config.AppendSlice {
dstSlice = srcSlice
} else if config.AppendSlice {
dstSlice = reflect.AppendSlice(dstSlice, srcSlice)
}
dst.SetMapIndex(key, dstSlice)
}
}
if !dstElement.IsValid() {
if dstElement.IsValid() && reflect.TypeOf(srcElement.Interface()).Kind() == reflect.Map {
continue
}
if srcElement.IsValid() && (overwrite || (!dstElement.IsValid() || isEmptyValue(dstElement))) {
if dst.IsNil() {
dst.Set(reflect.MakeMap(dst.Type()))
}
dst.SetMapIndex(key, srcElement)
}
}
case reflect.Slice:
if !dst.CanSet() {
break
}
if !isEmptyValue(src) && (overwrite || isEmptyValue(dst)) && !config.AppendSlice {
dst.Set(src)
} else if config.AppendSlice {
dst.Set(reflect.AppendSlice(dst, src))
}
case reflect.Ptr:
fallthrough
case reflect.Interface:
if src.IsNil() {
break
} else if dst.IsNil() {
if dst.CanSet() && isEmptyValue(dst) {
}
if src.Kind() != reflect.Interface {
if dst.IsNil() || overwrite {
if dst.CanSet() && (overwrite || isEmptyValue(dst)) {
dst.Set(src)
}
} else if src.Kind() == reflect.Ptr {
if err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
return
}
} else if dst.Elem().Type() == src.Type() {
if err = deepMerge(dst.Elem(), src, visited, depth+1, config); err != nil {
return
}
} else {
return ErrDifferentArgumentsTypes
}
break
}
if dst.IsNil() || overwrite {
if dst.CanSet() && (overwrite || isEmptyValue(dst)) {
dst.Set(src)
}
} else if err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1); err != nil {
} else if err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
return
}
default:
if dst.CanSet() && !isEmptyValue(src) {
if dst.CanSet() && !isEmptyValue(src) && (overwrite || isEmptyValue(dst)) {
dst.Set(src)
}
}
return
}
// Merge sets fields' values in dst from src if they have a zero
// value of their type.
// dst and src must be valid same-type structs and dst must be
// a pointer to struct.
// It won't merge unexported (private) fields and will do recursively
// any exported field.
func Merge(dst, src interface{}) error {
// Merge will fill any empty for value type attributes on the dst struct using corresponding
// src attributes if they themselves are not empty. dst and src must be valid same-type structs
// and dst must be a pointer to struct.
// It won't merge unexported (private) fields and will do recursively any exported field.
func Merge(dst, src interface{}, opts ...func(*Config)) error {
return merge(dst, src, opts...)
}
// MergeWithOverwrite will do the same as Merge except that non-empty dst attributes will be overridden by
// non-empty src attribute values.
// Deprecated: use Merge(…) with WithOverride
func MergeWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
return merge(dst, src, append(opts, WithOverride)...)
}
// WithTransformers adds transformers to merge, allowing to customize the merging of some types.
func WithTransformers(transformers Transformers) func(*Config) {
return func(config *Config) {
config.Transformers = transformers
}
}
// WithOverride will make merge override non-empty dst attributes with non-empty src attributes values.
func WithOverride(config *Config) {
config.Overwrite = true
}
// WithAppendSlice will make merge append slices instead of overwriting them
func WithAppendSlice(config *Config) {
config.AppendSlice = true
}
func merge(dst, src interface{}, opts ...func(*Config)) error {
var (
vDst, vSrc reflect.Value
err error
)
config := &Config{}
for _, opt := range opts {
opt(config)
}
if vDst, vSrc, err = resolveValues(dst, src); err != nil {
return err
}
if vDst.Type() != vSrc.Type() {
return ErrDifferentArgumentsTypes
}
return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0)
return deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
}
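Taken together, the new option functions compose as ordinary variadic arguments. A brief sketch under the v0.3.x semantics shown above, with hypothetical `Conf` values: `WithOverride` lets non-empty src fields win, and `WithAppendSlice` appends slices instead of replacing them.

```go
package main

import (
	"fmt"

	"github.com/imdario/mergo"
)

type Conf struct {
	Hosts []string
	Port  int
}

func main() {
	dst := Conf{Hosts: []string{"a"}, Port: 80}
	src := Conf{Hosts: []string{"b"}, Port: 8080}
	// Options are applied in order onto a shared Config before merging.
	if err := mergo.Merge(&dst, src, mergo.WithOverride, mergo.WithAppendSlice); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", dst) // {Hosts:[a b] Port:8080}
}
```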


@@ -32,7 +32,7 @@ type visit struct {
next *visit
}
// From src/pkg/encoding/json.
// From src/pkg/encoding/json/encode.go.
func isEmptyValue(v reflect.Value) bool {
switch v.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
@@ -46,7 +46,14 @@ func isEmptyValue(v reflect.Value) bool {
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Interface, reflect.Ptr:
if v.IsNil() {
return true
}
return isEmptyValue(v.Elem())
case reflect.Func:
return v.IsNil()
case reflect.Invalid:
return true
}
return false
}


@@ -1,12 +1,6 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
name = "github.com/json-iterator/go"
packages = ["."]
revision = "ca39e5af3ece67bbcda3d0f4f56a8e24d9f2dad4"
version = "1.1.3"
[[projects]]
name = "github.com/modern-go/concurrent"
packages = ["."]
@@ -16,12 +10,12 @@
[[projects]]
name = "github.com/modern-go/reflect2"
packages = ["."]
revision = "1df9eeb2bb81f327b96228865c5687bc2194af3f"
version = "1.0.0"
revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd"
version = "1.0.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "56a0b9e9e61d2bc8af5e1b68537401b7f4d60805eda3d107058f3171aa5cf793"
inputs-digest = "ea54a775e5a354cb015502d2e7aa4b74230fc77e894f34a838b268c25ec8eeb8"
solver-name = "gps-cdcl"
solver-version = 1


@@ -23,4 +23,4 @@ ignored = ["github.com/davecgh/go-spew*","github.com/google/gofuzz*","github.com
[[constraint]]
name = "github.com/modern-go/reflect2"
version = "1.0.0"
version = "1.0.1"


@@ -6,6 +6,7 @@ go:
before_install:
- go get -t -v ./...
- go get -t -v github.com/modern-go/reflect2-tests/...
script:
- ./test.sh


@@ -24,7 +24,7 @@
# go-tests = true
# unused-packages = true
ignored = ["github.com/modern-go/test","github.com/modern-go/test/must","github.com/modern-go/test/should"]
ignored = []
[[constraint]]
name = "github.com/modern-go/concurrent"


@@ -150,6 +150,9 @@ func (cfg *frozenConfig) TypeOf(obj interface{}) Type {
}
func (cfg *frozenConfig) Type2(type1 reflect.Type) Type {
if type1 == nil {
return nil
}
cacheKey := uintptr(unpackEFace(type1).data)
typeObj, found := cfg.cache.Load(cacheKey)
if found {


@@ -3,7 +3,7 @@
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
for d in $(go list github.com/modern-go/reflect2-tests/... | grep -v vendor); do
go test -coverprofile=profile.out -coverpkg=github.com/modern-go/reflect2 $d
if [ -f profile.out ]; then
cat profile.out >> coverage.txt


@@ -4,6 +4,7 @@ import (
"reflect"
"runtime"
"strings"
"sync"
"unsafe"
)
@@ -15,10 +16,17 @@ func typelinks1() [][]unsafe.Pointer
//go:linkname typelinks2 reflect.typelinks
func typelinks2() (sections []unsafe.Pointer, offset [][]int32)
var types = map[string]reflect.Type{}
var packages = map[string]map[string]reflect.Type{}
// initOnce guards initialization of types and packages
var initOnce sync.Once
var types map[string]reflect.Type
var packages map[string]map[string]reflect.Type
// discoverTypes initializes types and packages
func discoverTypes() {
types = make(map[string]reflect.Type)
packages = make(map[string]map[string]reflect.Type)
func init() {
ver := runtime.Version()
if ver == "go1.5" || strings.HasPrefix(ver, "go1.5.") {
loadGo15Types()
@@ -90,11 +98,13 @@ type emptyInterface struct {
// TypeByName return the type by its name, just like Class.forName in java
func TypeByName(typeName string) Type {
initOnce.Do(discoverTypes)
return Type2(types[typeName])
}
// TypeByPackageName return the type by its package and name
func TypeByPackageName(pkgPath string, name string) Type {
initOnce.Do(discoverTypes)
pkgTypes := packages[pkgPath]
if pkgTypes == nil {
return nil


@@ -108,9 +108,7 @@ func ReadPassword(fd int) ([]byte, error) {
return nil, err
}
defer func() {
unix.IoctlSetTermios(fd, ioctlWriteTermios, termios)
}()
defer unix.IoctlSetTermios(fd, ioctlWriteTermios, termios)
return readPasswordLine(passwordReader(fd))
}


@@ -14,7 +14,7 @@ import (
// State contains the state of a terminal.
type State struct {
state *unix.Termios
termios unix.Termios
}
// IsTerminal returns true if the given file descriptor is a terminal.
@@ -75,47 +75,43 @@ func ReadPassword(fd int) ([]byte, error) {
// restored.
// see http://cr.illumos.org/~webrev/andy_js/1060/
func MakeRaw(fd int) (*State, error) {
oldTermiosPtr, err := unix.IoctlGetTermios(fd, unix.TCGETS)
termios, err := unix.IoctlGetTermios(fd, unix.TCGETS)
if err != nil {
return nil, err
}
oldTermios := *oldTermiosPtr
newTermios := oldTermios
newTermios.Iflag &^= syscall.IGNBRK | syscall.BRKINT | syscall.PARMRK | syscall.ISTRIP | syscall.INLCR | syscall.IGNCR | syscall.ICRNL | syscall.IXON
newTermios.Oflag &^= syscall.OPOST
newTermios.Lflag &^= syscall.ECHO | syscall.ECHONL | syscall.ICANON | syscall.ISIG | syscall.IEXTEN
newTermios.Cflag &^= syscall.CSIZE | syscall.PARENB
newTermios.Cflag |= syscall.CS8
newTermios.Cc[unix.VMIN] = 1
newTermios.Cc[unix.VTIME] = 0
oldState := State{termios: *termios}
if err := unix.IoctlSetTermios(fd, unix.TCSETS, &newTermios); err != nil {
termios.Iflag &^= unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON
termios.Oflag &^= unix.OPOST
termios.Lflag &^= unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN
termios.Cflag &^= unix.CSIZE | unix.PARENB
termios.Cflag |= unix.CS8
termios.Cc[unix.VMIN] = 1
termios.Cc[unix.VTIME] = 0
if err := unix.IoctlSetTermios(fd, unix.TCSETS, termios); err != nil {
return nil, err
}
return &State{
state: oldTermiosPtr,
}, nil
return &oldState, nil
}
// Restore restores the terminal connected to the given file descriptor to a
// previous state.
func Restore(fd int, oldState *State) error {
return unix.IoctlSetTermios(fd, unix.TCSETS, oldState.state)
return unix.IoctlSetTermios(fd, unix.TCSETS, &oldState.termios)
}
// GetState returns the current state of a terminal which may be useful to
// restore the terminal after a signal.
func GetState(fd int) (*State, error) {
oldTermiosPtr, err := unix.IoctlGetTermios(fd, unix.TCGETS)
termios, err := unix.IoctlGetTermios(fd, unix.TCGETS)
if err != nil {
return nil, err
}
return &State{
state: oldTermiosPtr,
}, nil
return &State{termios: *termios}, nil
}
// GetSize returns the dimensions of the given terminal.


@@ -89,9 +89,7 @@ func ReadPassword(fd int) ([]byte, error) {
return nil, err
}
defer func() {
windows.SetConsoleMode(windows.Handle(fd), old)
}()
defer windows.SetConsoleMode(windows.Handle(fd), old)
var h windows.Handle
p, _ := windows.GetCurrentProcess()


@@ -5,6 +5,8 @@
// Package context defines the Context type, which carries deadlines,
// cancelation signals, and other request-scoped values across API boundaries
// and between processes.
// As of Go 1.7 this package is available in the standard library under the
// name context. https://golang.org/pkg/context.
//
// Incoming requests to a server should create a Context, and outgoing calls to
// servers should accept a Context. The chain of function calls between must

vendor/golang.org/x/net/http/httpguts/guts.go generated vendored Normal file

@@ -0,0 +1,50 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package httpguts provides functions implementing various details
// of the HTTP specification.
//
// This package is shared by the standard library (which vendors it)
// and x/net/http2. It comes with no API stability promise.
package httpguts
import (
"net/textproto"
"strings"
)
// ValidTrailerHeader reports whether name is a valid header field name to appear
// in trailers.
// See RFC 7230, Section 4.1.2
func ValidTrailerHeader(name string) bool {
name = textproto.CanonicalMIMEHeaderKey(name)
if strings.HasPrefix(name, "If-") || badTrailer[name] {
return false
}
return true
}
var badTrailer = map[string]bool{
"Authorization": true,
"Cache-Control": true,
"Connection": true,
"Content-Encoding": true,
"Content-Length": true,
"Content-Range": true,
"Content-Type": true,
"Expect": true,
"Host": true,
"Keep-Alive": true,
"Max-Forwards": true,
"Pragma": true,
"Proxy-Authenticate": true,
"Proxy-Authorization": true,
"Proxy-Connection": true,
"Range": true,
"Realm": true,
"Te": true,
"Trailer": true,
"Transfer-Encoding": true,
"Www-Authenticate": true,
}


@@ -2,12 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package httplex contains rules around lexical matters of various
// HTTP-related specifications.
//
// This package is shared by the standard library (which vendors it)
// and x/net/http2. It comes with no API stability promise.
package httplex
package httpguts
import (
"net"


@@ -5,7 +5,7 @@
package http2
// A list of the possible cipher suite ids. Taken from
// http://www.iana.org/assignments/tls-parameters/tls-parameters.txt
// https://www.iana.org/assignments/tls-parameters/tls-parameters.txt
const (
cipher_TLS_NULL_WITH_NULL_NULL uint16 = 0x0000


@@ -52,9 +52,31 @@ const (
noDialOnMiss = false
)
// shouldTraceGetConn reports whether getClientConn should call any
// ClientTrace.GetConn hook associated with the http.Request.
//
// This complexity is needed to avoid double calls of the GetConn hook
// during the back-and-forth between net/http and x/net/http2 (when the
// net/http.Transport is upgraded to also speak http2), as well as support
// the case where x/net/http2 is being used directly.
func (p *clientConnPool) shouldTraceGetConn(st clientConnIdleState) bool {
// If our Transport wasn't made via ConfigureTransport, always
// trace the GetConn hook if provided, because that means the
// http2 package is being used directly and it's the one
// dialing, as opposed to net/http.
if _, ok := p.t.ConnPool.(noDialClientConnPool); !ok {
return true
}
// Otherwise, only use the GetConn hook if this connection has
// been used previously for other requests. For fresh
// connections, the net/http package does the dialing.
return !st.freshConn
}
func (p *clientConnPool) getClientConn(req *http.Request, addr string, dialOnMiss bool) (*ClientConn, error) {
if isConnectionCloseRequest(req) && dialOnMiss {
// It gets its own connection.
traceGetConn(req, addr)
const singleUse = true
cc, err := p.t.dialClientConn(addr, singleUse)
if err != nil {
@@ -64,7 +86,10 @@ func (p *clientConnPool) getClientConn(req *http.Request, addr string, dialOnMis
}
p.mu.Lock()
for _, cc := range p.conns[addr] {
if cc.CanTakeNewRequest() {
if st := cc.idleState(); st.canTakeNewRequest {
if p.shouldTraceGetConn(st) {
traceGetConn(req, addr)
}
p.mu.Unlock()
return cc, nil
}
@@ -73,6 +98,7 @@ func (p *clientConnPool) getClientConn(req *http.Request, addr string, dialOnMis
p.mu.Unlock()
return nil, ErrNoCachedConn
}
traceGetConn(req, addr)
call := p.getStartDialLocked(addr)
p.mu.Unlock()
<-call.done


@@ -57,7 +57,7 @@ func configureTransport(t1 *http.Transport) (*Transport, error) {
// registerHTTPSProtocol calls Transport.RegisterProtocol but
// converting panics into errors.
func registerHTTPSProtocol(t *http.Transport, rt http.RoundTripper) (err error) {
func registerHTTPSProtocol(t *http.Transport, rt noDialH2RoundTripper) (err error) {
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("%v", e)
@@ -69,11 +69,13 @@ func registerHTTPSProtocol(t *http.Transport, rt http.RoundTripper) (err error)
// noDialH2RoundTripper is a RoundTripper which only tries to complete the request
// if there's already a cached connection to the host.
type noDialH2RoundTripper struct{ t *Transport }
// (The field is exported so it can be accessed via reflect from net/http; tested
// by TestNoDialH2RoundTripperType)
type noDialH2RoundTripper struct{ *Transport }
func (rt noDialH2RoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
res, err := rt.t.RoundTrip(req)
if err == ErrNoCachedConn {
res, err := rt.Transport.RoundTrip(req)
if isNoCachedConnError(err) {
return nil, http.ErrSkipAltProtocol
}
return res, err


@@ -41,10 +41,10 @@ func (f *flow) take(n int32) {
// add adds n bytes (positive or negative) to the flow control window.
// It returns false if the sum would exceed 2^31-1.
func (f *flow) add(n int32) bool {
remain := (1<<31 - 1) - f.n
if n > remain {
return false
sum := f.n + n
if (sum > n) == (f.n > 0) {
f.n = sum
return true
}
f.n += n
return true
return false
}


@@ -14,8 +14,8 @@ import (
"strings"
"sync"
"golang.org/x/net/http/httpguts"
"golang.org/x/net/http2/hpack"
"golang.org/x/net/lex/httplex"
)
const frameHeaderLen = 9
@@ -733,32 +733,67 @@ func (f *SettingsFrame) IsAck() bool {
return f.FrameHeader.Flags.Has(FlagSettingsAck)
}
func (f *SettingsFrame) Value(s SettingID) (v uint32, ok bool) {
func (f *SettingsFrame) Value(id SettingID) (v uint32, ok bool) {
f.checkValid()
buf := f.p
for len(buf) > 0 {
settingID := SettingID(binary.BigEndian.Uint16(buf[:2]))
if settingID == s {
return binary.BigEndian.Uint32(buf[2:6]), true
for i := 0; i < f.NumSettings(); i++ {
if s := f.Setting(i); s.ID == id {
return s.Val, true
}
buf = buf[6:]
}
return 0, false
}
// Setting returns the setting from the frame at the given 0-based index.
// The index must be >= 0 and less than f.NumSettings().
func (f *SettingsFrame) Setting(i int) Setting {
buf := f.p
return Setting{
ID: SettingID(binary.BigEndian.Uint16(buf[i*6 : i*6+2])),
Val: binary.BigEndian.Uint32(buf[i*6+2 : i*6+6]),
}
}
func (f *SettingsFrame) NumSettings() int { return len(f.p) / 6 }
// HasDuplicates reports whether f contains any duplicate setting IDs.
func (f *SettingsFrame) HasDuplicates() bool {
num := f.NumSettings()
if num == 0 {
return false
}
// If it's small enough (the common case), just do the n^2
// thing and avoid a map allocation.
if num < 10 {
for i := 0; i < num; i++ {
idi := f.Setting(i).ID
for j := i + 1; j < num; j++ {
idj := f.Setting(j).ID
if idi == idj {
return true
}
}
}
return false
}
seen := map[SettingID]bool{}
for i := 0; i < num; i++ {
id := f.Setting(i).ID
if seen[id] {
return true
}
seen[id] = true
}
return false
}
// ForeachSetting runs fn for each setting.
// It stops and returns the first error.
func (f *SettingsFrame) ForeachSetting(fn func(Setting) error) error {
f.checkValid()
buf := f.p
for len(buf) > 0 {
if err := fn(Setting{
SettingID(binary.BigEndian.Uint16(buf[:2])),
binary.BigEndian.Uint32(buf[2:6]),
}); err != nil {
for i := 0; i < f.NumSettings(); i++ {
if err := fn(f.Setting(i)); err != nil {
return err
}
buf = buf[6:]
}
return nil
}
@@ -1462,7 +1497,7 @@ func (fr *Framer) readMetaFrame(hf *HeadersFrame) (*MetaHeadersFrame, error) {
if VerboseLogs && fr.logReads {
fr.debugReadLoggerf("http2: decoded hpack field %+v", hf)
}
if !httplex.ValidHeaderFieldValue(hf.Value) {
if !httpguts.ValidHeaderFieldValue(hf.Value) {
invalid = headerFieldValueError(hf.Value)
}
isPseudo := strings.HasPrefix(hf.Name, ":")

vendor/golang.org/x/net/http2/go111.go generated vendored Normal file

@@ -0,0 +1,26 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build go1.11
package http2
import "net/textproto"
func traceHasWroteHeaderField(trace *clientTrace) bool {
return trace != nil && trace.WroteHeaderField != nil
}
func traceWroteHeaderField(trace *clientTrace, k, v string) {
if trace != nil && trace.WroteHeaderField != nil {
trace.WroteHeaderField(k, []string{v})
}
}
func traceGot1xxResponseFunc(trace *clientTrace) func(int, textproto.MIMEHeader) error {
if trace != nil {
return trace.Got1xxResponse
}
return nil
}


@@ -18,6 +18,8 @@ type contextContext interface {
context.Context
}
var errCanceled = context.Canceled
func serverConnBaseContext(c net.Conn, opts *ServeConnOpts) (ctx contextContext, cancel func()) {
ctx, cancel = context.WithCancel(context.Background())
ctx = context.WithValue(ctx, http.LocalAddrContextKey, c.LocalAddr())
@@ -48,6 +50,14 @@ func (t *Transport) idleConnTimeout() time.Duration {
func setResponseUncompressed(res *http.Response) { res.Uncompressed = true }
func traceGetConn(req *http.Request, hostPort string) {
trace := httptrace.ContextClientTrace(req.Context())
if trace == nil || trace.GetConn == nil {
return
}
trace.GetConn(hostPort)
}
func traceGotConn(req *http.Request, cc *ClientConn) {
trace := httptrace.ContextClientTrace(req.Context())
if trace == nil || trace.GotConn == nil {
@@ -104,3 +114,8 @@ func requestTrace(req *http.Request) *clientTrace {
func (cc *ClientConn) Ping(ctx context.Context) error {
return cc.ping(ctx)
}
// Shutdown gracefully closes the client connection, waiting for running streams to complete.
func (cc *ClientConn) Shutdown(ctx context.Context) error {
return cc.shutdown(ctx)
}


@@ -206,7 +206,7 @@ func appendVarInt(dst []byte, n byte, i uint64) []byte {
}
// appendHpackString appends s, as encoded in "String Literal"
// representation, to dst and returns the the extended buffer.
// representation, to dst and returns the extended buffer.
//
// s will be encoded in Huffman codes only when it produces strictly
// shorter byte string.


@@ -389,6 +389,12 @@ func (d *Decoder) callEmit(hf HeaderField) error {
// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseDynamicTableSizeUpdate() error {
// RFC 7541, sec 4.2: This dynamic table size update MUST occur at the
// beginning of the first header block following the change to the dynamic table size.
if d.dynTab.size > 0 {
return DecodingError{errors.New("dynamic table size update MUST occur at the beginning of a header block")}
}
buf := d.buf
size, buf, err := readVarInt(5, buf)
if err != nil {


@@ -47,6 +47,7 @@ var ErrInvalidHuffman = errors.New("hpack: invalid Huffman-encoded data")
// If maxLen is greater than 0, attempts to write more to buf than
// maxLen bytes will return ErrStringLength.
func huffmanDecode(buf *bytes.Buffer, maxLen int, v []byte) error {
rootHuffmanNode := getRootHuffmanNode()
n := rootHuffmanNode
// cur is the bit buffer that has not been fed into n.
// cbits is the number of low order bits in cur that are valid.
@@ -106,7 +107,7 @@ func huffmanDecode(buf *bytes.Buffer, maxLen int, v []byte) error {
type node struct {
// children is non-nil for internal nodes
children []*node
children *[256]*node
// The following are only valid if children is nil:
codeLen uint8 // number of bits that led to the output of sym
@@ -114,22 +115,31 @@ type node struct {
}
func newInternalNode() *node {
return &node{children: make([]*node, 256)}
return &node{children: new([256]*node)}
}
var rootHuffmanNode = newInternalNode()
var (
buildRootOnce sync.Once
lazyRootHuffmanNode *node
)
func init() {
func getRootHuffmanNode() *node {
buildRootOnce.Do(buildRootHuffmanNode)
return lazyRootHuffmanNode
}
func buildRootHuffmanNode() {
if len(huffmanCodes) != 256 {
panic("unexpected size")
}
lazyRootHuffmanNode = newInternalNode()
for i, code := range huffmanCodes {
addDecoderNode(byte(i), code, huffmanCodeLen[i])
}
}
func addDecoderNode(sym byte, code uint32, codeLen uint8) {
cur := rootHuffmanNode
cur := lazyRootHuffmanNode
for codeLen > 8 {
codeLen -= 8
i := uint8(code >> codeLen)

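Both this change and the reflect2 one above move work out of `init()` and behind `sync.Once`, so the cost is paid once and only on first use. A generic sketch of that lazy-initialization pattern (illustrative names, not the hpack table):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	tableOnce sync.Once
	lazyTable map[byte]int // built on first use instead of in init()
)

func getTable() map[byte]int {
	tableOnce.Do(func() {
		lazyTable = make(map[byte]int, 256)
		for i := 0; i < 256; i++ {
			lazyTable[byte(i)] = i // stand-in for the real construction work
		}
	})
	return lazyTable
}

func main() {
	fmt.Println(getTable()[0x41]) // 65; the table is built exactly once, on demand
}
```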

@@ -29,7 +29,7 @@ import (
"strings"
"sync"
"golang.org/x/net/lex/httplex"
"golang.org/x/net/http/httpguts"
)
var (
@@ -179,7 +179,7 @@ var (
)
// validWireHeaderFieldName reports whether v is a valid header field
// name (key). See httplex.ValidHeaderName for the base rules.
// name (key). See httpguts.ValidHeaderName for the base rules.
//
// Further, http2 says:
// "Just as in HTTP/1.x, header field names are strings of ASCII
@@ -191,7 +191,7 @@ func validWireHeaderFieldName(v string) bool {
return false
}
for _, r := range v {
if !httplex.IsTokenRune(r) {
if !httpguts.IsTokenRune(r) {
return false
}
if 'A' <= r && r <= 'Z' {
@@ -312,7 +312,7 @@ func mustUint31(v int32) uint32 {
}
// bodyAllowedForStatus reports whether a given response status code
// permits a body. See RFC 2616, section 4.4.
// permits a body. See RFC 7230, section 3.3.
func bodyAllowedForStatus(status int) bool {
switch {
case status >= 100 && status <= 199:

vendor/golang.org/x/net/http2/not_go111.go generated vendored Normal file

@@ -0,0 +1,17 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !go1.11
package http2
import "net/textproto"
func traceHasWroteHeaderField(trace *clientTrace) bool { return false }
func traceWroteHeaderField(trace *clientTrace, k, v string) {}
func traceGot1xxResponseFunc(trace *clientTrace) func(int, textproto.MIMEHeader) error {
return nil
}


@@ -8,6 +8,7 @@ package http2
import (
"crypto/tls"
"errors"
"net"
"net/http"
"time"
@@ -18,6 +19,8 @@ type contextContext interface {
Err() error
}
var errCanceled = errors.New("canceled")
type fakeContext struct{}
func (fakeContext) Done() <-chan struct{} { return nil }
@@ -34,6 +37,7 @@ func setResponseUncompressed(res *http.Response) {
type clientTrace struct{}
func requestTrace(*http.Request) *clientTrace { return nil }
func traceGetConn(*http.Request, string) {}
func traceGotConn(*http.Request, *ClientConn) {}
func traceFirstResponseByte(*clientTrace) {}
func traceWroteHeaders(*clientTrace) {}
@@ -84,4 +88,8 @@ func (cc *ClientConn) Ping(ctx contextContext) error {
return cc.ping(ctx)
}
func (cc *ClientConn) Shutdown(ctx contextContext) error {
return cc.shutdown(ctx)
}
func (t *Transport) idleConnTimeout() time.Duration { return 0 }


@@ -46,14 +46,16 @@ import (
"sync"
"time"
"golang.org/x/net/http/httpguts"
"golang.org/x/net/http2/hpack"
)
const (
prefaceTimeout = 10 * time.Second
firstSettingsTimeout = 2 * time.Second // should be in-flight with preface anyway
handlerChunkWriteSize = 4 << 10
defaultMaxStreams = 250 // TODO: make this 100 as the GFE seems to?
prefaceTimeout = 10 * time.Second
firstSettingsTimeout = 2 * time.Second // should be in-flight with preface anyway
handlerChunkWriteSize = 4 << 10
defaultMaxStreams = 250 // TODO: make this 100 as the GFE seems to?
maxQueuedControlFrames = 10000
)
var (
@@ -161,6 +163,15 @@ func (s *Server) maxConcurrentStreams() uint32 {
return defaultMaxStreams
}
// maxQueuedControlFrames is the maximum number of control frames like
// SETTINGS, PING and RST_STREAM that will be queued for writing before
// the connection is closed to prevent memory exhaustion attacks.
func (s *Server) maxQueuedControlFrames() int {
// TODO: if anybody asks, add a Server field, and remember to define the
// behavior of negative values.
return maxQueuedControlFrames
}
type serverInternalState struct {
mu sync.Mutex
activeConns map[*serverConn]struct{}
@@ -220,12 +231,15 @@ func ConfigureServer(s *http.Server, conf *Server) error {
} else if s.TLSConfig.CipherSuites != nil {
// If they already provided a CipherSuite list, return
// an error if it has a bad order or is missing
// ECDHE_RSA_WITH_AES_128_GCM_SHA256.
const requiredCipher = tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
// ECDHE_RSA_WITH_AES_128_GCM_SHA256 or ECDHE_ECDSA_WITH_AES_128_GCM_SHA256.
haveRequired := false
sawBad := false
for i, cs := range s.TLSConfig.CipherSuites {
if cs == requiredCipher {
switch cs {
case tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
// Alternative MTI cipher to not discourage ECDSA-only servers.
// See http://golang.org/cl/30721 for further information.
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256:
haveRequired = true
}
if isBadCipher(cs) {
@@ -235,7 +249,7 @@ func ConfigureServer(s *http.Server, conf *Server) error {
}
}
if !haveRequired {
return fmt.Errorf("http2: TLSConfig.CipherSuites is missing HTTP/2-required TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256")
return fmt.Errorf("http2: TLSConfig.CipherSuites is missing an HTTP/2-required AES_128_GCM_SHA256 cipher.")
}
}
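For callers, the practical consequence of this hunk is that a hand-rolled CipherSuites list now passes validation with either MTI suite present. A hedged usage sketch (the address and certificate paths are placeholders):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// Either of the two AES-128-GCM suites satisfies the
			// check in ConfigureServer; listing both keeps RSA and
			// ECDSA certificates working.
			CipherSuites: []uint16{
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			},
		},
	}
	if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```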
@@ -403,7 +417,7 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
// addresses during development.
//
// TODO: optionally enforce? Or enforce at the time we receive
// a new request, and verify the the ServerName matches the :authority?
// a new request, and verify the ServerName matches the :authority?
// But that precludes proxy situations, perhaps.
//
// So for now, do nothing here again.
@@ -466,6 +480,7 @@ type serverConn struct {
sawFirstSettings bool // got the initial SETTINGS frame after the preface
needToSendSettingsAck bool
unackedSettings int // how many SETTINGS have we sent without ACKs?
queuedControlFrames int // control frames in the writeSched queue
clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit)
advMaxStreams uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised to the client
curClientStreams uint32 // number of open streams initiated by the client
@@ -649,7 +664,7 @@ func (sc *serverConn) condlogf(err error, format string, args ...interface{}) {
if err == nil {
return
}
if err == io.EOF || err == io.ErrUnexpectedEOF || isClosedConnError(err) {
if err == io.EOF || err == io.ErrUnexpectedEOF || isClosedConnError(err) || err == errPrefaceTimeout {
// Boring, expected errors.
sc.vlogf(format, args...)
} else {
@@ -853,9 +868,22 @@ func (sc *serverConn) serve() {
}
}
if sc.inGoAway && sc.curOpenStreams() == 0 && !sc.needToSendGoAway && !sc.writingFrame {
// If the peer is causing us to generate a lot of control frames,
// but not reading them from us, assume they are trying to make us
// run out of memory.
if sc.queuedControlFrames > sc.srv.maxQueuedControlFrames() {
sc.vlogf("http2: too many control frames in send queue, closing connection")
return
}
// Start the shutdown timer after sending a GOAWAY. When sending GOAWAY
// with no error code (graceful shutdown), don't start the timer until
// all open streams have been completed.
sentGoAway := sc.inGoAway && !sc.needToSendGoAway && !sc.writingFrame
gracefulShutdownComplete := sc.goAwayCode == ErrCodeNo && sc.curOpenStreams() == 0
if sentGoAway && sc.shutdownTimer == nil && (sc.goAwayCode != ErrCodeNo || gracefulShutdownComplete) {
sc.shutDownIn(goAwayTimeout)
}
}
}
@@ -889,8 +917,11 @@ func (sc *serverConn) sendServeMsg(msg interface{}) {
}
}
// readPreface reads the ClientPreface greeting from the peer
// or returns an error on timeout or an invalid greeting.
var errPrefaceTimeout = errors.New("timeout waiting for client preface")
// readPreface reads the ClientPreface greeting from the peer or
// returns errPrefaceTimeout on timeout, or an error if the greeting
// is invalid.
func (sc *serverConn) readPreface() error {
errc := make(chan error, 1)
go func() {
@@ -908,7 +939,7 @@ func (sc *serverConn) readPreface() error {
defer timer.Stop()
select {
case <-timer.C:
return errors.New("timeout waiting for client preface")
return errPrefaceTimeout
case err := <-errc:
if err == nil {
if VerboseLogs {
@@ -1044,6 +1075,14 @@ func (sc *serverConn) writeFrame(wr FrameWriteRequest) {
}
if !ignoreWrite {
if wr.isControl() {
sc.queuedControlFrames++
// For extra safety, detect wraparounds, which should not happen,
// and pull the plug.
if sc.queuedControlFrames < 0 {
sc.conn.Close()
}
}
sc.writeSched.Push(wr)
}
sc.scheduleFrameWrite()
@@ -1161,10 +1200,8 @@ func (sc *serverConn) wroteFrame(res frameWriteResult) {
// If a frame is already being written, nothing happens. This will be called again
// when the frame is done being written.
//
// If a frame isn't being written we need to send one, the best frame
// to send is selected, preferring first things that aren't
// stream-specific (e.g. ACKing settings), and then finding the
// highest priority stream.
// If a frame isn't being written and we need to send one, the best frame
// to send is selected by writeSched.
//
// If a frame isn't being written and there's nothing else to send, we
// flush the write buffer.
@@ -1192,6 +1229,9 @@ func (sc *serverConn) scheduleFrameWrite() {
}
if !sc.inGoAway || sc.goAwayCode == ErrCodeNo {
if wr, ok := sc.writeSched.Pop(); ok {
if wr.isControl() {
sc.queuedControlFrames--
}
sc.startFrameWrite(wr)
continue
}
@@ -1218,30 +1258,31 @@ func (sc *serverConn) startGracefulShutdown() {
sc.shutdownOnce.Do(func() { sc.sendServeMsg(gracefulShutdownMsg) })
}
// After sending GOAWAY, the connection will close after goAwayTimeout.
// If we close the connection immediately after sending GOAWAY, there may
// be unsent data in our kernel receive buffer, which will cause the kernel
// to send a TCP RST on close() instead of a FIN. This RST will abort the
// connection immediately, whether or not the client had received the GOAWAY.
//
// Ideally we should delay for at least 1 RTT + epsilon so the client has
// a chance to read the GOAWAY and stop sending messages. Measuring RTT
// is hard, so we approximate with 1 second. See golang.org/issue/18701.
//
// This is a var so it can be shorter in tests, where all requests use the
// loopback interface, making the expected RTT very small.
//
// TODO: configurable?
var goAwayTimeout = 1 * time.Second
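Outside the server's internals, the "write GOAWAY, then linger before Close" sequence the comment describes looks roughly like this; the helper name and the one-second delay are illustrative, not x/net API:

```go
package sketch

import (
	"net"
	"time"
)

// closeAfterGoAway sketches the shutdown sequence described above: write
// the GOAWAY, then arm a timer instead of closing immediately, so unread
// data in the kernel buffer doesn't turn the close into a TCP RST.
func closeAfterGoAway(conn net.Conn, writeGoAway func() error) {
	if writeGoAway() != nil {
		conn.Close() // the write failed; nothing left to be graceful about
		return
	}
	// ~1 RTT + epsilon; the real code uses the goAwayTimeout var so
	// tests can shrink it.
	time.AfterFunc(1*time.Second, func() { conn.Close() })
}
```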
func (sc *serverConn) startGracefulShutdownInternal() {
sc.goAwayIn(ErrCodeNo, 0)
sc.goAway(ErrCodeNo)
}
func (sc *serverConn) goAway(code ErrCode) {
sc.serveG.check()
var forceCloseIn time.Duration
if code != ErrCodeNo {
forceCloseIn = 250 * time.Millisecond
} else {
// TODO: configurable
forceCloseIn = 1 * time.Second
}
sc.goAwayIn(code, forceCloseIn)
}
func (sc *serverConn) goAwayIn(code ErrCode, forceCloseIn time.Duration) {
sc.serveG.check()
if sc.inGoAway {
return
}
if forceCloseIn != 0 {
sc.shutDownIn(forceCloseIn)
}
sc.inGoAway = true
sc.needToSendGoAway = true
sc.goAwayCode = code
@@ -1474,9 +1515,17 @@ func (sc *serverConn) processSettings(f *SettingsFrame) error {
}
return nil
}
if f.NumSettings() > 100 || f.HasDuplicates() {
// This isn't actually in the spec, but hang up on
// suspiciously large settings frames or those with
// duplicate entries.
return ConnectionError(ErrCodeProtocol)
}
if err := f.ForeachSetting(sc.processSetting); err != nil {
return err
}
// TODO: judging by RFC 7540, Section 6.5.3 each SETTINGS frame should be
// acknowledged individually, even if multiple are received before the ACK.
sc.needToSendSettingsAck = true
sc.scheduleFrameWrite()
return nil
@@ -1562,6 +1611,12 @@ func (sc *serverConn) processData(f *DataFrame) error {
// type PROTOCOL_ERROR."
return ConnectionError(ErrCodeProtocol)
}
// RFC 7540, sec 6.1: If a DATA frame is received whose stream is not in
// "open" or "half-closed (local)" state, the recipient MUST respond with a
// stream error (Section 5.4.2) of type STREAM_CLOSED.
if state == stateClosed {
return streamError(id, ErrCodeStreamClosed)
}
if st == nil || state != stateOpen || st.gotTrailerHeader || st.resetQueued {
// This includes sending a RST_STREAM if the stream is
// in stateHalfClosedLocal (which currently means that
@@ -1595,7 +1650,10 @@ func (sc *serverConn) processData(f *DataFrame) error {
// Sender sending more than they'd declared?
if st.declBodyBytes != -1 && st.bodyBytes+int64(len(data)) > st.declBodyBytes {
st.body.CloseWithError(fmt.Errorf("sender tried to send more than declared Content-Length of %d bytes", st.declBodyBytes))
return streamError(id, ErrCodeStreamClosed)
// RFC 7540, sec 8.1.2.6: A request or response is also malformed if the
// value of a content-length header field does not equal the sum of the
// DATA frame payload lengths that form the body.
return streamError(id, ErrCodeProtocol)
}
if f.Length > 0 {
// Check whether the client has flow control quota.
@@ -1705,6 +1763,13 @@ func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
// processing this frame.
return nil
}
// RFC 7540, sec 5.1: If an endpoint receives additional frames, other than
// WINDOW_UPDATE, PRIORITY, or RST_STREAM, for a stream that is in
// this state, it MUST respond with a stream error (Section 5.4.2) of
// type STREAM_CLOSED.
if st.state == stateHalfClosedRemote {
return streamError(id, ErrCodeStreamClosed)
}
return st.processTrailerHeaders(f)
}
@@ -1805,7 +1870,7 @@ func (st *stream) processTrailerHeaders(f *MetaHeadersFrame) error {
if st.trailer != nil {
for _, hf := range f.RegularFields() {
key := sc.canonicalHeader(hf.Name)
if !ValidTrailerHeader(key) {
if !httpguts.ValidTrailerHeader(key) {
// TODO: send more details to the peer somehow. But http2 has
// no way to send debug data at a stream level. Discuss with
// HTTP folk.
@@ -2272,8 +2337,8 @@ func (rws *responseWriterState) hasTrailers() bool { return len(rws.trailers) !=
// written in the trailers at the end of the response.
func (rws *responseWriterState) declareTrailer(k string) {
k = http.CanonicalHeaderKey(k)
if !ValidTrailerHeader(k) {
// Forbidden by RFC 2616 14.40.
if !httpguts.ValidTrailerHeader(k) {
// Forbidden by RFC 7230, section 4.1.2.
rws.conn.logf("ignoring invalid trailer %q", k)
return
}
@@ -2310,7 +2375,7 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
clen = strconv.Itoa(len(p))
}
_, hasContentType := rws.snapHeader["Content-Type"]
if !hasContentType && bodyAllowedForStatus(rws.status) {
if !hasContentType && bodyAllowedForStatus(rws.status) && len(p) > 0 {
ctype = http.DetectContentType(p)
}
var date string
@@ -2323,6 +2388,19 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
foreachHeaderElement(v, rws.declareTrailer)
}
// "Connection" headers aren't allowed in HTTP/2 (RFC 7540, 8.1.2.2),
// but respect "Connection" == "close" to mean sending a GOAWAY and tearing
// down the TCP connection when idle, like we do for HTTP/1.
// TODO: remove more Connection-specific header fields here, in addition
// to "Connection".
if _, ok := rws.snapHeader["Connection"]; ok {
v := rws.snapHeader.Get("Connection")
delete(rws.snapHeader, "Connection")
if v == "close" {
rws.conn.startGracefulShutdown()
}
}
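Seen from handler code, this means the HTTP/1 idiom still works on HTTP/2: the header never reaches the wire, but the server schedules a graceful GOAWAY instead. A hedged sketch:

```go
package sketch

import "net/http"

// closingHandler asks for connection teardown the HTTP/1 way; per the
// hunk above, the http2 server strips the "Connection" header and starts
// a graceful shutdown when its value is "close".
func closingHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Connection", "close") // never sent on the wire in h2
	w.Write([]byte("goodbye\n"))
}
```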
endStream := (rws.handlerDone && !rws.hasTrailers() && len(p) == 0) || isHeadResp
err = rws.conn.writeHeaders(rws.stream, &writeResHeaders{
streamID: rws.stream.id,
@@ -2394,7 +2472,7 @@ const TrailerPrefix = "Trailer:"
// after the header has already been flushed. Because the Go
// ResponseWriter interface has no way to set Trailers (only the
// Header), and because we didn't want to expand the ResponseWriter
// interface, and because nobody used trailers, and because RFC 2616
// interface, and because nobody used trailers, and because RFC 7230
// says you SHOULD (but not must) predeclare any trailers in the
// header, the official ResponseWriter rules said trailers in Go must
// be predeclared, and then we reuse the same ResponseWriter.Header()
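Both trailer mechanisms the comment describes can be exercised from an ordinary handler. A hedged sketch (the trailer field names are made up):

```go
package sketch

import (
	"net/http"

	"golang.org/x/net/http2"
)

// trailerHandler shows the two routes to response trailers: predeclare
// the name in the Trailer header before writing, or use the
// TrailerPrefix escape hatch for names decided only after the headers
// have been flushed.
func trailerHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Trailer", "Checksum") // predeclared trailer
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("body"))
	w.Header().Set("Checksum", "abc123")                  // filled in after the body
	w.Header().Set(http2.TrailerPrefix+"Late-Field", "x") // undeclared trailer
}
```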
@@ -2478,6 +2556,24 @@ func (w *responseWriter) Header() http.Header {
return rws.handlerHeader
}
// checkWriteHeaderCode is a copy of net/http's checkWriteHeaderCode.
func checkWriteHeaderCode(code int) {
// Issue 22880: require valid WriteHeader status codes.
// For now we only enforce that it's three digits.
// In the future we might block things over 599 (600 and above aren't defined
// at http://httpwg.org/specs/rfc7231.html#status.codes)
// and we might block under 200 (once we have more mature 1xx support).
// But for now any three digits.
//
// We used to send "HTTP/1.1 000 0" on the wire in responses but there's
// no equivalent bogus thing we can realistically send in HTTP/2,
// so we'll consistently panic instead and help people find their bugs
// early. (We can't return an error from WriteHeader even if we wanted to.)
if code < 100 || code > 999 {
panic(fmt.Sprintf("invalid WriteHeader code %v", code))
}
}
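The effect on handler code is a panic rather than a bogus status on the wire; a minimal reproduction:

```go
package sketch

import "net/http"

// badHandler panics on its first request: 42 is not a three-digit
// status code, so checkWriteHeaderCode rejects it with
// `invalid WriteHeader code 42`.
func badHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(42)
}
```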
func (w *responseWriter) WriteHeader(code int) {
rws := w.rws
if rws == nil {
@@ -2488,6 +2584,7 @@ func (w *responseWriter) WriteHeader(code int) {
func (rws *responseWriterState) writeHeader(code int) {
if !rws.wroteHeader {
checkWriteHeaderCode(code)
rws.wroteHeader = true
rws.status = code
if len(rws.handlerHeader) > 0 {
@@ -2759,7 +2856,7 @@ func (sc *serverConn) startPush(msg *startPushRequest) {
}
// foreachHeaderElement splits v according to the "#rule" construction
// in RFC 2616 section 2.1 and calls fn for each non-empty element.
// in RFC 7230 section 7 and calls fn for each non-empty element.
func foreachHeaderElement(v string, fn func(string)) {
v = textproto.TrimString(v)
if v == "" {
@@ -2807,41 +2904,6 @@ func new400Handler(err error) http.HandlerFunc {
}
}
// ValidTrailerHeader reports whether name is a valid header field name to appear
// in trailers.
// See: http://tools.ietf.org/html/rfc7230#section-4.1.2
func ValidTrailerHeader(name string) bool {
name = http.CanonicalHeaderKey(name)
if strings.HasPrefix(name, "If-") || badTrailer[name] {
return false
}
return true
}
var badTrailer = map[string]bool{
"Authorization": true,
"Cache-Control": true,
"Connection": true,
"Content-Encoding": true,
"Content-Length": true,
"Content-Range": true,
"Content-Type": true,
"Expect": true,
"Host": true,
"Keep-Alive": true,
"Max-Forwards": true,
"Pragma": true,
"Proxy-Authenticate": true,
"Proxy-Authorization": true,
"Proxy-Connection": true,
"Range": true,
"Realm": true,
"Te": true,
"Trailer": true,
"Transfer-Encoding": true,
"Www-Authenticate": true,
}
// h1ServerKeepAlivesDisabled reports whether hs has its keep-alives
// disabled. See comments on h1ServerShutdownChan above for why
// the code is written this way.


@@ -21,15 +21,16 @@ import (
mathrand "math/rand"
"net"
"net/http"
"net/textproto"
"sort"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/net/http/httpguts"
"golang.org/x/net/http2/hpack"
"golang.org/x/net/idna"
"golang.org/x/net/lex/httplex"
)
const (
@@ -87,7 +88,7 @@ type Transport struct {
// MaxHeaderListSize is the http2 SETTINGS_MAX_HEADER_LIST_SIZE to
// send in the initial settings frame. It is how many bytes
// of response headers are allow. Unlike the http2 spec, zero here
// of response headers are allowed. Unlike the http2 spec, zero here
// means to use a default limit (currently 10MB). If you actually
// want to advertise an unlimited value to the peer, Transport
// interprets the highest possible value here (0xffffffff or 1<<32-1)
@@ -159,6 +160,7 @@ type ClientConn struct {
cond *sync.Cond // hold mu; broadcast on flow/closed changes
flow flow // our conn-level flow control quota (cs.flow is per stream)
inflow flow // peer's conn-level flow control
closing bool
closed bool
wantSettingsAck bool // we sent a SETTINGS frame and haven't heard back
goAway *GoAwayFrame // if non-nil, the GoAwayFrame we received
@@ -172,9 +174,10 @@ type ClientConn struct {
fr *Framer
lastActive time.Time
// Settings from peer: (also guarded by mu)
maxFrameSize uint32
maxConcurrentStreams uint32
initialWindowSize uint32
maxFrameSize uint32
maxConcurrentStreams uint32
peerMaxHeaderListSize uint64
initialWindowSize uint32
hbuf bytes.Buffer // HPACK encoder writes into this
henc *hpack.Encoder
@@ -210,9 +213,10 @@ type clientStream struct {
done chan struct{} // closed when stream remove from cc.streams map; close calls guarded by cc.mu
// owned by clientConnReadLoop:
firstByte bool // got the first response byte
pastHeaders bool // got first MetaHeadersFrame (actual headers)
pastTrailers bool // got optional second MetaHeadersFrame (trailers)
firstByte bool // got the first response byte
pastHeaders bool // got first MetaHeadersFrame (actual headers)
pastTrailers bool // got optional second MetaHeadersFrame (trailers)
num1xx uint8 // number of 1xx responses seen
trailer http.Header // accumulated trailers
resTrailer *http.Header // client's Response.Trailer
@@ -236,6 +240,17 @@ func awaitRequestCancel(req *http.Request, done <-chan struct{}) error {
}
}
var got1xxFuncForTests func(int, textproto.MIMEHeader) error
// get1xxTraceFunc returns the value of request's httptrace.ClientTrace.Got1xxResponse func,
// if any. It returns nil if not set or if the Go version is too old.
func (cs *clientStream) get1xxTraceFunc() func(int, textproto.MIMEHeader) error {
if fn := got1xxFuncForTests; fn != nil {
return fn
}
return traceGot1xxResponseFunc(cs.trace)
}
// awaitRequestCancel waits for the user to cancel a request, its context to
// expire, or for the request to be done (any way it might be removed from the
// cc.streams map: peer reset, successful completion, TCP connection breakage,
@@ -273,6 +288,13 @@ func (cs *clientStream) checkResetOrDone() error {
}
}
func (cs *clientStream) getStartedWrite() bool {
cc := cs.cc
cc.mu.Lock()
defer cc.mu.Unlock()
return cs.startedWrite
}
func (cs *clientStream) abortRequestBodyWrite(err error) {
if err == nil {
panic("nil error")
@@ -298,7 +320,26 @@ func (sew stickyErrWriter) Write(p []byte) (n int, err error) {
return
}
var ErrNoCachedConn = errors.New("http2: no cached connection was available")
// noCachedConnError is the concrete type of ErrNoCachedConn, which
// needs to be detected by net/http regardless of whether it's its
// bundled version (in h2_bundle.go with a rewritten type name) or
// from a user's x/net/http2. As such, it has a unique method name
// (IsHTTP2NoCachedConnError) that net/http sniffs for via func
// isNoCachedConnError.
type noCachedConnError struct{}
func (noCachedConnError) IsHTTP2NoCachedConnError() {}
func (noCachedConnError) Error() string { return "http2: no cached connection was available" }
// isNoCachedConnError reports whether err is of type noCachedConnError
// or its equivalent renamed type in net/http2's h2_bundle.go. Both types
// may coexist in the same running program.
func isNoCachedConnError(err error) bool {
_, ok := err.(interface{ IsHTTP2NoCachedConnError() })
return ok
}
var ErrNoCachedConn error = noCachedConnError{}
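The trick here, matching an error by a uniquely named method rather than a concrete type, works for any pair of packages that may be bundled under different names. A reduced sketch with illustrative names:

```go
package sketch

// retryableError stands in for noCachedConnError: callers never mention
// the concrete type, only the marker method.
type retryableError struct{ msg string }

func (retryableError) IsRetryable()    {}
func (e retryableError) Error() string { return e.msg }

// isRetryable mirrors isNoCachedConnError: a type assertion against an
// anonymous interface matches any type carrying the marker method, even
// a renamed copy from a bundled package.
func isRetryable(err error) bool {
	_, ok := err.(interface{ IsRetryable() })
	return ok
}
```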
// RoundTripOpt are options for the Transport.RoundTripOpt method.
type RoundTripOpt struct {
@@ -348,14 +389,9 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res
return nil, err
}
traceGotConn(req, cc)
res, err := cc.RoundTrip(req)
res, gotErrAfterReqBodyWrite, err := cc.roundTrip(req)
if err != nil && retry <= 6 {
afterBodyWrite := false
if e, ok := err.(afterReqBodyWriteError); ok {
err = e
afterBodyWrite = true
}
if req, err = shouldRetryRequest(req, err, afterBodyWrite); err == nil {
if req, err = shouldRetryRequest(req, err, gotErrAfterReqBodyWrite); err == nil {
// After the first retry, do exponential backoff with 10% jitter.
if retry == 0 {
continue
@@ -393,16 +429,6 @@ var (
errClientConnGotGoAway = errors.New("http2: Transport received Server's graceful shutdown GOAWAY")
)
// afterReqBodyWriteError is a wrapper around errors returned by ClientConn.RoundTrip.
// It is used to signal that err happened after part of Request.Body was sent to the server.
type afterReqBodyWriteError struct {
err error
}
func (e afterReqBodyWriteError) Error() string {
return e.err.Error() + "; some request body already written"
}
// shouldRetryRequest is called by RoundTrip when a request fails to get
// response headers. It is always called with a non-nil error.
// It returns either a request to retry (either the same request, or a
@@ -411,27 +437,36 @@ func shouldRetryRequest(req *http.Request, err error, afterBodyWrite bool) (*htt
if !canRetryError(err) {
return nil, err
}
if !afterBodyWrite {
return req, nil
}
// If the Body is nil (or http.NoBody), it's safe to reuse
// this request and its Body.
if req.Body == nil || reqBodyIsNoBody(req.Body) {
return req, nil
}
// Otherwise we depend on the Request having its GetBody
// func defined.
// If the request body can be reset back to its original
// state via the optional req.GetBody, do that.
getBody := reqGetBody(req) // Go 1.8: getBody = req.GetBody
if getBody == nil {
return nil, fmt.Errorf("http2: Transport: cannot retry err [%v] after Request.Body was written; define Request.GetBody to avoid this error", err)
if getBody != nil {
// TODO: consider a req.Body.Close here? or audit that all caller paths do?
body, err := getBody()
if err != nil {
return nil, err
}
newReq := *req
newReq.Body = body
return &newReq, nil
}
body, err := getBody()
if err != nil {
return nil, err
// The Request.Body can't reset back to the beginning, but we
// don't seem to have started to read from it yet, so reuse
// the request directly. The "afterBodyWrite" means the
// bodyWrite process has started, which becomes true before
// the first Read.
if !afterBodyWrite {
return req, nil
}
newReq := *req
newReq.Body = body
return &newReq, nil
return nil, fmt.Errorf("http2: Transport: cannot retry err [%v] after Request.Body was written; define Request.GetBody to avoid this error", err)
}
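The error message's advice ("define Request.GetBody") translates to caller code like the sketch below. Note that http.NewRequest already sets GetBody for *bytes.Reader, *bytes.Buffer, and *strings.Reader bodies, so the manual form is only needed for custom readers:

```go
package sketch

import (
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
)

// newReplayableRequest builds a request the transport can safely retry
// after part of the body was written: GetBody returns a fresh reader
// positioned at the start of the payload.
func newReplayableRequest(url string, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.GetBody = func() (io.ReadCloser, error) {
		return ioutil.NopCloser(bytes.NewReader(payload)), nil
	}
	return req, nil
}
```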
func canRetryError(err error) bool {
@@ -519,17 +554,18 @@ func (t *Transport) NewClientConn(c net.Conn) (*ClientConn, error) {
func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, error) {
cc := &ClientConn{
t: t,
tconn: c,
readerDone: make(chan struct{}),
nextStreamID: 1,
maxFrameSize: 16 << 10, // spec default
initialWindowSize: 65535, // spec default
maxConcurrentStreams: 1000, // "infinite", per spec. 1000 seems good enough.
streams: make(map[uint32]*clientStream),
singleUse: singleUse,
wantSettingsAck: true,
pings: make(map[[8]byte]chan struct{}),
t: t,
tconn: c,
readerDone: make(chan struct{}),
nextStreamID: 1,
maxFrameSize: 16 << 10, // spec default
initialWindowSize: 65535, // spec default
maxConcurrentStreams: 1000, // "infinite", per spec. 1000 seems good enough.
peerMaxHeaderListSize: 0xffffffffffffffff, // "infinite", per spec. Use 2^64-1 instead.
streams: make(map[uint32]*clientStream),
singleUse: singleUse,
wantSettingsAck: true,
pings: make(map[[8]byte]chan struct{}),
}
if d := t.idleConnTimeout(); d != 0 {
cc.idleTimeout = d
@@ -554,6 +590,10 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
// henc in response to SETTINGS frames?
cc.henc = hpack.NewEncoder(&cc.hbuf)
if t.AllowHTTP {
cc.nextStreamID = 3
}
if cs, ok := c.(connectionStater); ok {
state := cs.ConnectionState()
cc.tlsState = &state
@@ -613,12 +653,32 @@ func (cc *ClientConn) CanTakeNewRequest() bool {
return cc.canTakeNewRequestLocked()
}
func (cc *ClientConn) canTakeNewRequestLocked() bool {
// clientConnIdleState describes the suitability of a client
// connection to initiate a new RoundTrip request.
type clientConnIdleState struct {
canTakeNewRequest bool
freshConn bool // whether it's unused by any previous request
}
func (cc *ClientConn) idleState() clientConnIdleState {
cc.mu.Lock()
defer cc.mu.Unlock()
return cc.idleStateLocked()
}
func (cc *ClientConn) idleStateLocked() (st clientConnIdleState) {
if cc.singleUse && cc.nextStreamID > 1 {
return false
return
}
return cc.goAway == nil && !cc.closed &&
st.canTakeNewRequest = cc.goAway == nil && !cc.closed && !cc.closing &&
int64(cc.nextStreamID)+int64(cc.pendingRequests) < math.MaxInt32
st.freshConn = cc.nextStreamID == 1 && st.canTakeNewRequest
return
}
func (cc *ClientConn) canTakeNewRequestLocked() bool {
st := cc.idleStateLocked()
return st.canTakeNewRequest
}
// onIdleTimeout is called from a time.AfterFunc goroutine. It will
@@ -648,6 +708,88 @@ func (cc *ClientConn) closeIfIdle() {
cc.tconn.Close()
}
var shutdownEnterWaitStateHook = func() {}
// Shutdown gracefully closes the client connection, waiting for running streams to complete.
// Public implementation is in go17.go and not_go17.go
func (cc *ClientConn) shutdown(ctx contextContext) error {
if err := cc.sendGoAway(); err != nil {
return err
}
// Wait for all in-flight streams to complete or connection to close
done := make(chan error, 1)
cancelled := false // guarded by cc.mu
go func() {
cc.mu.Lock()
defer cc.mu.Unlock()
for {
if len(cc.streams) == 0 || cc.closed {
cc.closed = true
done <- cc.tconn.Close()
break
}
if cancelled {
break
}
cc.cond.Wait()
}
}()
shutdownEnterWaitStateHook()
select {
case err := <-done:
return err
case <-ctx.Done():
cc.mu.Lock()
// Free the goroutine above
cancelled = true
cc.cond.Broadcast()
cc.mu.Unlock()
return ctx.Err()
}
}
func (cc *ClientConn) sendGoAway() error {
cc.mu.Lock()
defer cc.mu.Unlock()
cc.wmu.Lock()
defer cc.wmu.Unlock()
if cc.closing {
// GOAWAY sent already
return nil
}
// Send a graceful shutdown frame to server
maxStreamID := cc.nextStreamID
if err := cc.fr.WriteGoAway(maxStreamID, ErrCodeNo, nil); err != nil {
return err
}
if err := cc.bw.Flush(); err != nil {
return err
}
// Prevent new requests
cc.closing = true
return nil
}
// Close closes the client connection immediately.
//
// In-flight requests are interrupted. For a graceful shutdown, use Shutdown instead.
func (cc *ClientConn) Close() error {
cc.mu.Lock()
defer cc.cond.Broadcast()
defer cc.mu.Unlock()
err := errors.New("http2: client connection force closed via ClientConn.Close")
for id, cs := range cc.streams {
select {
case cs.resc <- resAndError{err: err}:
default:
}
cs.bufPipe.CloseWithError(err)
delete(cc.streams, id)
}
cc.closed = true
return cc.tconn.Close()
}
const maxAllocFrameSize = 512 << 10
// frameBuffer returns a scratch buffer suitable for writing DATA frames.
@@ -730,7 +872,7 @@ func checkConnHeaders(req *http.Request) error {
if vv := req.Header["Transfer-Encoding"]; len(vv) > 0 && (len(vv) > 1 || vv[0] != "" && vv[0] != "chunked") {
return fmt.Errorf("http2: invalid Transfer-Encoding request header: %q", vv)
}
if vv := req.Header["Connection"]; len(vv) > 0 && (len(vv) > 1 || vv[0] != "" && vv[0] != "close" && vv[0] != "keep-alive") {
if vv := req.Header["Connection"]; len(vv) > 0 && (len(vv) > 1 || vv[0] != "" && !strings.EqualFold(vv[0], "close") && !strings.EqualFold(vv[0], "keep-alive")) {
return fmt.Errorf("http2: invalid Connection request header: %q", vv)
}
return nil
@@ -750,8 +892,13 @@ func actualContentLength(req *http.Request) int64 {
}
func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
resp, _, err := cc.roundTrip(req)
return resp, err
}
func (cc *ClientConn) roundTrip(req *http.Request) (res *http.Response, gotErrAfterReqBodyWrite bool, err error) {
if err := checkConnHeaders(req); err != nil {
return nil, err
return nil, false, err
}
if cc.idleTimer != nil {
cc.idleTimer.Stop()
@@ -759,14 +906,14 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
trailers, err := commaSeparatedTrailers(req)
if err != nil {
return nil, err
return nil, false, err
}
hasTrailers := trailers != ""
cc.mu.Lock()
if err := cc.awaitOpenSlotForRequest(req); err != nil {
cc.mu.Unlock()
return nil, err
return nil, false, err
}
body := req.Body
@@ -800,7 +947,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
hdrs, err := cc.encodeHeaders(req, requestedGzip, trailers, contentLen)
if err != nil {
cc.mu.Unlock()
return nil, err
return nil, false, err
}
cs := cc.newStream()
@@ -812,7 +959,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
cc.wmu.Lock()
endStream := !hasBody && !hasTrailers
werr := cc.writeHeaders(cs.ID, endStream, hdrs)
werr := cc.writeHeaders(cs.ID, endStream, int(cc.maxFrameSize), hdrs)
cc.wmu.Unlock()
traceWroteHeaders(cs.trace)
cc.mu.Unlock()
@@ -826,7 +973,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
// Don't bother sending a RST_STREAM (our write already failed;
// no need to keep writing)
traceWroteRequest(cs.trace, werr)
return nil, werr
return nil, false, werr
}
var respHeaderTimer <-chan time.Time
@@ -845,7 +992,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
bodyWritten := false
ctx := reqContext(req)
handleReadLoopResponse := func(re resAndError) (*http.Response, error) {
handleReadLoopResponse := func(re resAndError) (*http.Response, bool, error) {
res := re.res
if re.err != nil || res.StatusCode > 299 {
// On error or status code 3xx, 4xx, 5xx, etc abort any
@@ -861,18 +1008,12 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
cs.abortRequestBodyWrite(errStopReqBodyWrite)
}
if re.err != nil {
cc.mu.Lock()
afterBodyWrite := cs.startedWrite
cc.mu.Unlock()
cc.forgetStreamID(cs.ID)
if afterBodyWrite {
return nil, afterReqBodyWriteError{re.err}
}
return nil, re.err
return nil, cs.getStartedWrite(), re.err
}
res.Request = req
res.TLS = cc.tlsState
return res, nil
return res, false, nil
}
for {
@@ -887,7 +1028,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
cc.forgetStreamID(cs.ID)
return nil, errTimeout
return nil, cs.getStartedWrite(), errTimeout
case <-ctx.Done():
if !hasBody || bodyWritten {
cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
@@ -896,7 +1037,7 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
cc.forgetStreamID(cs.ID)
return nil, ctx.Err()
return nil, cs.getStartedWrite(), ctx.Err()
case <-req.Cancel:
if !hasBody || bodyWritten {
cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
@@ -905,12 +1046,12 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
cc.forgetStreamID(cs.ID)
return nil, errRequestCanceled
return nil, cs.getStartedWrite(), errRequestCanceled
case <-cs.peerReset:
// processResetStream already removed the
// stream from the streams map; no need for
// forgetStreamID.
return nil, cs.resetErr
return nil, cs.getStartedWrite(), cs.resetErr
case err := <-bodyWriter.resc:
// Prefer the read loop's response, if available. Issue 16102.
select {
@@ -919,7 +1060,8 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
default:
}
if err != nil {
return nil, err
cc.forgetStreamID(cs.ID)
return nil, cs.getStartedWrite(), err
}
bodyWritten = true
if d := cc.responseHeaderTimeout(); d != 0 {
@@ -939,6 +1081,9 @@ func (cc *ClientConn) awaitOpenSlotForRequest(req *http.Request) error {
for {
cc.lastActive = time.Now()
if cc.closed || !cc.canTakeNewRequestLocked() {
if waitingForConn != nil {
close(waitingForConn)
}
return errClientConnUnusable
}
if int64(len(cc.streams))+1 <= int64(cc.maxConcurrentStreams) {
@@ -971,13 +1116,12 @@ func (cc *ClientConn) awaitOpenSlotForRequest(req *http.Request) error {
}
// requires cc.wmu be held
func (cc *ClientConn) writeHeaders(streamID uint32, endStream bool, hdrs []byte) error {
func (cc *ClientConn) writeHeaders(streamID uint32, endStream bool, maxFrameSize int, hdrs []byte) error {
first := true // first frame written (HEADERS is first, then CONTINUATION)
frameSize := int(cc.maxFrameSize)
for len(hdrs) > 0 && cc.werr == nil {
chunk := hdrs
if len(chunk) > frameSize {
chunk = chunk[:frameSize]
if len(chunk) > maxFrameSize {
chunk = chunk[:maxFrameSize]
}
hdrs = hdrs[len(chunk):]
endHeaders := len(hdrs) == 0
@@ -1038,6 +1182,7 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (
sawEOF = true
err = nil
} else if err != nil {
cc.writeStreamReset(cs.ID, ErrCodeCancel, err)
return err
}
@@ -1085,17 +1230,26 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (
var trls []byte
if hasTrailers {
cc.mu.Lock()
defer cc.mu.Unlock()
trls = cc.encodeTrailers(req)
trls, err = cc.encodeTrailers(req)
cc.mu.Unlock()
if err != nil {
cc.writeStreamReset(cs.ID, ErrCodeInternal, err)
cc.forgetStreamID(cs.ID)
return err
}
}
cc.mu.Lock()
maxFrameSize := int(cc.maxFrameSize)
cc.mu.Unlock()
cc.wmu.Lock()
defer cc.wmu.Unlock()
// Two ways to send END_STREAM: either with trailers, or
// with an empty DATA frame.
if len(trls) > 0 {
err = cc.writeHeaders(cs.ID, true, trls)
err = cc.writeHeaders(cs.ID, true, maxFrameSize, trls)
} else {
err = cc.fr.WriteData(cs.ID, true, nil)
}
@@ -1154,7 +1308,7 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail
if host == "" {
host = req.URL.Host
}
host, err := httplex.PunycodeHostPort(host)
host, err := httpguts.PunycodeHostPort(host)
if err != nil {
return nil, err
}
@@ -1179,72 +1333,103 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail
// potentially pollute our hpack state. (We want to be able to
// continue to reuse the hpack encoder for future requests)
for k, vv := range req.Header {
if !httplex.ValidHeaderFieldName(k) {
if !httpguts.ValidHeaderFieldName(k) {
return nil, fmt.Errorf("invalid HTTP header name %q", k)
}
for _, v := range vv {
if !httplex.ValidHeaderFieldValue(v) {
if !httpguts.ValidHeaderFieldValue(v) {
return nil, fmt.Errorf("invalid HTTP header value %q for header %q", v, k)
}
}
}
// 8.1.2.3 Request Pseudo-Header Fields
// The :path pseudo-header field includes the path and query parts of the
// target URI (the path-absolute production and optionally a '?' character
// followed by the query production (see Sections 3.3 and 3.4 of
// [RFC3986]).
cc.writeHeader(":authority", host)
cc.writeHeader(":method", req.Method)
if req.Method != "CONNECT" {
cc.writeHeader(":path", path)
cc.writeHeader(":scheme", req.URL.Scheme)
}
if trailers != "" {
cc.writeHeader("trailer", trailers)
enumerateHeaders := func(f func(name, value string)) {
// 8.1.2.3 Request Pseudo-Header Fields
// The :path pseudo-header field includes the path and query parts of the
// target URI (the path-absolute production and optionally a '?' character
// followed by the query production (see Sections 3.3 and 3.4 of
// [RFC3986]).
f(":authority", host)
f(":method", req.Method)
if req.Method != "CONNECT" {
f(":path", path)
f(":scheme", req.URL.Scheme)
}
if trailers != "" {
f("trailer", trailers)
}
var didUA bool
for k, vv := range req.Header {
if strings.EqualFold(k, "host") || strings.EqualFold(k, "content-length") {
// Host is :authority, already sent.
// Content-Length is automatic, set below.
continue
} else if strings.EqualFold(k, "connection") || strings.EqualFold(k, "proxy-connection") ||
strings.EqualFold(k, "transfer-encoding") || strings.EqualFold(k, "upgrade") ||
strings.EqualFold(k, "keep-alive") {
// Per 8.1.2.2 Connection-Specific Header
// Fields, don't send connection-specific
// fields. We have already checked if any
// are error-worthy so just ignore the rest.
continue
} else if strings.EqualFold(k, "user-agent") {
// Match Go's http1 behavior: at most one
// User-Agent. If set to nil or empty string,
// then omit it. Otherwise if not mentioned,
// include the default (below).
didUA = true
if len(vv) < 1 {
continue
}
vv = vv[:1]
if vv[0] == "" {
continue
}
}
for _, v := range vv {
f(k, v)
}
}
if shouldSendReqContentLength(req.Method, contentLength) {
f("content-length", strconv.FormatInt(contentLength, 10))
}
if addGzipHeader {
f("accept-encoding", "gzip")
}
if !didUA {
f("user-agent", defaultUserAgent)
}
}
var didUA bool
for k, vv := range req.Header {
lowKey := strings.ToLower(k)
switch lowKey {
case "host", "content-length":
// Host is :authority, already sent.
// Content-Length is automatic, set below.
continue
case "connection", "proxy-connection", "transfer-encoding", "upgrade", "keep-alive":
// Per 8.1.2.2 Connection-Specific Header
// Fields, don't send connection-specific
// fields. We have already checked if any
// are error-worthy so just ignore the rest.
continue
case "user-agent":
// Match Go's http1 behavior: at most one
// User-Agent. If set to nil or empty string,
// then omit it. Otherwise if not mentioned,
// include the default (below).
didUA = true
if len(vv) < 1 {
continue
}
vv = vv[:1]
if vv[0] == "" {
continue
}
// Do a first pass over the headers counting bytes to ensure
// we don't exceed cc.peerMaxHeaderListSize. This is done as a
// separate pass before encoding the headers to prevent
// modifying the hpack state.
hlSize := uint64(0)
enumerateHeaders(func(name, value string) {
hf := hpack.HeaderField{Name: name, Value: value}
hlSize += uint64(hf.Size())
})
if hlSize > cc.peerMaxHeaderListSize {
return nil, errRequestHeaderListSize
}
trace := requestTrace(req)
traceHeaders := traceHasWroteHeaderField(trace)
// Header list size is ok. Write the headers.
enumerateHeaders(func(name, value string) {
name = strings.ToLower(name)
cc.writeHeader(name, value)
if traceHeaders {
traceWroteHeaderField(trace, name, value)
}
for _, v := range vv {
cc.writeHeader(lowKey, v)
}
}
if shouldSendReqContentLength(req.Method, contentLength) {
cc.writeHeader("content-length", strconv.FormatInt(contentLength, 10))
}
if addGzipHeader {
cc.writeHeader("accept-encoding", "gzip")
}
if !didUA {
cc.writeHeader("user-agent", defaultUserAgent)
}
})
return cc.hbuf.Bytes(), nil
}
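The first pass relies only on hpack.HeaderField.Size (name length + value length + 32 octets, per RFC 7541 §4.1), so the size of a header list can be computed without touching the stateful encoder. A sketch of that measurement step:

```go
package sketch

import "golang.org/x/net/http2/hpack"

// headerListSize measures a header set the same way the two-pass check
// above does: sum hpack.HeaderField.Size for every name/value pair, so
// an oversized list can be rejected before any hpack dynamic-table state
// is modified. http.Header satisfies the map type used here.
func headerListSize(h map[string][]string) uint64 {
	var total uint64
	for name, values := range h {
		for _, v := range values {
			hf := hpack.HeaderField{Name: name, Value: v}
			total += uint64(hf.Size())
		}
	}
	return total
}
```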
@@ -1271,17 +1456,29 @@ func shouldSendReqContentLength(method string, contentLength int64) bool {
}
// requires cc.mu be held.
func (cc *ClientConn) encodeTrailers(req *http.Request) []byte {
func (cc *ClientConn) encodeTrailers(req *http.Request) ([]byte, error) {
cc.hbuf.Reset()
hlSize := uint64(0)
for k, vv := range req.Trailer {
// Transfer-Encoding, etc.. have already been filter at the
for _, v := range vv {
hf := hpack.HeaderField{Name: k, Value: v}
hlSize += uint64(hf.Size())
}
}
if hlSize > cc.peerMaxHeaderListSize {
return nil, errRequestHeaderListSize
}
for k, vv := range req.Trailer {
// Transfer-Encoding, etc.. have already been filtered at the
// start of RoundTrip
lowKey := strings.ToLower(k)
for _, v := range vv {
cc.writeHeader(lowKey, v)
}
}
return cc.hbuf.Bytes()
return cc.hbuf.Bytes(), nil
}
func (cc *ClientConn) writeHeader(name, value string) {
@@ -1339,17 +1536,12 @@ func (cc *ClientConn) streamByID(id uint32, andRemove bool) *clientStream {
// clientConnReadLoop is the state owned by the clientConn's frame-reading readLoop.
type clientConnReadLoop struct {
cc *ClientConn
activeRes map[uint32]*clientStream // keyed by streamID
closeWhenIdle bool
}
// readLoop runs in its own goroutine and reads and dispatches frames.
func (cc *ClientConn) readLoop() {
rl := &clientConnReadLoop{
cc: cc,
activeRes: make(map[uint32]*clientStream),
}
rl := &clientConnReadLoop{cc: cc}
defer rl.cleanup()
cc.readerErr = rl.run()
if ce, ok := cc.readerErr.(ConnectionError); ok {
@@ -1404,10 +1596,8 @@ func (rl *clientConnReadLoop) cleanup() {
} else if err == io.EOF {
err = io.ErrUnexpectedEOF
}
for _, cs := range rl.activeRes {
cs.bufPipe.CloseWithError(err)
}
for _, cs := range cc.streams {
cs.bufPipe.CloseWithError(err) // no-op if already closed
select {
case cs.resc <- resAndError{err: err}:
default:
@@ -1485,7 +1675,7 @@ func (rl *clientConnReadLoop) run() error {
}
return err
}
if rl.closeWhenIdle && gotReply && maybeIdle && len(rl.activeRes) == 0 {
if rl.closeWhenIdle && gotReply && maybeIdle {
cc.closeIfIdle()
}
}
@@ -1493,13 +1683,31 @@ func (rl *clientConnReadLoop) run() error {
func (rl *clientConnReadLoop) processHeaders(f *MetaHeadersFrame) error {
cc := rl.cc
cs := cc.streamByID(f.StreamID, f.StreamEnded())
cs := cc.streamByID(f.StreamID, false)
if cs == nil {
// We'd get here if we canceled a request while the
// server had its response still in flight. So if this
// was just something we canceled, ignore it.
return nil
}
if f.StreamEnded() {
// Issue 20521: If the stream has ended, streamByID() causes
// clientStream.done to be closed, which causes the request's bodyWriter
// to be closed with an errStreamClosed, which may be received by
// clientConn.RoundTrip before the result of processing these headers.
// Deferring stream closure allows the header processing to occur first.
// clientConn.RoundTrip may still receive the bodyWriter error first, but
// the fix for issue 16102 prioritises any response.
//
// Issue 22413: If there is no request body, we should close the
// stream before writing to cs.resc so that the stream is closed
// immediately once RoundTrip returns.
if cs.req.Body != nil {
defer cc.forgetStreamID(f.StreamID)
} else {
cc.forgetStreamID(f.StreamID)
}
}
if !cs.firstByte {
if cs.trace != nil {
// TODO(bradfitz): move first response byte earlier,
@@ -1523,6 +1731,7 @@ func (rl *clientConnReadLoop) processHeaders(f *MetaHeadersFrame) error {
}
// Any other error type is a stream error.
cs.cc.writeStreamReset(f.StreamID, ErrCodeProtocol, err)
cc.forgetStreamID(cs.ID)
cs.resc <- resAndError{err: err}
return nil // return nil from process* funcs to keep conn alive
}
@@ -1530,9 +1739,6 @@ func (rl *clientConnReadLoop) processHeaders(f *MetaHeadersFrame) error {
// (nil, nil) special case. See handleResponse docs.
return nil
}
if res.Body != noBody {
rl.activeRes[cs.ID] = cs
}
cs.resTrailer = &res.Trailer
cs.resc <- resAndError{res: res}
return nil
@@ -1543,8 +1749,7 @@ func (rl *clientConnReadLoop) processHeaders(f *MetaHeadersFrame) error {
// is the detail.
//
// As a special case, handleResponse may return (nil, nil) to skip the
// frame (currently only used for 100 expect continue). This special
// case is going away after Issue 13851 is fixed.
// frame (currently only used for 1xx responses).
func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFrame) (*http.Response, error) {
if f.Truncated {
return nil, errResponseHeaderListSize
@@ -1552,20 +1757,11 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
status := f.PseudoValue("status")
if status == "" {
return nil, errors.New("missing status pseudo header")
return nil, errors.New("malformed response from server: missing status pseudo header")
}
statusCode, err := strconv.Atoi(status)
if err != nil {
return nil, errors.New("malformed non-numeric status pseudo header")
}
if statusCode == 100 {
traceGot100Continue(cs.trace)
if cs.on100 != nil {
cs.on100() // forces any write delay timer to fire
}
cs.pastHeaders = false // do it all again
return nil, nil
return nil, errors.New("malformed response from server: malformed non-numeric status pseudo header")
}
header := make(http.Header)
@@ -1592,6 +1788,27 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
}
}
if statusCode >= 100 && statusCode <= 199 {
cs.num1xx++
const max1xxResponses = 5 // arbitrary bound on number of informational responses, same as net/http
if cs.num1xx > max1xxResponses {
return nil, errors.New("http2: too many 1xx informational responses")
}
if fn := cs.get1xxTraceFunc(); fn != nil {
if err := fn(statusCode, textproto.MIMEHeader(header)); err != nil {
return nil, err
}
}
if statusCode == 100 {
traceGot100Continue(cs.trace)
if cs.on100 != nil {
cs.on100() // forces any write delay timer to fire
}
}
cs.pastHeaders = false // do it all again
return nil, nil
}
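From the client side, these informational responses surface through httptrace on Go 1.11+. A hedged sketch of registering the hook that the code above invokes via get1xxTraceFunc:

```go
package sketch

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"net/textproto"
)

// traced attaches a ClientTrace whose Got1xxResponse callback fires for
// each non-terminal (1xx) response before the final response returns.
func traced(req *http.Request) *http.Request {
	trace := &httptrace.ClientTrace{
		Got1xxResponse: func(code int, header textproto.MIMEHeader) error {
			fmt.Println("informational response:", code)
			return nil // returning a non-nil error aborts the request
		},
	}
	return req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
}
```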
streamEnded := f.StreamEnded()
isHead := cs.req.Method == "HEAD"
if !streamEnded || isHead {
@@ -1789,7 +2006,23 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error {
}
return nil
}
if !cs.firstByte {
cc.logf("protocol error: received DATA before a HEADERS frame")
rl.endStreamError(cs, StreamError{
StreamID: f.StreamID,
Code: ErrCodeProtocol,
})
return nil
}
if f.Length > 0 {
if cs.req.Method == "HEAD" && len(data) > 0 {
cc.logf("protocol error: received DATA on a HEAD request")
rl.endStreamError(cs, StreamError{
StreamID: f.StreamID,
Code: ErrCodeProtocol,
})
return nil
}
// Check connection-level flow control.
cc.mu.Lock()
if cs.inflow.available() >= int32(f.Length) {
@@ -1851,11 +2084,10 @@ func (rl *clientConnReadLoop) endStreamError(cs *clientStream, err error) {
err = io.EOF
code = cs.copyTrailers
}
cs.bufPipe.closeWithErrorAndCode(err, code)
delete(rl.activeRes, cs.ID)
if isConnectionCloseRequest(cs.req) {
rl.closeWhenIdle = true
}
cs.bufPipe.closeWithErrorAndCode(err, code)
select {
case cs.resc <- resAndError{err: err}:
@@ -1903,6 +2135,8 @@ func (rl *clientConnReadLoop) processSettings(f *SettingsFrame) error {
cc.maxFrameSize = s.Val
case SettingMaxConcurrentStreams:
cc.maxConcurrentStreams = s.Val
case SettingMaxHeaderListSize:
cc.peerMaxHeaderListSize = uint64(s.Val)
case SettingInitialWindowSize:
// Values above the maximum flow-control
// window size of 2^31-1 MUST be treated as a
@@ -1980,7 +2214,6 @@ func (rl *clientConnReadLoop) processResetStream(f *RSTStreamFrame) error {
cs.bufPipe.CloseWithError(err)
cs.cc.cond.Broadcast() // wake up checkResetOrDone via clientStream.awaitFlowControl
}
delete(rl.activeRes, cs.ID)
return nil
}
@@ -2069,6 +2302,7 @@ func (cc *ClientConn) writeStreamReset(streamID uint32, code ErrCode, err error)
var (
errResponseHeaderListSize = errors.New("http2: response header list larger than advertised limit")
errRequestHeaderListSize = errors.New("http2: request header list larger than peer's advertised limit")
errPseudoTrailers = errors.New("http2: invalid pseudo header in trailers")
)
@@ -2162,7 +2396,7 @@ func (t *Transport) getBodyWriterState(cs *clientStream, body io.Reader) (s body
}
s.delay = t.expectContinueTimeout()
if s.delay == 0 ||
!httplex.HeaderValuesContainsToken(
!httpguts.HeaderValuesContainsToken(
cs.req.Header["Expect"],
"100-continue") {
return
@@ -2217,5 +2451,5 @@ func (s bodyWriterState) scheduleBodyWrite() {
// isConnectionCloseRequest reports whether req should use its own
// connection for a single request and then close the connection.
func isConnectionCloseRequest(req *http.Request) bool {
return req.Close || httplex.HeaderValuesContainsToken(req.Header["Connection"], "close")
return req.Close || httpguts.HeaderValuesContainsToken(req.Header["Connection"], "close")
}


@@ -10,10 +10,9 @@ import (
"log"
"net/http"
"net/url"
"time"
"golang.org/x/net/http/httpguts"
"golang.org/x/net/http2/hpack"
"golang.org/x/net/lex/httplex"
)
// writeFramer is implemented by any type that is used to write frames.
@@ -90,11 +89,7 @@ type writeGoAway struct {
func (p *writeGoAway) writeFrame(ctx writeContext) error {
err := ctx.Framer().WriteGoAway(p.maxStreamID, p.code, nil)
if p.code != 0 {
ctx.Flush() // ignore error: we're hanging up on them anyway
time.Sleep(50 * time.Millisecond)
ctx.CloseConn()
}
ctx.Flush() // ignore error: we're hanging up on them anyway
return err
}
@@ -355,7 +350,7 @@ func encodeHeaders(enc *hpack.Encoder, h http.Header, keys []string) {
}
isTE := k == "transfer-encoding"
for _, v := range vv {
if !httplex.ValidHeaderFieldValue(v) {
if !httpguts.ValidHeaderFieldValue(v) {
// TODO: return an error? golang.org/issue/14048
// For now just omit it.
continue


@@ -32,7 +32,7 @@ type WriteScheduler interface {
// Pop dequeues the next frame to write. Returns false if no frames can
// be written. Frames with a given wr.StreamID() are Pop'd in the same
// order they are Push'd.
// order they are Push'd. No frames should be discarded except by CloseStream.
Pop() (wr FrameWriteRequest, ok bool)
}
@@ -76,6 +76,12 @@ func (wr FrameWriteRequest) StreamID() uint32 {
return wr.stream.id
}
// isControl reports whether wr is a control frame for MaxQueuedControlFrames
// purposes. That includes non-stream frames and RST_STREAM frames.
func (wr FrameWriteRequest) isControl() bool {
return wr.stream == nil
}
// DataSize returns the number of flow control bytes that must be consumed
// to write this entire frame. This is 0 for non-DATA frames.
func (wr FrameWriteRequest) DataSize() int {

vendor/golang.org/x/net/idna/idna.go generated vendored

@@ -21,6 +21,7 @@ import (
"unicode/utf8"
"golang.org/x/text/secure/bidirule"
"golang.org/x/text/unicode/bidi"
"golang.org/x/text/unicode/norm"
)
@@ -68,7 +69,7 @@ func VerifyDNSLength(verify bool) Option {
}
// RemoveLeadingDots removes leading label separators. Leading runes that map to
// dots, such as U+3002, are removed as well.
// dots, such as U+3002 IDEOGRAPHIC FULL STOP, are removed as well.
//
// This is the behavior suggested by the UTS #46 and is adopted by some
// browsers.
@@ -92,7 +93,7 @@ func ValidateLabels(enable bool) Option {
}
}
// StrictDomainName limits the set of permissable ASCII characters to those
// StrictDomainName limits the set of permissible ASCII characters to those
// allowed in domain names as defined in RFC 1034 (A-Z, a-z, 0-9 and the
// hyphen). This is set by default for MapForLookup and ValidateForRegistration.
//
@@ -142,7 +143,6 @@ func MapForLookup() Option {
o.mapping = validateAndMap
StrictDomainName(true)(o)
ValidateLabels(true)(o)
RemoveLeadingDots(true)(o)
}
}
@@ -160,14 +160,14 @@ type options struct {
// mapping implements a validation and mapping step as defined in RFC 5895
// or UTS 46, tailored to, for example, domain registration or lookup.
mapping func(p *Profile, s string) (string, error)
mapping func(p *Profile, s string) (mapped string, isBidi bool, err error)
// bidirule, if specified, checks whether s conforms to the Bidi Rule
// defined in RFC 5893.
bidirule func(s string) bool
}
// A Profile defines the configuration of a IDNA mapper.
// A Profile defines the configuration of an IDNA mapper.
type Profile struct {
options
}
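As a usage reference for the stock Profiles configured below, the package's exported Lookup and Punycode profiles behave like this (a sketch; expected output shown in comments):

```go
package sketch

import (
	"fmt"

	"golang.org/x/net/idna"
)

// demo exercises two stock Profiles: Lookup applies the full
// mapping/validation pipeline for DNS lookup, while Punycode does bare
// conversion with no mapping or checks.
func demo() {
	a, err := idna.Lookup.ToASCII("bücher.example")
	fmt.Println(a, err) // xn--bcher-kva.example <nil>

	u, err := idna.Punycode.ToUnicode("xn--bcher-kva.example")
	fmt.Println(u, err) // bücher.example <nil>
}
```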
@@ -251,23 +251,21 @@ var (
punycode = &Profile{}
lookup = &Profile{options{
transitional: true,
useSTD3Rules: true,
validateLabels: true,
removeLeadingDots: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
transitional: true,
useSTD3Rules: true,
validateLabels: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
}}
display = &Profile{options{
useSTD3Rules: true,
validateLabels: true,
removeLeadingDots: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
useSTD3Rules: true,
validateLabels: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
}}
registration = &Profile{options{
useSTD3Rules: true,
@@ -302,14 +300,16 @@ func (e runeError) Error() string {
// see http://www.unicode.org/reports/tr46.
func (p *Profile) process(s string, toASCII bool) (string, error) {
var err error
var isBidi bool
if p.mapping != nil {
s, err = p.mapping(p, s)
s, isBidi, err = p.mapping(p, s)
}
// Remove leading empty labels.
if p.removeLeadingDots {
for ; len(s) > 0 && s[0] == '.'; s = s[1:] {
}
}
// TODO: allow for a quick check of the tables data.
// It seems like we should only create this error on ToASCII, but the
// UTS 46 conformance tests suggests we should always check this.
if err == nil && p.verifyDNSLength && s == "" {
@@ -335,6 +335,7 @@ func (p *Profile) process(s string, toASCII bool) (string, error) {
// Spec says keep the old label.
continue
}
isBidi = isBidi || bidirule.DirectionString(u) != bidi.LeftToRight
labels.set(u)
if err == nil && p.validateLabels {
err = p.fromPuny(p, u)
@@ -349,6 +350,14 @@ func (p *Profile) process(s string, toASCII bool) (string, error) {
err = p.validateLabel(label)
}
}
if isBidi && p.bidirule != nil && err == nil {
for labels.reset(); !labels.done(); labels.next() {
if !p.bidirule(labels.label()) {
err = &labelError{s, "B"}
break
}
}
}
if toASCII {
for labels.reset(); !labels.done(); labels.next() {
label := labels.label()
@@ -380,16 +389,26 @@ func (p *Profile) process(s string, toASCII bool) (string, error) {
return s, err
}
func normalize(p *Profile, s string) (string, error) {
return norm.NFC.String(s), nil
func normalize(p *Profile, s string) (mapped string, isBidi bool, err error) {
// TODO: consider first doing a quick check to see if any of these checks
// need to be done. This will make it slower in the general case, but
// faster in the common case.
mapped = norm.NFC.String(s)
isBidi = bidirule.DirectionString(mapped) == bidi.RightToLeft
return mapped, isBidi, nil
}
func validateRegistration(p *Profile, s string) (string, error) {
func validateRegistration(p *Profile, s string) (idem string, bidi bool, err error) {
// TODO: filter need for normalization in loop below.
if !norm.NFC.IsNormalString(s) {
return s, &labelError{s, "V1"}
return s, false, &labelError{s, "V1"}
}
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
return s, bidi, runeError(utf8.RuneError)
}
bidi = bidi || info(v).isBidi(s[i:])
// Copy bytes not copied so far.
switch p.simplify(info(v).category()) {
// TODO: handle the NV8 defined in the Unicode idna data set to allow
@@ -397,21 +416,50 @@ func validateRegistration(p *Profile, s string) (string, error) {
case valid, deviation:
case disallowed, mapped, unknown, ignored:
r, _ := utf8.DecodeRuneInString(s[i:])
return s, runeError(r)
return s, bidi, runeError(r)
}
i += sz
}
return s, nil
return s, bidi, nil
}
func validateAndMap(p *Profile, s string) (string, error) {
func (c info) isBidi(s string) bool {
if !c.isMapped() {
return c&attributesMask == rtl
}
// TODO: also store bidi info for mapped data. This is possible, but a bit
// cumbersome and not for the common case.
p, _ := bidi.LookupString(s)
switch p.Class() {
case bidi.R, bidi.AL, bidi.AN:
return true
}
return false
}
func validateAndMap(p *Profile, s string) (vm string, bidi bool, err error) {
var (
err error
b []byte
k int
b []byte
k int
)
// combinedInfoBits contains the or-ed bits of all runes. We use this
// to derive the mayNeedNorm bit later. This may trigger normalization
// overeagerly, but it will not do so in the common case. The end result
// is another 10% saving on BenchmarkProfile for the common case.
var combinedInfoBits info
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
b = append(b, s[k:i]...)
b = append(b, "\ufffd"...)
k = len(s)
if err == nil {
err = runeError(utf8.RuneError)
}
break
}
combinedInfoBits |= info(v)
bidi = bidi || info(v).isBidi(s[i:])
start := i
i += sz
// Copy bytes not copied so far.
@@ -438,7 +486,9 @@ func validateAndMap(p *Profile, s string) (string, error) {
}
if k == 0 {
// No changes so far.
s = norm.NFC.String(s)
if combinedInfoBits&mayNeedNorm != 0 {
s = norm.NFC.String(s)
}
} else {
b = append(b, s[k:]...)
if norm.NFC.QuickSpan(b) != len(b) {
@@ -447,7 +497,7 @@ func validateAndMap(p *Profile, s string) (string, error) {
// TODO: the punycode converters require strings as input.
s = string(b)
}
return s, err
return s, bidi, err
}
// A labelIter allows iterating over domain name labels.
@@ -542,8 +592,13 @@ func validateFromPunycode(p *Profile, s string) error {
if !norm.NFC.IsNormalString(s) {
return &labelError{s, "V1"}
}
// TODO: detect whether string may have to be normalized in the following
// loop.
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
return runeError(utf8.RuneError)
}
if c := p.simplify(info(v).category()); c != valid && c != deviation {
return &labelError{s, "V6"}
}
@@ -616,16 +671,13 @@ var joinStates = [][numJoinTypes]joinState{
// validateLabel validates the criteria from Section 4.1. Item 1, 4, and 6 are
// already implicitly satisfied by the overall implementation.
func (p *Profile) validateLabel(s string) error {
func (p *Profile) validateLabel(s string) (err error) {
if s == "" {
if p.verifyDNSLength {
return &labelError{s, "A4"}
}
return nil
}
if p.bidirule != nil && !p.bidirule(s) {
return &labelError{s, "B"}
}
if !p.validateLabels {
return nil
}

vendor/golang.org/x/net/idna/tables.go generated vendored

File diff suppressed because it is too large.

@@ -26,9 +26,9 @@ package idna
// 15..3 index into xor or mapping table
// }
// } else {
// 15..13 unused
// 12 modifier (including virama)
// 11 virama modifier
// 15..14 unused
// 13 mayNeedNorm
// 12..11 attributes
// 10..8 joining type
// 7..3 category type
// }
@@ -49,15 +49,20 @@ const (
joinShift = 8
joinMask = 0x07
viramaModifier = 0x0800
// Attributes
attributesMask = 0x1800
viramaModifier = 0x1800
modifier = 0x1000
rtl = 0x0800
mayNeedNorm = 0x2000
)
// A category corresponds to a category defined in the IDNA mapping table.
type category uint16
const (
unknown category = 0 // not defined currently in unicode.
unknown category = 0 // not currently defined in unicode.
mapped category = 1
disallowedSTD3Mapped category = 2
deviation category = 3
@@ -110,5 +115,5 @@ func (c info) isModifier() bool {
}
func (c info) isViramaModifier() bool {
return c&(viramaModifier|catSmallMask) == viramaModifier
return c&(attributesMask|catSmallMask) == viramaModifier
}

vendor/golang.org/x/oauth2/.travis.yml generated vendored Normal file

@@ -0,0 +1,13 @@
language: go
go:
- tip
install:
- export GOPATH="$HOME/gopath"
- mkdir -p "$GOPATH/src/golang.org/x"
- mv "$TRAVIS_BUILD_DIR" "$GOPATH/src/golang.org/x/oauth2"
- go get -v -t -d golang.org/x/oauth2/...
script:
- go test -v golang.org/x/oauth2/...

vendor/golang.org/x/oauth2/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,31 @@
# Contributing to Go
Go is an open source project.
It is the work of hundreds of contributors. We appreciate your help!
## Filing issues
When [filing an issue](https://github.com/golang/oauth2/issues), make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
General questions should go to the [golang-nuts mailing list](https://groups.google.com/group/golang-nuts) instead of the issue tracker.
The gophers there will answer or ask you to file an issue if you've tripped over a bug.
## Contributing code
Please read the [Contribution Guidelines](https://golang.org/doc/contribute.html)
before sending patches.
**We do not accept GitHub pull requests**
(we use [Gerrit](https://code.google.com/p/gerrit/) instead for code review).
Unless otherwise noted, the Go source files are distributed under
the BSD-style license found in the LICENSE file.


@@ -1,4 +1,4 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Copyright (c) 2009 The oauth2 Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are

74
vendor/golang.org/x/oauth2/README.md generated vendored Normal file

@@ -0,0 +1,74 @@
# OAuth2 for Go
[![Build Status](https://travis-ci.org/golang/oauth2.svg?branch=master)](https://travis-ci.org/golang/oauth2)
[![GoDoc](https://godoc.org/golang.org/x/oauth2?status.svg)](https://godoc.org/golang.org/x/oauth2)
The oauth2 package contains a client implementation for the OAuth 2.0 spec.
## Installation
~~~~
go get golang.org/x/oauth2
~~~~
See godoc for further documentation and examples.
* [godoc.org/golang.org/x/oauth2](http://godoc.org/golang.org/x/oauth2)
* [godoc.org/golang.org/x/oauth2/google](http://godoc.org/golang.org/x/oauth2/google)
## App Engine
In change 96e89be (March 2015) we removed the `oauth2.Context2` type in favor
of the [`context.Context`](https://golang.org/x/net/context#Context) type from
the `golang.org/x/net/context` package.
This means it's no longer possible to use the "Classic App Engine"
`appengine.Context` type with the `oauth2` package. (You're using
Classic App Engine if you import the package `"appengine"`.)
To work around this, you may use the new `"google.golang.org/appengine"`
package. This package has almost the same API as the `"appengine"` package,
but it can be fetched with `go get` and used on "Managed VMs" as well as
Classic App Engine.
See the [new `appengine` package's readme](https://github.com/golang/appengine#updating-a-go-app-engine-app)
for information on updating your app.
If you don't want to update your entire app to use the new App Engine packages,
you may use both sets of packages in parallel, using only the new packages
with the `oauth2` package.
import (
"golang.org/x/net/context"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
newappengine "google.golang.org/appengine"
newurlfetch "google.golang.org/appengine/urlfetch"
"appengine"
)
func handler(w http.ResponseWriter, r *http.Request) {
var c appengine.Context = appengine.NewContext(r)
c.Infof("Logging a message with the old package")
var ctx context.Context = newappengine.NewContext(r)
client := &http.Client{
Transport: &oauth2.Transport{
Source: google.AppEngineTokenSource(ctx, "scope"),
Base: &newurlfetch.Transport{Context: ctx},
},
}
client.Get("...")
}
## Contributing
We appreciate your help!
To contribute, please read the contribution guidelines:
https://golang.org/doc/contribute.html
Note that the Go project does not use GitHub pull requests but
uses Gerrit for code reviews. See the contribution guide for details.

25
vendor/golang.org/x/oauth2/client_appengine.go generated vendored Normal file

@@ -0,0 +1,25 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build appengine
// App Engine hooks.
package oauth2
import (
"net/http"
"golang.org/x/net/context"
"golang.org/x/oauth2/internal"
"google.golang.org/appengine/urlfetch"
)
func init() {
internal.RegisterContextClientFunc(contextClientAppEngine)
}
func contextClientAppEngine(ctx context.Context) (*http.Client, error) {
return urlfetch.Client(ctx), nil
}

76
vendor/golang.org/x/oauth2/internal/oauth2.go generated vendored Normal file

@@ -0,0 +1,76 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package internal contains support packages for oauth2 package.
package internal
import (
"bufio"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
"fmt"
"io"
"strings"
)
// ParseKey converts the binary contents of a private key file
// to an *rsa.PrivateKey. It detects whether the private key is in a
// PEM container or not. If so, it extracts the private key
// from the PEM container before conversion. It only supports PEM
// containers with no passphrase.
func ParseKey(key []byte) (*rsa.PrivateKey, error) {
block, _ := pem.Decode(key)
if block != nil {
key = block.Bytes
}
parsedKey, err := x509.ParsePKCS8PrivateKey(key)
if err != nil {
parsedKey, err = x509.ParsePKCS1PrivateKey(key)
if err != nil {
return nil, fmt.Errorf("private key should be a PEM or plain PKSC1 or PKCS8; parse error: %v", err)
}
}
parsed, ok := parsedKey.(*rsa.PrivateKey)
if !ok {
return nil, errors.New("private key is invalid")
}
return parsed, nil
}
func ParseINI(ini io.Reader) (map[string]map[string]string, error) {
result := map[string]map[string]string{
"": {}, // root section
}
scanner := bufio.NewScanner(ini)
currentSection := ""
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if strings.HasPrefix(line, ";") {
// comment.
continue
}
if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
currentSection = strings.TrimSpace(line[1 : len(line)-1])
result[currentSection] = map[string]string{}
continue
}
parts := strings.SplitN(line, "=", 2)
if len(parts) == 2 && parts[0] != "" {
result[currentSection][strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
}
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("error scanning ini: %v", err)
}
return result, nil
}
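A hypothetical usage sketch for `ParseINI` (input, section, and key are invented; `fmt`, `log`, and `strings` imports elided):

```
cfg, err := ParseINI(strings.NewReader("; comment\n[db]\nhost = localhost\n"))
if err != nil {
	log.Fatal(err)
}
fmt.Println(cfg["db"]["host"]) // prints "localhost"
```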
func CondVal(v string) []string {
if v == "" {
return nil
}
return []string{v}
}

247
vendor/golang.org/x/oauth2/internal/token.go generated vendored Normal file

@@ -0,0 +1,247 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package internal contains support packages for oauth2 package.
package internal
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"mime"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"golang.org/x/net/context"
)
// Token represents the credentials used to authorize
// the requests to access protected resources on the OAuth 2.0
// provider's backend.
//
// This type is a mirror of oauth2.Token and exists to break
// an otherwise-circular dependency. Other internal packages
// should convert this Token into an oauth2.Token before use.
type Token struct {
// AccessToken is the token that authorizes and authenticates
// the requests.
AccessToken string
// TokenType is the type of token.
// The Type method returns either this or "Bearer", the default.
TokenType string
// RefreshToken is a token that's used by the application
// (as opposed to the user) to refresh the access token
// if it expires.
RefreshToken string
// Expiry is the optional expiration time of the access token.
//
// If zero, TokenSource implementations will reuse the same
// token forever and RefreshToken or equivalent
// mechanisms for that TokenSource will not be used.
Expiry time.Time
// Raw optionally contains extra metadata from the server
// when updating a token.
Raw interface{}
}
// tokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type tokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn expirationTime `json:"expires_in"` // at least PayPal returns string, while most return number
Expires expirationTime `json:"expires"` // broken Facebook spelling of expires_in
}
func (e *tokenJSON) expiry() (t time.Time) {
if v := e.ExpiresIn; v != 0 {
return time.Now().Add(time.Duration(v) * time.Second)
}
if v := e.Expires; v != 0 {
return time.Now().Add(time.Duration(v) * time.Second)
}
return
}
type expirationTime int32
func (e *expirationTime) UnmarshalJSON(b []byte) error {
var n json.Number
err := json.Unmarshal(b, &n)
if err != nil {
return err
}
i, err := n.Int64()
if err != nil {
return err
}
*e = expirationTime(i)
return nil
}
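Illustrative behaviour (not part of the diff): because `json.Number` also accepts numeric JSON strings, both shapes of `expires_in` decode to the same value:

```
var e1, e2 expirationTime
_ = json.Unmarshal([]byte(`3600`), &e1)   // number form
_ = json.Unmarshal([]byte(`"3600"`), &e2) // PayPal-style string form
// e1 == e2 == 3600
```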
var brokenAuthHeaderProviders = []string{
"https://accounts.google.com/",
"https://api.codeswholesale.com/oauth/token",
"https://api.dropbox.com/",
"https://api.dropboxapi.com/",
"https://api.instagram.com/",
"https://api.netatmo.net/",
"https://api.odnoklassniki.ru/",
"https://api.pushbullet.com/",
"https://api.soundcloud.com/",
"https://api.twitch.tv/",
"https://app.box.com/",
"https://connect.stripe.com/",
"https://graph.facebook.com", // see https://github.com/golang/oauth2/issues/214
"https://login.microsoftonline.com/",
"https://login.salesforce.com/",
"https://oauth.sandbox.trainingpeaks.com/",
"https://oauth.trainingpeaks.com/",
"https://oauth.vk.com/",
"https://openapi.baidu.com/",
"https://slack.com/",
"https://test-sandbox.auth.corp.google.com",
"https://test.salesforce.com/",
"https://user.gini.net/",
"https://www.douban.com/",
"https://www.googleapis.com/",
"https://www.linkedin.com/",
"https://www.strava.com/oauth/",
"https://www.wunderlist.com/oauth/",
"https://api.patreon.com/",
"https://sandbox.codeswholesale.com/oauth/token",
}
// brokenAuthHeaderDomains lists broken providers that issue dynamic endpoints.
var brokenAuthHeaderDomains = []string{
".force.com",
".okta.com",
".oktapreview.com",
}
func RegisterBrokenAuthHeaderProvider(tokenURL string) {
brokenAuthHeaderProviders = append(brokenAuthHeaderProviders, tokenURL)
}
// providerAuthHeaderWorks reports whether the OAuth2 server identified by the tokenURL
// implements the OAuth2 spec correctly.
// See https://code.google.com/p/goauth2/issues/detail?id=31 for background.
// In summary:
// - Reddit only accepts client secret in the Authorization header
// - Dropbox accepts either it in URL param or Auth header, but not both.
// - Google only accepts URL param (not spec compliant?), not Auth header
// - Stripe only accepts client secret in Auth header with Bearer method, not Basic
func providerAuthHeaderWorks(tokenURL string) bool {
for _, s := range brokenAuthHeaderProviders {
if strings.HasPrefix(tokenURL, s) {
// Some sites fail to implement the OAuth2 spec fully.
return false
}
}
if u, err := url.Parse(tokenURL); err == nil {
for _, s := range brokenAuthHeaderDomains {
if strings.HasSuffix(u.Host, s) {
return false
}
}
}
// Assume the provider implements the spec properly
// otherwise. We can add more exceptions as they're
// discovered. We will _not_ be adding configurable hooks
// to this package to let users select server bugs.
return true
}
func RetrieveToken(ctx context.Context, clientID, clientSecret, tokenURL string, v url.Values) (*Token, error) {
hc, err := ContextClient(ctx)
if err != nil {
return nil, err
}
bustedAuth := !providerAuthHeaderWorks(tokenURL)
if bustedAuth {
if clientID != "" {
v.Set("client_id", clientID)
}
if clientSecret != "" {
v.Set("client_secret", clientSecret)
}
}
req, err := http.NewRequest("POST", tokenURL, strings.NewReader(v.Encode()))
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
if !bustedAuth {
req.SetBasicAuth(clientID, clientSecret)
}
r, err := hc.Do(req)
if err != nil {
return nil, err
}
defer r.Body.Close()
body, err := ioutil.ReadAll(io.LimitReader(r.Body, 1<<20))
if err != nil {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err)
}
if code := r.StatusCode; code < 200 || code > 299 {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v\nResponse: %s", r.Status, body)
}
var token *Token
content, _, _ := mime.ParseMediaType(r.Header.Get("Content-Type"))
switch content {
case "application/x-www-form-urlencoded", "text/plain":
vals, err := url.ParseQuery(string(body))
if err != nil {
return nil, err
}
token = &Token{
AccessToken: vals.Get("access_token"),
TokenType: vals.Get("token_type"),
RefreshToken: vals.Get("refresh_token"),
Raw: vals,
}
e := vals.Get("expires_in")
if e == "" {
// TODO(jbd): Facebook's OAuth2 implementation is broken and
// returns expires_in field in expires. Remove the fallback to expires,
// when Facebook fixes their implementation.
e = vals.Get("expires")
}
expires, _ := strconv.Atoi(e)
if expires != 0 {
token.Expiry = time.Now().Add(time.Duration(expires) * time.Second)
}
default:
var tj tokenJSON
if err = json.Unmarshal(body, &tj); err != nil {
return nil, err
}
token = &Token{
AccessToken: tj.AccessToken,
TokenType: tj.TokenType,
RefreshToken: tj.RefreshToken,
Expiry: tj.expiry(),
Raw: make(map[string]interface{}),
}
json.Unmarshal(body, &token.Raw) // no error checks for optional fields
}
// Don't overwrite `RefreshToken` with an empty value
// if this was a token refreshing request.
if token.RefreshToken == "" {
token.RefreshToken = v.Get("refresh_token")
}
return token, nil
}
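A hypothetical call (the endpoint URL and form values are invented for illustration):

```
tok, err := RetrieveToken(context.Background(), "client-id", "client-secret",
	"https://provider.example/token",
	url.Values{"grant_type": {"authorization_code"}, "code": {"abc"}})
if err != nil {
	// token endpoint rejected the request or was unreachable
}
_ = tok
```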

69
vendor/golang.org/x/oauth2/internal/transport.go generated vendored Normal file

@@ -0,0 +1,69 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package internal contains support packages for oauth2 package.
package internal
import (
"net/http"
"golang.org/x/net/context"
)
// HTTPClient is the context key to use with golang.org/x/net/context's
// WithValue function to associate an *http.Client value with a context.
var HTTPClient ContextKey
// ContextKey is just an empty struct. It exists so HTTPClient can be
// an immutable public variable with a unique type. It's immutable
// because nobody else can create a ContextKey, being unexported.
type ContextKey struct{}
// ContextClientFunc is a func which tries to return an *http.Client
// given a Context value. If it returns an error, the search stops
// with that error. If it returns (nil, nil), the search continues
// down the list of registered funcs.
type ContextClientFunc func(context.Context) (*http.Client, error)
var contextClientFuncs []ContextClientFunc
func RegisterContextClientFunc(fn ContextClientFunc) {
contextClientFuncs = append(contextClientFuncs, fn)
}
func ContextClient(ctx context.Context) (*http.Client, error) {
if ctx != nil {
if hc, ok := ctx.Value(HTTPClient).(*http.Client); ok {
return hc, nil
}
}
for _, fn := range contextClientFuncs {
c, err := fn(ctx)
if err != nil {
return nil, err
}
if c != nil {
return c, nil
}
}
return http.DefaultClient, nil
}
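A minimal sketch (not part of the diff; `time` import elided) of supplying a custom client through the context key that `ContextClient` checks first:

```
custom := &http.Client{Timeout: 10 * time.Second}
ctx := context.WithValue(context.Background(), HTTPClient, custom)
hc, _ := ContextClient(ctx) // hc == custom rather than http.DefaultClient
```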
func ContextTransport(ctx context.Context) http.RoundTripper {
hc, err := ContextClient(ctx)
// This is a rare error case (somebody using nil on App Engine).
if err != nil {
return ErrorTransport{err}
}
return hc.Transport
}
// ErrorTransport returns the specified error on RoundTrip.
// This RoundTripper should be used in rare error cases where
// error handling can be postponed to response handling time.
type ErrorTransport struct{ Err error }
func (t ErrorTransport) RoundTrip(*http.Request) (*http.Response, error) {
return nil, t.Err
}

340
vendor/golang.org/x/oauth2/oauth2.go generated vendored Normal file

@@ -0,0 +1,340 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package oauth2 provides support for making
// OAuth2 authorized and authenticated HTTP requests.
// It can additionally grant authorization with Bearer JWT.
package oauth2
import (
"bytes"
"errors"
"net/http"
"net/url"
"strings"
"sync"
"golang.org/x/net/context"
"golang.org/x/oauth2/internal"
)
// NoContext is the default context you should supply if not using
// your own context.Context (see https://golang.org/x/net/context).
//
// Deprecated: Use context.Background() or context.TODO() instead.
var NoContext = context.TODO()
// RegisterBrokenAuthHeaderProvider registers an OAuth2 server
// identified by the tokenURL prefix as an OAuth2 implementation
// which doesn't support the HTTP Basic authentication
// scheme to authenticate with the authorization server.
// Once a server is registered, credentials (client_id and client_secret)
// will be passed as query parameters rather than being present
// in the Authorization header.
// See https://code.google.com/p/goauth2/issues/detail?id=31 for background.
func RegisterBrokenAuthHeaderProvider(tokenURL string) {
internal.RegisterBrokenAuthHeaderProvider(tokenURL)
}
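A hypothetical registration (the URL is invented); afterwards, credentials for that endpoint are sent as form values instead of a Basic Authorization header:

```
RegisterBrokenAuthHeaderProvider("https://broken.example/oauth/token")
```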
// Config describes a typical 3-legged OAuth2 flow, with both the
// client application information and the server's endpoint URLs.
// For the client credentials 2-legged OAuth2 flow, see the clientcredentials
// package (https://golang.org/x/oauth2/clientcredentials).
type Config struct {
// ClientID is the application's ID.
ClientID string
// ClientSecret is the application's secret.
ClientSecret string
// Endpoint contains the resource server's token endpoint
// URLs. These are constants specific to each server and are
// often available via site-specific packages, such as
// google.Endpoint or github.Endpoint.
Endpoint Endpoint
// RedirectURL is the URL to redirect users back to after they
// complete the OAuth flow on the authorization server's pages.
RedirectURL string
// Scopes specifies optional requested permissions.
Scopes []string
}
// A TokenSource is anything that can return a token.
type TokenSource interface {
// Token returns a token or an error.
// Token must be safe for concurrent use by multiple goroutines.
// The returned Token must not be modified.
Token() (*Token, error)
}
// Endpoint contains the OAuth 2.0 provider's authorization and token
// endpoint URLs.
type Endpoint struct {
AuthURL string
TokenURL string
}
var (
// AccessTypeOnline and AccessTypeOffline are options passed
// to the Options.AuthCodeURL method. They modify the
// "access_type" field that gets sent in the URL returned by
// AuthCodeURL.
//
// Online is the default if neither is specified. If your
// application needs to refresh access tokens when the user
// is not present at the browser, then use offline. This will
// result in your application obtaining a refresh token the
// first time your application exchanges an authorization
// code for a user.
AccessTypeOnline AuthCodeOption = SetAuthURLParam("access_type", "online")
AccessTypeOffline AuthCodeOption = SetAuthURLParam("access_type", "offline")
// ApprovalForce forces the users to view the consent dialog
// and confirm the permissions request at the URL returned
// from AuthCodeURL, even if they've already done so.
ApprovalForce AuthCodeOption = SetAuthURLParam("approval_prompt", "force")
)
// An AuthCodeOption is passed to Config.AuthCodeURL.
type AuthCodeOption interface {
setValue(url.Values)
}
type setParam struct{ k, v string }
func (p setParam) setValue(m url.Values) { m.Set(p.k, p.v) }
// SetAuthURLParam builds an AuthCodeOption which passes key/value parameters
// to a provider's authorization endpoint.
func SetAuthURLParam(key, value string) AuthCodeOption {
return setParam{key, value}
}
// AuthCodeURL returns a URL to the OAuth 2.0 provider's consent page
// that asks for permissions for the required scopes explicitly.
//
// State is a token to protect the user from CSRF attacks. You must
// always provide a non-zero string and validate that it matches the
// state query parameter on your redirect callback.
// See http://tools.ietf.org/html/rfc6749#section-10.12 for more info.
//
// Opts may include AccessTypeOnline or AccessTypeOffline, as well
// as ApprovalForce.
func (c *Config) AuthCodeURL(state string, opts ...AuthCodeOption) string {
var buf bytes.Buffer
buf.WriteString(c.Endpoint.AuthURL)
v := url.Values{
"response_type": {"code"},
"client_id": {c.ClientID},
"redirect_uri": internal.CondVal(c.RedirectURL),
"scope": internal.CondVal(strings.Join(c.Scopes, " ")),
"state": internal.CondVal(state),
}
for _, opt := range opts {
opt.setValue(v)
}
if strings.Contains(c.Endpoint.AuthURL, "?") {
buf.WriteByte('&')
} else {
buf.WriteByte('?')
}
buf.WriteString(v.Encode())
return buf.String()
}
// PasswordCredentialsToken converts a resource owner username and password
// pair into a token.
//
// Per the RFC, this grant type should only be used "when there is a high
// degree of trust between the resource owner and the client (e.g., the client
// is part of the device operating system or a highly privileged application),
// and when other authorization grant types are not available."
// See https://tools.ietf.org/html/rfc6749#section-4.3 for more info.
//
// The HTTP client to use is derived from the context.
// If nil, http.DefaultClient is used.
func (c *Config) PasswordCredentialsToken(ctx context.Context, username, password string) (*Token, error) {
return retrieveToken(ctx, c, url.Values{
"grant_type": {"password"},
"username": {username},
"password": {password},
"scope": internal.CondVal(strings.Join(c.Scopes, " ")),
})
}
// Exchange converts an authorization code into a token.
//
// It is used after a resource provider redirects the user back
// to the Redirect URI (the URL obtained from AuthCodeURL).
//
// The HTTP client to use is derived from the context.
// If a client is not provided via the context, http.DefaultClient is used.
//
// The code will be in the *http.Request.FormValue("code"). Before
// calling Exchange, be sure to validate FormValue("state").
func (c *Config) Exchange(ctx context.Context, code string) (*Token, error) {
return retrieveToken(ctx, c, url.Values{
"grant_type": {"authorization_code"},
"code": {code},
"redirect_uri": internal.CondVal(c.RedirectURL),
})
}
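A sketch of the full 3-legged flow these two methods implement (endpoint URLs, state, and code are invented; handler wiring elided):

```
conf := &Config{
	ClientID:     "client-id",
	ClientSecret: "client-secret",
	RedirectURL:  "https://example.com/callback",
	Scopes:       []string{"profile"},
	Endpoint: Endpoint{
		AuthURL:  "https://provider.example/auth",
		TokenURL: "https://provider.example/token",
	},
}
// 1) Send the user to the consent page; "state" guards against CSRF.
loginURL := conf.AuthCodeURL("random-state", AccessTypeOffline)
// 2) In the callback handler, after validating FormValue("state"):
tok, err := conf.Exchange(context.Background(), "code-from-callback")
```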
// Client returns an HTTP client using the provided token.
// The token will auto-refresh as necessary. The underlying
// HTTP transport will be obtained using the provided context.
// The returned client and its Transport should not be modified.
func (c *Config) Client(ctx context.Context, t *Token) *http.Client {
return NewClient(ctx, c.TokenSource(ctx, t))
}
// TokenSource returns a TokenSource that returns t until t expires,
// automatically refreshing it as necessary using the provided context.
//
// Most users will use Config.Client instead.
func (c *Config) TokenSource(ctx context.Context, t *Token) TokenSource {
tkr := &tokenRefresher{
ctx: ctx,
conf: c,
}
if t != nil {
tkr.refreshToken = t.RefreshToken
}
return &reuseTokenSource{
t: t,
new: tkr,
}
}
// tokenRefresher is a TokenSource that makes "grant_type"=="refresh_token"
// HTTP requests to renew a token using a RefreshToken.
type tokenRefresher struct {
ctx context.Context // used to get HTTP requests
conf *Config
refreshToken string
}
// WARNING: Token is not safe for concurrent access, as it
// updates the tokenRefresher's refreshToken field.
// Within this package, it is used by reuseTokenSource which
// synchronizes calls to this method with its own mutex.
func (tf *tokenRefresher) Token() (*Token, error) {
if tf.refreshToken == "" {
return nil, errors.New("oauth2: token expired and refresh token is not set")
}
tk, err := retrieveToken(tf.ctx, tf.conf, url.Values{
"grant_type": {"refresh_token"},
"refresh_token": {tf.refreshToken},
})
if err != nil {
return nil, err
}
if tf.refreshToken != tk.RefreshToken {
tf.refreshToken = tk.RefreshToken
}
return tk, err
}
// reuseTokenSource is a TokenSource that holds a single token in memory
// and validates its expiry before each call to retrieve it with
// Token. If it's expired, it will be auto-refreshed using the
// new TokenSource.
type reuseTokenSource struct {
new TokenSource // called when t is expired.
mu sync.Mutex // guards t
t *Token
}
// Token returns the current token if it's still valid, else will
// refresh the current token (using r.Context for HTTP client
// information) and return the new one.
func (s *reuseTokenSource) Token() (*Token, error) {
s.mu.Lock()
defer s.mu.Unlock()
if s.t.Valid() {
return s.t, nil
}
t, err := s.new.Token()
if err != nil {
return nil, err
}
s.t = t
return t, nil
}
// StaticTokenSource returns a TokenSource that always returns the same token.
// Because the provided token t is never refreshed, StaticTokenSource is only
// useful for tokens that never expire.
func StaticTokenSource(t *Token) TokenSource {
return staticTokenSource{t}
}
// staticTokenSource is a TokenSource that always returns the same Token.
type staticTokenSource struct {
t *Token
}
func (s staticTokenSource) Token() (*Token, error) {
return s.t, nil
}
// HTTPClient is the context key to use with golang.org/x/net/context's
// WithValue function to associate an *http.Client value with a context.
var HTTPClient internal.ContextKey
// NewClient creates an *http.Client from a Context and TokenSource.
// The returned client is not valid beyond the lifetime of the context.
//
// As a special case, if src is nil, a non-OAuth2 client is returned
// using the provided context. This exists to support related OAuth2
// packages.
func NewClient(ctx context.Context, src TokenSource) *http.Client {
if src == nil {
c, err := internal.ContextClient(ctx)
if err != nil {
return &http.Client{Transport: internal.ErrorTransport{Err: err}}
}
return c
}
return &http.Client{
Transport: &Transport{
Base: internal.ContextTransport(ctx),
Source: ReuseTokenSource(nil, src),
},
}
}
// ReuseTokenSource returns a TokenSource which repeatedly returns the
// same token as long as it's valid, starting with t.
// When its cached token is invalid, a new token is obtained from src.
//
// ReuseTokenSource is typically used to reuse tokens from a cache
// (such as a file on disk) between runs of a program, rather than
// obtaining new tokens unnecessarily.
//
// The initial token t may be nil, in which case the TokenSource is
// wrapped in a caching version if it isn't one already. This also
// means it's always safe to wrap ReuseTokenSource around any other
// TokenSource without adverse effects.
func ReuseTokenSource(t *Token, src TokenSource) TokenSource {
// Don't wrap a reuseTokenSource in itself. That would work,
// but cause an unnecessary number of mutex operations.
// Just build the equivalent one.
if rt, ok := src.(*reuseTokenSource); ok {
if t == nil {
// Just use it directly.
return rt
}
src = rt.new
}
return &reuseTokenSource{
t: t,
new: src,
}
}
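A hypothetical caching pattern (the cache helper is invented; `conf` and `ctx` as in the earlier sketch):

```
cached := loadTokenFromCache() // hypothetical helper; may return nil or an expired token
ts := ReuseTokenSource(cached, conf.TokenSource(ctx, cached))
client := NewClient(ctx, ts) // only hits the token endpoint when the cached token is invalid
```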

158
vendor/golang.org/x/oauth2/token.go generated vendored Normal file

@@ -0,0 +1,158 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"net/http"
"net/url"
"strconv"
"strings"
"time"
"golang.org/x/net/context"
"golang.org/x/oauth2/internal"
)
// expiryDelta determines how much earlier a token should be considered
// expired than its actual expiration time. It is used to avoid late
// expirations due to client-server time mismatches.
const expiryDelta = 10 * time.Second
// Token represents the credentials used to authorize
// the requests to access protected resources on the OAuth 2.0
// provider's backend.
//
// Most users of this package should not access fields of Token
// directly. They're exported mostly for use by related packages
// implementing derivative OAuth2 flows.
type Token struct {
// AccessToken is the token that authorizes and authenticates
// the requests.
AccessToken string `json:"access_token"`
// TokenType is the type of token.
// The Type method returns either this or "Bearer", the default.
TokenType string `json:"token_type,omitempty"`
// RefreshToken is a token that's used by the application
// (as opposed to the user) to refresh the access token
// if it expires.
RefreshToken string `json:"refresh_token,omitempty"`
// Expiry is the optional expiration time of the access token.
//
// If zero, TokenSource implementations will reuse the same
// token forever and RefreshToken or equivalent
// mechanisms for that TokenSource will not be used.
Expiry time.Time `json:"expiry,omitempty"`
// raw optionally contains extra metadata from the server
// when updating a token.
raw interface{}
}
// Type returns t.TokenType if non-empty, else "Bearer".
func (t *Token) Type() string {
if strings.EqualFold(t.TokenType, "bearer") {
return "Bearer"
}
if strings.EqualFold(t.TokenType, "mac") {
return "MAC"
}
if strings.EqualFold(t.TokenType, "basic") {
return "Basic"
}
if t.TokenType != "" {
return t.TokenType
}
return "Bearer"
}
// SetAuthHeader sets the Authorization header to r using the access
// token in t.
//
// This method is unnecessary when using Transport or an HTTP Client
// returned by this package.
func (t *Token) SetAuthHeader(r *http.Request) {
r.Header.Set("Authorization", t.Type()+" "+t.AccessToken)
}
// WithExtra returns a new Token that's a clone of t, but using the
// provided raw extra map. This is only intended for use by packages
// implementing derivative OAuth2 flows.
func (t *Token) WithExtra(extra interface{}) *Token {
t2 := new(Token)
*t2 = *t
t2.raw = extra
return t2
}
// Extra returns an extra field.
// Extra fields are key-value pairs returned by the server as a
// part of the token retrieval response.
func (t *Token) Extra(key string) interface{} {
if raw, ok := t.raw.(map[string]interface{}); ok {
return raw[key]
}
vals, ok := t.raw.(url.Values)
if !ok {
return nil
}
v := vals.Get(key)
switch s := strings.TrimSpace(v); strings.Count(s, ".") {
case 0: // Contains no "."; try to parse as int
if i, err := strconv.ParseInt(s, 10, 64); err == nil {
return i
}
case 1: // Contains a single "."; try to parse as float
if f, err := strconv.ParseFloat(s, 64); err == nil {
return f
}
}
return v
}
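Illustrative behaviour of `Extra` with a form-encoded response (values invented; note the dot-counting heuristic above):

```
tok := (&Token{}).WithExtra(url.Values{"expires_in": {"3600"}, "scope": {"a.b"}})
fmt.Println(tok.Extra("expires_in")) // int64(3600): no dot, parses as an int
fmt.Println(tok.Extra("scope"))      // "a.b": one dot but not a float, returned as the raw string
```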
// expired reports whether the token is expired.
// t must be non-nil.
func (t *Token) expired() bool {
if t.Expiry.IsZero() {
return false
}
return t.Expiry.Add(-expiryDelta).Before(time.Now())
}
// Valid reports whether t is non-nil, has an AccessToken, and is not expired.
func (t *Token) Valid() bool {
return t != nil && t.AccessToken != "" && !t.expired()
}
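Worked example (illustrative): with the 10-second `expiryDelta` above, a token expiring 5 seconds from now is already invalid:

```
t := &Token{AccessToken: "x", Expiry: time.Now().Add(5 * time.Second)}
fmt.Println(t.Valid()) // false: Expiry minus expiryDelta is already in the past
```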
// tokenFromInternal maps an *internal.Token struct into
// a *Token struct.
func tokenFromInternal(t *internal.Token) *Token {
if t == nil {
return nil
}
return &Token{
AccessToken: t.AccessToken,
TokenType: t.TokenType,
RefreshToken: t.RefreshToken,
Expiry: t.Expiry,
raw: t.Raw,
}
}
// retrieveToken takes a *Config and uses that to retrieve an *internal.Token.
// This token is then mapped from *internal.Token into an *oauth2.Token which is returned along
// with an error.
func retrieveToken(ctx context.Context, c *Config, v url.Values) (*Token, error) {
tk, err := internal.RetrieveToken(ctx, c.ClientID, c.ClientSecret, c.Endpoint.TokenURL, v)
if err != nil {
return nil, err
}
return tokenFromInternal(tk), nil
}

132
vendor/golang.org/x/oauth2/transport.go generated vendored Normal file

@@ -0,0 +1,132 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"errors"
"io"
"net/http"
"sync"
)
// Transport is an http.RoundTripper that makes OAuth 2.0 HTTP requests,
// wrapping a base RoundTripper and adding an Authorization header
// with a token from the supplied Source.
//
// Transport is a low-level mechanism. Most code will use the
// higher-level Config.Client method instead.
type Transport struct {
// Source supplies the token to add to outgoing requests'
// Authorization headers.
Source TokenSource
// Base is the base RoundTripper used to make HTTP requests.
// If nil, http.DefaultTransport is used.
Base http.RoundTripper
mu sync.Mutex // guards modReq
modReq map[*http.Request]*http.Request // original -> modified
}
// RoundTrip authorizes and authenticates the request with an
// access token. If no token exists or the token is expired, it
// tries to refresh/fetch a new token.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
if t.Source == nil {
return nil, errors.New("oauth2: Transport's Source is nil")
}
token, err := t.Source.Token()
if err != nil {
return nil, err
}
req2 := cloneRequest(req) // per RoundTripper contract
token.SetAuthHeader(req2)
t.setModReq(req, req2)
res, err := t.base().RoundTrip(req2)
if err != nil {
t.setModReq(req, nil)
return nil, err
}
res.Body = &onEOFReader{
rc: res.Body,
fn: func() { t.setModReq(req, nil) },
}
return res, nil
}
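A minimal usage sketch (token and URL are invented):

```
client := &http.Client{
	Transport: &Transport{Source: StaticTokenSource(&Token{AccessToken: "secret"})},
}
resp, err := client.Get("https://api.example.com/resource")
// every request goes out with "Authorization: Bearer secret"
```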
// CancelRequest cancels an in-flight request by closing its connection.
func (t *Transport) CancelRequest(req *http.Request) {
type canceler interface {
CancelRequest(*http.Request)
}
if cr, ok := t.base().(canceler); ok {
t.mu.Lock()
modReq := t.modReq[req]
delete(t.modReq, req)
t.mu.Unlock()
cr.CancelRequest(modReq)
}
}
func (t *Transport) base() http.RoundTripper {
if t.Base != nil {
return t.Base
}
return http.DefaultTransport
}
func (t *Transport) setModReq(orig, mod *http.Request) {
t.mu.Lock()
defer t.mu.Unlock()
if t.modReq == nil {
t.modReq = make(map[*http.Request]*http.Request)
}
if mod == nil {
delete(t.modReq, orig)
} else {
t.modReq[orig] = mod
}
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// shallow copy of the struct
r2 := new(http.Request)
*r2 = *r
// deep copy of the Header
r2.Header = make(http.Header, len(r.Header))
for k, s := range r.Header {
r2.Header[k] = append([]string(nil), s...)
}
return r2
}
type onEOFReader struct {
rc io.ReadCloser
fn func()
}
func (r *onEOFReader) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
if err == io.EOF {
r.runFunc()
}
return
}
func (r *onEOFReader) Close() error {
err := r.rc.Close()
r.runFunc()
return err
}
func (r *onEOFReader) runFunc() {
if fn := r.fn; fn != nil {
fn()
r.fn = nil
}
}


@@ -4,6 +4,9 @@ go:
- 1.4
- 1.5
- 1.6
- 1.7
- 1.8
- 1.9
- tip
go_import_path: gopkg.in/yaml.v2

208
vendor/gopkg.in/yaml.v2/LICENSE generated vendored

@@ -1,13 +1,201 @@
Copyright 2011-2016 Canonical Ltd.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
http://www.apache.org/licenses/LICENSE-2.0
1. Definitions.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

4
vendor/gopkg.in/yaml.v2/README.md generated vendored

@@ -48,8 +48,6 @@ The yaml package is licensed under the Apache License 2.0. Please see the LICENS
Example
-------
Some more examples can be found in the "examples" folder.
```Go
package main
@@ -67,6 +65,8 @@ b:
d: [3, 4]
`
// Note: struct fields must be public in order for unmarshal to
// correctly populate the data.
type T struct {
A string
B struct {

55
vendor/gopkg.in/yaml.v2/apic.go generated vendored

@@ -2,7 +2,6 @@ package yaml
import (
"io"
"os"
)
func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
@@ -48,9 +47,9 @@ func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err
return n, nil
}
// File read handler.
func yaml_file_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
return parser.input_file.Read(buffer)
// Reader read handler.
func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
return parser.input_reader.Read(buffer)
}
// Set a string input.
@@ -64,12 +63,12 @@ func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
}
// Set a file input.
func yaml_parser_set_input_file(parser *yaml_parser_t, file *os.File) {
func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
parser.read_handler = yaml_file_read_handler
parser.input_file = file
parser.read_handler = yaml_reader_read_handler
parser.input_reader = r
}
// Set the source encoding.
@@ -81,14 +80,13 @@ func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
}
// Create a new emitter object.
func yaml_emitter_initialize(emitter *yaml_emitter_t) bool {
func yaml_emitter_initialize(emitter *yaml_emitter_t) {
*emitter = yaml_emitter_t{
buffer: make([]byte, output_buffer_size),
raw_buffer: make([]byte, 0, output_raw_buffer_size),
states: make([]yaml_emitter_state_t, 0, initial_stack_size),
events: make([]yaml_event_t, 0, initial_queue_size),
}
return true
}
// Destroy an emitter object.
@@ -102,9 +100,10 @@ func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
return nil
}
// File write handler.
func yaml_file_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
_, err := emitter.output_file.Write(buffer)
// yaml_writer_write_handler uses emitter.output_writer to write the
// emitted text.
func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
_, err := emitter.output_writer.Write(buffer)
return err
}
@@ -118,12 +117,12 @@ func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]by
}
// Set a file output.
func yaml_emitter_set_output_file(emitter *yaml_emitter_t, file io.Writer) {
func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
emitter.write_handler = yaml_file_write_handler
emitter.output_file = file
emitter.write_handler = yaml_writer_write_handler
emitter.output_writer = w
}
// Set the output encoding.
@@ -252,41 +251,41 @@ func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
//
// Create STREAM-START.
func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) bool {
func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) {
*event = yaml_event_t{
typ: yaml_STREAM_START_EVENT,
encoding: encoding,
}
return true
}
// Create STREAM-END.
func yaml_stream_end_event_initialize(event *yaml_event_t) bool {
func yaml_stream_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_STREAM_END_EVENT,
}
return true
}
// Create DOCUMENT-START.
func yaml_document_start_event_initialize(event *yaml_event_t, version_directive *yaml_version_directive_t,
tag_directives []yaml_tag_directive_t, implicit bool) bool {
func yaml_document_start_event_initialize(
event *yaml_event_t,
version_directive *yaml_version_directive_t,
tag_directives []yaml_tag_directive_t,
implicit bool,
) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_START_EVENT,
version_directive: version_directive,
tag_directives: tag_directives,
implicit: implicit,
}
return true
}
// Create DOCUMENT-END.
func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) bool {
func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_END_EVENT,
implicit: implicit,
}
return true
}
///*
@@ -348,7 +347,7 @@ func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
}
// Create MAPPING-START.
func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) bool {
func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_START_EVENT,
anchor: anchor,
@@ -356,15 +355,13 @@ func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte
implicit: implicit,
style: yaml_style_t(style),
}
return true
}
// Create MAPPING-END.
func yaml_mapping_end_event_initialize(event *yaml_event_t) bool {
func yaml_mapping_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_END_EVENT,
}
return true
}
// Destroy an event object.
@@ -471,7 +468,7 @@ func yaml_event_delete(event *yaml_event_t) {
// } context
// tag_directive *yaml_tag_directive_t
//
// context.error = YAML_NO_ERROR // Eliminate a compliler warning.
// context.error = YAML_NO_ERROR // Eliminate a compiler warning.
//
// assert(document) // Non-NULL document object is expected.
//

282
vendor/gopkg.in/yaml.v2/decode.go generated vendored

@@ -4,6 +4,7 @@ import (
"encoding"
"encoding/base64"
"fmt"
"io"
"math"
"reflect"
"strconv"
@@ -22,19 +23,22 @@ type node struct {
kind int
line, column int
tag string
value string
implicit bool
children []*node
anchors map[string]*node
// For an alias node, alias holds the resolved alias.
alias *node
value string
implicit bool
children []*node
anchors map[string]*node
}
// ----------------------------------------------------------------------------
// Parser, produces a node tree out of a libyaml event stream.
type parser struct {
parser yaml_parser_t
event yaml_event_t
doc *node
parser yaml_parser_t
event yaml_event_t
doc *node
doneInit bool
}
func newParser(b []byte) *parser {
@@ -42,21 +46,30 @@ func newParser(b []byte) *parser {
if !yaml_parser_initialize(&p.parser) {
panic("failed to initialize YAML emitter")
}
if len(b) == 0 {
b = []byte{'\n'}
}
yaml_parser_set_input_string(&p.parser, b)
p.skip()
if p.event.typ != yaml_STREAM_START_EVENT {
panic("expected stream start event, got " + strconv.Itoa(int(p.event.typ)))
}
p.skip()
return &p
}
func newParserFromReader(r io.Reader) *parser {
p := parser{}
if !yaml_parser_initialize(&p.parser) {
panic("failed to initialize YAML emitter")
}
yaml_parser_set_input_reader(&p.parser, r)
return &p
}
func (p *parser) init() {
if p.doneInit {
return
}
p.expect(yaml_STREAM_START_EVENT)
p.doneInit = true
}
func (p *parser) destroy() {
if p.event.typ != yaml_NO_EVENT {
yaml_event_delete(&p.event)
@@ -64,16 +77,35 @@ func (p *parser) destroy() {
yaml_parser_delete(&p.parser)
}
func (p *parser) skip() {
if p.event.typ != yaml_NO_EVENT {
if p.event.typ == yaml_STREAM_END_EVENT {
failf("attempted to go past the end of stream; corrupted value?")
// expect consumes an event from the event stream and
// checks that it's of the expected type.
func (p *parser) expect(e yaml_event_type_t) {
if p.event.typ == yaml_NO_EVENT {
if !yaml_parser_parse(&p.parser, &p.event) {
p.fail()
}
yaml_event_delete(&p.event)
}
if p.event.typ == yaml_STREAM_END_EVENT {
failf("attempted to go past the end of stream; corrupted value?")
}
if p.event.typ != e {
p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ)
p.fail()
}
yaml_event_delete(&p.event)
p.event.typ = yaml_NO_EVENT
}
// peek peeks at the next event in the event stream,
// puts the results into p.event and returns the event type.
func (p *parser) peek() yaml_event_type_t {
if p.event.typ != yaml_NO_EVENT {
return p.event.typ
}
if !yaml_parser_parse(&p.parser, &p.event) {
p.fail()
}
return p.event.typ
}
func (p *parser) fail() {
@@ -81,6 +113,10 @@ func (p *parser) fail() {
var line int
if p.parser.problem_mark.line != 0 {
line = p.parser.problem_mark.line
// Scanner errors don't iterate line before returning error
if p.parser.error == yaml_SCANNER_ERROR {
line++
}
} else if p.parser.context_mark.line != 0 {
line = p.parser.context_mark.line
}
@@ -103,7 +139,8 @@ func (p *parser) anchor(n *node, anchor []byte) {
}
func (p *parser) parse() *node {
switch p.event.typ {
p.init()
switch p.peek() {
case yaml_SCALAR_EVENT:
return p.scalar()
case yaml_ALIAS_EVENT:
@@ -118,7 +155,7 @@ func (p *parser) parse() *node {
// Happens when attempting to decode an empty buffer.
return nil
default:
panic("attempted to parse unknown event: " + strconv.Itoa(int(p.event.typ)))
panic("attempted to parse unknown event: " + p.event.typ.String())
}
}
@@ -134,19 +171,20 @@ func (p *parser) document() *node {
n := p.node(documentNode)
n.anchors = make(map[string]*node)
p.doc = n
p.skip()
p.expect(yaml_DOCUMENT_START_EVENT)
n.children = append(n.children, p.parse())
if p.event.typ != yaml_DOCUMENT_END_EVENT {
panic("expected end of document event but got " + strconv.Itoa(int(p.event.typ)))
}
p.skip()
p.expect(yaml_DOCUMENT_END_EVENT)
return n
}
func (p *parser) alias() *node {
n := p.node(aliasNode)
n.value = string(p.event.anchor)
p.skip()
n.alias = p.doc.anchors[n.value]
if n.alias == nil {
failf("unknown anchor '%s' referenced", n.value)
}
p.expect(yaml_ALIAS_EVENT)
return n
}
@@ -156,29 +194,29 @@ func (p *parser) scalar() *node {
n.tag = string(p.event.tag)
n.implicit = p.event.implicit
p.anchor(n, p.event.anchor)
p.skip()
p.expect(yaml_SCALAR_EVENT)
return n
}
func (p *parser) sequence() *node {
n := p.node(sequenceNode)
p.anchor(n, p.event.anchor)
p.skip()
for p.event.typ != yaml_SEQUENCE_END_EVENT {
p.expect(yaml_SEQUENCE_START_EVENT)
for p.peek() != yaml_SEQUENCE_END_EVENT {
n.children = append(n.children, p.parse())
}
p.skip()
p.expect(yaml_SEQUENCE_END_EVENT)
return n
}
func (p *parser) mapping() *node {
n := p.node(mappingNode)
p.anchor(n, p.event.anchor)
p.skip()
for p.event.typ != yaml_MAPPING_END_EVENT {
p.expect(yaml_MAPPING_START_EVENT)
for p.peek() != yaml_MAPPING_END_EVENT {
n.children = append(n.children, p.parse(), p.parse())
}
p.skip()
p.expect(yaml_MAPPING_END_EVENT)
return n
}
@@ -187,10 +225,14 @@ func (p *parser) mapping() *node {
type decoder struct {
doc *node
aliases map[string]bool
aliases map[*node]bool
mapType reflect.Type
terrors []string
strict bool
decodeCount int
aliasCount int
aliasDepth int
}
var (
@@ -198,11 +240,13 @@ var (
durationType = reflect.TypeOf(time.Duration(0))
defaultMapType = reflect.TypeOf(map[interface{}]interface{}{})
ifaceType = defaultMapType.Elem()
timeType = reflect.TypeOf(time.Time{})
ptrTimeType = reflect.TypeOf(&time.Time{})
)
func newDecoder(strict bool) *decoder {
d := &decoder{mapType: defaultMapType, strict: strict}
d.aliases = make(map[string]bool)
d.aliases = make(map[*node]bool)
return d
}
@@ -251,7 +295,7 @@ func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) {
//
// If n holds a null value, prepare returns before doing anything.
func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) {
if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "" && n.implicit) {
if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) {
return out, false, false
}
again := true
@@ -274,7 +318,39 @@ func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unm
return out, false, false
}
const (
// 400,000 decode operations is ~500kb of dense object declarations, or ~5kb of dense object declarations with 10000% alias expansion
alias_ratio_range_low = 400000
// 4,000,000 decode operations is ~5MB of dense object declarations, or ~4.5MB of dense object declarations with 10% alias expansion
alias_ratio_range_high = 4000000
// alias_ratio_range is the range over which we scale allowed alias ratios
alias_ratio_range = float64(alias_ratio_range_high - alias_ratio_range_low)
)
func allowedAliasRatio(decodeCount int) float64 {
switch {
case decodeCount <= alias_ratio_range_low:
// allow 99% to come from alias expansion for small-to-medium documents
return 0.99
case decodeCount >= alias_ratio_range_high:
// allow 10% to come from alias expansion for very large documents
return 0.10
default:
// scale smoothly from 99% down to 10% over the range.
// this maps to 396,000 - 400,000 allowed alias-driven decodes over the range.
// 400,000 decode operations is ~100MB of allocations in worst-case scenarios (single-item maps).
return 0.99 - 0.89*(float64(decodeCount-alias_ratio_range_low)/alias_ratio_range)
}
}
func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) {
d.decodeCount++
if d.aliasDepth > 0 {
d.aliasCount++
}
if d.aliasCount > 100 && d.decodeCount > 1000 && float64(d.aliasCount)/float64(d.decodeCount) > allowedAliasRatio(d.decodeCount) {
failf("document contains excessive aliasing")
}
switch n.kind {
case documentNode:
return d.document(n, out)
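The hunk above is the guard against alias-bombing ("billion laughs") documents: aliasCount grows only while aliasDepth > 0, so once alias expansion accounts for more than allowedAliasRatio of all decode operations the decode aborts. A minimal sketch of how this surfaces to callers, assuming the vendored gopkg.in/yaml.v2 at this revision (the document below is illustrative):

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	// Each level aliases the previous one nine times, so the expanded
	// document grows exponentially while the input stays tiny.
	doc := `
a: &a ["lol","lol","lol","lol","lol","lol","lol","lol","lol"]
b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
`
	var v interface{}
	// Expected to fail fast with "yaml: document contains excessive
	// aliasing" instead of allocating gigabytes of single-item maps.
	fmt.Println(yaml.Unmarshal([]byte(doc), &v))
}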
@@ -308,16 +384,15 @@ func (d *decoder) document(n *node, out reflect.Value) (good bool) {
}
func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
an, ok := d.doc.anchors[n.value]
if !ok {
failf("unknown anchor '%s' referenced", n.value)
}
if d.aliases[n.value] {
if d.aliases[n] {
// TODO this could actually be allowed in some circumstances.
failf("anchor '%s' value contains itself", n.value)
}
d.aliases[n.value] = true
good = d.unmarshal(an, out)
delete(d.aliases, n.value)
d.aliases[n] = true
d.aliasDepth++
good = d.unmarshal(n.alias, out)
d.aliasDepth--
delete(d.aliases, n)
return good
}
@@ -329,7 +404,7 @@ func resetMap(out reflect.Value) {
}
}
func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
func (d *decoder) scalar(n *node, out reflect.Value) bool {
var tag string
var resolved interface{}
if n.tag == "" && !n.implicit {
@@ -353,9 +428,26 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
}
return true
}
if s, ok := resolved.(string); ok && out.CanAddr() {
if u, ok := out.Addr().Interface().(encoding.TextUnmarshaler); ok {
err := u.UnmarshalText([]byte(s))
if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
// We've resolved to exactly the type we want, so use that.
out.Set(resolvedv)
return true
}
// Perhaps we can use the value as a TextUnmarshaler to
// set its value.
if out.CanAddr() {
u, ok := out.Addr().Interface().(encoding.TextUnmarshaler)
if ok {
var text []byte
if tag == yaml_BINARY_TAG {
text = []byte(resolved.(string))
} else {
// We let any value be unmarshaled into TextUnmarshaler.
// That might be more lax than we'd like, but the
// TextUnmarshaler itself should bowl out any dubious values.
text = []byte(n.value)
}
err := u.UnmarshalText(text)
if err != nil {
fail(err)
}
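With the block above, a scalar is offered to any target whose pointer implements encoding.TextUnmarshaler, with !!binary scalars base64-decoded first. A small sketch, assuming the vendored yaml.v2 at this revision; net.IP is just one stdlib type implementing TextUnmarshaler:

package main

import (
	"fmt"
	"net"

	"gopkg.in/yaml.v2"
)

func main() {
	var cfg struct {
		Addr net.IP `yaml:"addr"`
	}
	// The raw scalar text is handed to (*net.IP).UnmarshalText rather
	// than going through plain string decoding.
	if err := yaml.Unmarshal([]byte("addr: 192.168.0.1"), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Addr) // 192.168.0.1
}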
@@ -366,46 +458,54 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
case reflect.String:
if tag == yaml_BINARY_TAG {
out.SetString(resolved.(string))
good = true
} else if resolved != nil {
return true
}
if resolved != nil {
out.SetString(n.value)
good = true
return true
}
case reflect.Interface:
if resolved == nil {
out.Set(reflect.Zero(out.Type()))
} else if tag == yaml_TIMESTAMP_TAG {
// It looks like a timestamp but for backward compatibility
// reasons we set it as a string, so that code that unmarshals
// timestamp-like values into interface{} will continue to
// see a string and not a time.Time.
// TODO(v3) Drop this.
out.Set(reflect.ValueOf(n.value))
} else {
out.Set(reflect.ValueOf(resolved))
}
good = true
return true
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch resolved := resolved.(type) {
case int:
if !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
good = true
return true
}
case int64:
if !out.OverflowInt(resolved) {
out.SetInt(resolved)
good = true
return true
}
case uint64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
good = true
return true
}
case float64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
good = true
return true
}
case string:
if out.Type() == durationType {
d, err := time.ParseDuration(resolved)
if err == nil {
out.SetInt(int64(d))
good = true
return true
}
}
}
@@ -414,44 +514,49 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
case int:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
good = true
return true
}
case int64:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
good = true
return true
}
case uint64:
if !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
good = true
return true
}
case float64:
if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
good = true
return true
}
}
case reflect.Bool:
switch resolved := resolved.(type) {
case bool:
out.SetBool(resolved)
good = true
return true
}
case reflect.Float32, reflect.Float64:
switch resolved := resolved.(type) {
case int:
out.SetFloat(float64(resolved))
good = true
return true
case int64:
out.SetFloat(float64(resolved))
good = true
return true
case uint64:
out.SetFloat(float64(resolved))
good = true
return true
case float64:
out.SetFloat(resolved)
good = true
return true
}
case reflect.Struct:
if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
out.Set(resolvedv)
return true
}
case reflect.Ptr:
if out.Type().Elem() == reflect.TypeOf(resolved) {
@@ -459,13 +564,11 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
elem := reflect.New(out.Type().Elem())
elem.Elem().Set(reflect.ValueOf(resolved))
out.Set(elem)
good = true
return true
}
}
if !good {
d.terror(n, tag, out)
}
return good
d.terror(n, tag, out)
return false
}
func settableValueOf(i interface{}) reflect.Value {
@@ -482,6 +585,10 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
switch out.Kind() {
case reflect.Slice:
out.Set(reflect.MakeSlice(out.Type(), l, l))
case reflect.Array:
if l != out.Len() {
failf("invalid array: want %d elements but got %d", out.Len(), l)
}
case reflect.Interface:
// No type hints. Will have to use a generic sequence.
iface = out
@@ -500,7 +607,9 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
j++
}
}
out.Set(out.Slice(0, j))
if out.Kind() != reflect.Array {
out.Set(out.Slice(0, j))
}
if iface.IsValid() {
iface.Set(out)
}
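The two hunks above extend sequence decoding to fixed-size arrays: the element count must match exactly, and the final re-slice is skipped because arrays cannot be sliced in place. Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var rgb [3]uint8
	if err := yaml.Unmarshal([]byte("[255, 128, 0]"), &rgb); err != nil {
		panic(err)
	}
	fmt.Println(rgb) // [255 128 0]

	// A length mismatch is rejected up front rather than silently truncated.
	fmt.Println(yaml.Unmarshal([]byte("[1, 2]"), &rgb))
	// yaml: invalid array: want 3 elements but got 2
}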
@@ -561,7 +670,7 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
}
e := reflect.New(et).Elem()
if d.unmarshal(n.children[i+1], e) {
out.SetMapIndex(k, e)
d.setMapIndex(n.children[i+1], out, k, e)
}
}
}
@@ -569,6 +678,14 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
return true
}
func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) {
if d.strict && out.MapIndex(k) != zeroValue {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface()))
return
}
out.SetMapIndex(k, v)
}
func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) {
outt := out.Type()
if outt.Elem() != mapItemType {
@@ -616,6 +733,10 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
elemType = inlineMap.Type().Elem()
}
var doneFields []bool
if d.strict {
doneFields = make([]bool, len(sinfo.FieldsList))
}
for i := 0; i < l; i += 2 {
ni := n.children[i]
if isMerge(ni) {
@@ -626,6 +747,13 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
continue
}
if info, ok := sinfo.FieldsMap[name.String()]; ok {
if d.strict {
if doneFields[info.Id] {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type()))
continue
}
doneFields[info.Id] = true
}
var field reflect.Value
if info.Inline == nil {
field = out.Field(info.Num)
@@ -639,9 +767,9 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
}
value := reflect.New(elemType).Elem()
d.unmarshal(n.children[i+1], value)
inlineMap.SetMapIndex(name, value)
d.setMapIndex(n.children[i+1], inlineMap, name, value)
} else if d.strict {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in struct %s", n.line+1, name.String(), out.Type()))
d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type()))
}
}
return true
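setMapIndex and the doneFields bookkeeping above make strict mode reject duplicate mapping keys and duplicate struct fields instead of silently keeping the last value. Sketch, assuming the vendored yaml.v2 at this revision (type T is illustrative):

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type T struct {
	A int `yaml:"a"`
}

func main() {
	var t T
	fmt.Println(yaml.UnmarshalStrict([]byte("a: 1\na: 2\n"), &t))
	// yaml: unmarshal errors:
	//   line 2: field a already set in type main.T

	m := map[string]int{}
	fmt.Println(yaml.UnmarshalStrict([]byte("a: 1\na: 2\n"), &m))
	// yaml: unmarshal errors:
	//   line 2: key "a" already set in map
}

Plain Unmarshal keeps the old last-one-wins behaviour; only the strict entry points collect these as type errors.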

vendor/gopkg.in/yaml.v2/emitterc.go generated vendored

@@ -2,6 +2,7 @@ package yaml
import (
"bytes"
"fmt"
)
// Flush the buffer if needed.
@@ -664,7 +665,7 @@ func yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t,
return yaml_emitter_emit_mapping_start(emitter, event)
default:
return yaml_emitter_set_emitter_error(emitter,
"expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS")
fmt.Sprintf("expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS, but got %v", event.typ))
}
}
@@ -842,7 +843,7 @@ func yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event
return true
}
// Write an achor.
// Write an anchor.
func yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool {
if emitter.anchor_data.anchor == nil {
return true
@@ -995,9 +996,9 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool {
space_break = false
preceded_by_whitespace = false
followed_by_whitespace = false
previous_space = false
previous_break = false
followed_by_whitespace = false
previous_space = false
previous_break = false
)
emitter.scalar_data.value = value

vendor/gopkg.in/yaml.v2/encode.go generated vendored

@@ -3,38 +3,67 @@ package yaml
import (
"encoding"
"fmt"
"io"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
"time"
"unicode/utf8"
)
// jsonNumber is the interface of the encoding/json.Number datatype.
// Repeating the interface here avoids a dependency on encoding/json, and also
// supports other libraries like jsoniter, which use a similar datatype with
// the same interface. Detecting this interface is useful when dealing with
// structures containing json.Number, which is a string under the hood. The
// encoder should prefer the use of Int64(), Float64() and string(), in that
// order, when encoding this type.
type jsonNumber interface {
Float64() (float64, error)
Int64() (int64, error)
String() string
}
type encoder struct {
emitter yaml_emitter_t
event yaml_event_t
out []byte
flow bool
// doneInit holds whether the initial stream_start_event has been
// emitted.
doneInit bool
}
func newEncoder() (e *encoder) {
e = &encoder{}
e.must(yaml_emitter_initialize(&e.emitter))
func newEncoder() *encoder {
e := &encoder{}
yaml_emitter_initialize(&e.emitter)
yaml_emitter_set_output_string(&e.emitter, &e.out)
yaml_emitter_set_unicode(&e.emitter, true)
e.must(yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING))
e.emit()
e.must(yaml_document_start_event_initialize(&e.event, nil, nil, true))
e.emit()
return e
}
func (e *encoder) finish() {
e.must(yaml_document_end_event_initialize(&e.event, true))
func newEncoderWithWriter(w io.Writer) *encoder {
e := &encoder{}
yaml_emitter_initialize(&e.emitter)
yaml_emitter_set_output_writer(&e.emitter, w)
yaml_emitter_set_unicode(&e.emitter, true)
return e
}
func (e *encoder) init() {
if e.doneInit {
return
}
yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)
e.emit()
e.doneInit = true
}
func (e *encoder) finish() {
e.emitter.open_ended = false
e.must(yaml_stream_end_event_initialize(&e.event))
yaml_stream_end_event_initialize(&e.event)
e.emit()
}
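The jsonNumber interface above lets the encoder special-case encoding/json.Number (and look-alikes such as jsoniter's) without importing encoding/json: Int64 is tried first, then Float64, then the raw string. Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"encoding/json"
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	// json.Number is a string under the hood; the encoder prefers
	// Int64(), then Float64(), then String(), in that order.
	vals := []json.Number{"42", "3.14", "not-a-number"}
	out, err := yaml.Marshal(vals)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// - 42
	// - 3.14
	// - not-a-number
}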
@@ -44,9 +73,7 @@ func (e *encoder) destroy() {
func (e *encoder) emit() {
// This will internally delete the e.event value.
if !yaml_emitter_emit(&e.emitter, &e.event) && e.event.typ != yaml_DOCUMENT_END_EVENT && e.event.typ != yaml_STREAM_END_EVENT {
e.must(false)
}
e.must(yaml_emitter_emit(&e.emitter, &e.event))
}
func (e *encoder) must(ok bool) {
@@ -59,13 +86,43 @@ func (e *encoder) must(ok bool) {
}
}
func (e *encoder) marshalDoc(tag string, in reflect.Value) {
e.init()
yaml_document_start_event_initialize(&e.event, nil, nil, true)
e.emit()
e.marshal(tag, in)
yaml_document_end_event_initialize(&e.event, true)
e.emit()
}
func (e *encoder) marshal(tag string, in reflect.Value) {
if !in.IsValid() {
if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() {
e.nilv()
return
}
iface := in.Interface()
if m, ok := iface.(Marshaler); ok {
switch m := iface.(type) {
case jsonNumber:
integer, err := m.Int64()
if err == nil {
// In this case the json.Number is a valid int64
in = reflect.ValueOf(integer)
break
}
float, err := m.Float64()
if err == nil {
// In this case the json.Number is a valid float64
in = reflect.ValueOf(float)
break
}
// fallback case - no number could be obtained
in = reflect.ValueOf(m.String())
case time.Time, *time.Time:
// Although time.Time implements TextMarshaler,
// we don't want to treat it as a string for YAML
// purposes because YAML has special support for
// timestamps.
case Marshaler:
v, err := m.MarshalYAML()
if err != nil {
fail(err)
@@ -75,31 +132,34 @@ func (e *encoder) marshal(tag string, in reflect.Value) {
return
}
in = reflect.ValueOf(v)
} else if m, ok := iface.(encoding.TextMarshaler); ok {
case encoding.TextMarshaler:
text, err := m.MarshalText()
if err != nil {
fail(err)
}
in = reflect.ValueOf(string(text))
case nil:
e.nilv()
return
}
switch in.Kind() {
case reflect.Interface:
if in.IsNil() {
e.nilv()
} else {
e.marshal(tag, in.Elem())
}
e.marshal(tag, in.Elem())
case reflect.Map:
e.mapv(tag, in)
case reflect.Ptr:
if in.IsNil() {
e.nilv()
if in.Type() == ptrTimeType {
e.timev(tag, in.Elem())
} else {
e.marshal(tag, in.Elem())
}
case reflect.Struct:
e.structv(tag, in)
case reflect.Slice:
if in.Type() == timeType {
e.timev(tag, in)
} else {
e.structv(tag, in)
}
case reflect.Slice, reflect.Array:
if in.Type().Elem() == mapItemType {
e.itemsv(tag, in)
} else {
@@ -191,10 +251,10 @@ func (e *encoder) mappingv(tag string, f func()) {
e.flow = false
style = yaml_FLOW_MAPPING_STYLE
}
e.must(yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)
e.emit()
f()
e.must(yaml_mapping_end_event_initialize(&e.event))
yaml_mapping_end_event_initialize(&e.event)
e.emit()
}
@@ -240,23 +300,36 @@ var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0
func (e *encoder) stringv(tag string, in reflect.Value) {
var style yaml_scalar_style_t
s := in.String()
rtag, rs := resolve("", s)
if rtag == yaml_BINARY_TAG {
if tag == "" || tag == yaml_STR_TAG {
tag = rtag
s = rs.(string)
} else if tag == yaml_BINARY_TAG {
canUsePlain := true
switch {
case !utf8.ValidString(s):
if tag == yaml_BINARY_TAG {
failf("explicitly tagged !!binary data must be base64-encoded")
} else {
}
if tag != "" {
failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag))
}
// It can't be encoded directly as YAML so use a binary tag
// and encode it as base64.
tag = yaml_BINARY_TAG
s = encodeBase64(s)
case tag == "":
// Check to see if it would resolve to a specific
// tag when encoded unquoted. If it doesn't,
// there's no need to quote it.
rtag, _ := resolve("", s)
canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s)
}
if tag == "" && (rtag != yaml_STR_TAG || isBase60Float(s)) {
style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
} else if strings.Contains(s, "\n") {
// Note: it's possible for user code to emit invalid YAML
// if they explicitly specify a tag and a string containing
// text that's incompatible with that tag.
switch {
case strings.Contains(s, "\n"):
style = yaml_LITERAL_SCALAR_STYLE
} else {
case canUsePlain:
style = yaml_PLAIN_SCALAR_STYLE
default:
style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
}
e.emitScalar(s, "", tag, style)
}
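The reworked stringv only quotes when the plain form would be misread: resolve is consulted to check whether the unquoted text would re-parse as a non-string (or as a base-60 "sexagesimal" float such as 1:20:30), and multi-line values still get literal style. Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	out, err := yaml.Marshal(map[string]string{
		"plain":  "hello",   // resolves as a string, so left unquoted
		"number": "123",     // would re-parse as an int, so quoted
		"base60": "1:20:30", // sexagesimal float form, so quoted
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// base60: "1:20:30"
	// number: "123"
	// plain: hello
}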
@@ -281,9 +354,20 @@ func (e *encoder) uintv(tag string, in reflect.Value) {
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) timev(tag string, in reflect.Value) {
t := in.Interface().(time.Time)
s := t.Format(time.RFC3339Nano)
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) floatv(tag string, in reflect.Value) {
// FIXME: Handle 64 bits here.
s := strconv.FormatFloat(float64(in.Float()), 'g', -1, 32)
// Issue #352: When formatting, use the precision of the underlying value
precision := 64
if in.Kind() == reflect.Float32 {
precision = 32
}
s := strconv.FormatFloat(in.Float(), 'g', -1, precision)
switch s {
case "+Inf":
s = ".inf"

vendor/gopkg.in/yaml.v2/go.mod generated vendored Normal file

@@ -0,0 +1,5 @@
module "gopkg.in/yaml.v2"
require (
"gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405
)

vendor/gopkg.in/yaml.v2/readerc.go generated vendored

@@ -93,9 +93,18 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
panic("read handler must be set")
}
// [Go] This function was changed to guarantee the requested length size at EOF.
// The fact we need to do this is pretty awful, but the description above
// calls for that to be the case, and there are tests that rely on it.
// If the EOF flag is set and the raw buffer is empty, do nothing.
if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
return true
// [Go] ACTUALLY! Read the documentation of this function above.
// This is just broken. To return true, we need to have the
// given length in the buffer. Not doing that means every single
// check that calls this function to make sure the buffer has a
// given length ends up either panicking (in Go) or reading invalid memory (as the C original does).
//return true
}
// Return if the buffer contains enough characters.
@@ -389,6 +398,15 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
break
}
}
// [Go] Read the documentation of this function above. To return true,
// we need to have the given length in the buffer. Not doing that means
// every single check that calls this function to make sure the buffer
// has a given length ends up either panicking (in Go) or reading invalid memory (as the C original does).
// This happens here due to the EOF above breaking early.
for buffer_len < length {
parser.buffer[buffer_len] = 0
buffer_len++
}
parser.buffer = parser.buffer[:buffer_len]
return true
}

vendor/gopkg.in/yaml.v2/resolve.go generated vendored

@@ -6,7 +6,7 @@ import (
"regexp"
"strconv"
"strings"
"unicode/utf8"
"time"
)
type resolveMapItem struct {
@@ -75,13 +75,13 @@ func longTag(tag string) string {
func resolvableTag(tag string) bool {
switch tag {
case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG:
case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG:
return true
}
return false
}
var yamlStyleFloat = regexp.MustCompile(`^[-+]?[0-9]*\.?[0-9]+([eE][-+][0-9]+)?$`)
var yamlStyleFloat = regexp.MustCompile(`^[-+]?(\.[0-9]+|[0-9]+(\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)
func resolve(tag string, in string) (rtag string, out interface{}) {
if !resolvableTag(tag) {
@@ -92,6 +92,19 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
switch tag {
case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
return
case yaml_FLOAT_TAG:
if rtag == yaml_INT_TAG {
switch v := out.(type) {
case int64:
rtag = yaml_FLOAT_TAG
out = float64(v)
return
case int:
rtag = yaml_FLOAT_TAG
out = float64(v)
return
}
}
}
failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag))
}()
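With the case above, an explicit !!float tag on an integer-looking scalar coerces the resolved int to float64 instead of failing the decode. Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var v struct {
		X float64 `yaml:"x"`
	}
	// "1" resolves as an int, but the explicit tag upgrades it to float64.
	if err := yaml.Unmarshal([]byte("x: !!float 1"), &v); err != nil {
		panic(err)
	}
	fmt.Println(v.X) // 1
}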
@@ -125,6 +138,15 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
case 'D', 'S':
// Int, float, or timestamp.
// Only try values as a timestamp if the value is unquoted or there's an explicit
// !!timestamp tag.
if tag == "" || tag == yaml_TIMESTAMP_TAG {
t, ok := parseTimestamp(in)
if ok {
return yaml_TIMESTAMP_TAG, t
}
}
plain := strings.Replace(in, "_", "", -1)
intv, err := strconv.ParseInt(plain, 0, 64)
if err == nil {
@@ -158,28 +180,20 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
return yaml_INT_TAG, uintv
}
} else if strings.HasPrefix(plain, "-0b") {
intv, err := strconv.ParseInt(plain[3:], 2, 64)
intv, err := strconv.ParseInt("-" + plain[3:], 2, 64)
if err == nil {
if intv == int64(int(intv)) {
return yaml_INT_TAG, -int(intv)
if true || intv == int64(int(intv)) {
return yaml_INT_TAG, int(intv)
} else {
return yaml_INT_TAG, -intv
return yaml_INT_TAG, intv
}
}
}
// XXX Handle timestamps here.
default:
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
}
}
if tag == yaml_BINARY_TAG {
return yaml_BINARY_TAG, in
}
if utf8.ValidString(in) {
return yaml_STR_TAG, in
}
return yaml_BINARY_TAG, encodeBase64(in)
return yaml_STR_TAG, in
}
// encodeBase64 encodes s as base64 that is broken up into multiple lines
@@ -206,3 +220,39 @@ func encodeBase64(s string) string {
}
return string(out[:k])
}
// This is a subset of the formats allowed by the regular expression
// defined at http://yaml.org/type/timestamp.html.
var allowedTimestampFormats = []string{
"2006-1-2T15:4:5.999999999Z07:00", // RCF3339Nano with short date fields.
"2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t".
"2006-1-2 15:4:5.999999999", // space separated with no time zone
"2006-1-2", // date only
// Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5"
// from the set of examples.
}
// parseTimestamp parses s as a timestamp string and
// returns the timestamp and reports whether it succeeded.
// Timestamp formats are defined at http://yaml.org/type/timestamp.html
func parseTimestamp(s string) (time.Time, bool) {
// TODO write code to check all the formats supported by
// http://yaml.org/type/timestamp.html instead of using time.Parse.
// Quick check: all date formats start with YYYY-.
i := 0
for ; i < len(s); i++ {
if c := s[i]; c < '0' || c > '9' {
break
}
}
if i != 4 || i == len(s) || s[i] != '-' {
return time.Time{}, false
}
for _, format := range allowedTimestampFormats {
if t, err := time.Parse(format, s); err == nil {
return t, true
}
}
return time.Time{}, false
}
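Unquoted scalars matching these layouts now resolve to !!timestamp and decode directly into time.Time, while decoding into interface{} still yields the original string for backward compatibility (see the TODO(v3) note in the decode.go hunk above). Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"fmt"
	"time"

	"gopkg.in/yaml.v2"
)

func main() {
	var v struct {
		When time.Time `yaml:"when"`
	}
	if err := yaml.Unmarshal([]byte("when: 2019-10-04 10:48:56"), &v); err != nil {
		panic(err)
	}
	fmt.Println(v.When.UTC()) // 2019-10-04 10:48:56 +0000 UTC

	// Into interface{} the same scalar stays a plain string.
	var m map[string]interface{}
	_ = yaml.Unmarshal([]byte("when: 2019-10-04 10:48:56"), &m)
	fmt.Printf("%T\n", m["when"]) // string
}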

vendor/gopkg.in/yaml.v2/scannerc.go generated vendored

@@ -871,12 +871,6 @@ func yaml_parser_save_simple_key(parser *yaml_parser_t) bool {
required := parser.flow_level == 0 && parser.indent == parser.mark.column
// A simple key is required only when it is the first token in the current
// line. Therefore it is always allowed. But we add a check anyway.
if required && !parser.simple_key_allowed {
panic("should not happen")
}
//
// If the current position may start a simple key, save it.
//
@@ -912,6 +906,9 @@ func yaml_parser_remove_simple_key(parser *yaml_parser_t) bool {
return true
}
// max_flow_level limits the flow_level
const max_flow_level = 10000
// Increase the flow level and resize the simple key list if needed.
func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {
// Reset the simple key on the next level.
@@ -919,6 +916,11 @@ func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {
// Increase the flow level.
parser.flow_level++
if parser.flow_level > max_flow_level {
return yaml_parser_set_scanner_error(parser,
"while increasing flow level", parser.simple_keys[len(parser.simple_keys)-1].mark,
fmt.Sprintf("exceeded max depth of %d", max_flow_level))
}
return true
}
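max_flow_level here, and max_indents below, cap the parser's nesting depth so pathological inputs fail with a scanner error instead of exhausting memory. Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	// 10,001 nested flow sequences exceed max_flow_level.
	doc := strings.Repeat("[", 10001)
	var v interface{}
	// Expected: a scanner error mentioning "exceeded max depth of 10000".
	fmt.Println(yaml.Unmarshal([]byte(doc), &v))
}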
@@ -931,6 +933,9 @@ func yaml_parser_decrease_flow_level(parser *yaml_parser_t) bool {
return true
}
// max_indents limits the indents stack size
const max_indents = 10000
// Push the current indentation level to the stack and set the new level
// the current column is greater than the indentation level. In this case,
// append or insert the specified token into the token queue.
@@ -945,6 +950,11 @@ func yaml_parser_roll_indent(parser *yaml_parser_t, column, number int, typ yaml
// indentation level.
parser.indents = append(parser.indents, parser.indent)
parser.indent = column
if len(parser.indents) > max_indents {
return yaml_parser_set_scanner_error(parser,
"while increasing indent level", parser.simple_keys[len(parser.simple_keys)-1].mark,
fmt.Sprintf("exceeded max depth of %d", max_indents))
}
// Create a token and insert it into the queue.
token := yaml_token_t{
@@ -1944,7 +1954,7 @@ func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_ma
} else {
// It's either the '!' tag or not really a tag handle. If it's a %TAG
// directive, it's an error. If it's a tag token, it must be a part of URI.
if directive && !(s[0] == '!' && s[1] == 0) {
if directive && string(s) != "!" {
yaml_parser_set_scanner_tag_error(parser, directive,
start_mark, "did not find expected '!'")
return false
@@ -1959,12 +1969,12 @@ func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_ma
func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte, start_mark yaml_mark_t, uri *[]byte) bool {
//size_t length = head ? strlen((char *)head) : 0
var s []byte
length := len(head)
hasTag := len(head) > 0
// Copy the head if needed.
//
// Note that we don't copy the leading '!' character.
if length > 0 {
if len(head) > 1 {
s = append(s, head[1:]...)
}
@@ -1997,15 +2007,14 @@ func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte
}
} else {
s = read(parser, s)
length++
}
if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
return false
}
hasTag = true
}
// Check if the tag is non-empty.
if length == 0 {
if !hasTag {
yaml_parser_set_scanner_tag_error(parser, directive,
start_mark, "did not find expected tag URI")
return false
@@ -2476,6 +2485,10 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si
}
}
if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
return false
}
// Check if we are at the end of the scalar.
if single {
if parser.buffer[parser.buffer_pos] == '\'' {
@@ -2488,10 +2501,6 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si
}
// Consume blank characters.
if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
return false
}
for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
if is_blank(parser.buffer, parser.buffer_pos) {
// Consume a space or a tab character.
@@ -2593,19 +2602,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b
// Consume non-blank characters.
for !is_blankz(parser.buffer, parser.buffer_pos) {
// Check for 'x:x' in the flow context. TODO: Fix the test "spec-08-13".
if parser.flow_level > 0 &&
parser.buffer[parser.buffer_pos] == ':' &&
!is_blankz(parser.buffer, parser.buffer_pos+1) {
yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
start_mark, "found unexpected ':'")
return false
}
// Check for indicators that may end a plain scalar.
if (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) ||
(parser.flow_level > 0 &&
(parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == ':' ||
(parser.buffer[parser.buffer_pos] == ',' ||
parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == '[' ||
parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||
parser.buffer[parser.buffer_pos] == '}')) {
@@ -2657,10 +2657,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b
for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
if is_blank(parser.buffer, parser.buffer_pos) {
// Check for tab character that abuse indentation.
// Check for tab characters that abuse indentation.
if leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) {
yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
start_mark, "found a tab character that violate indentation")
start_mark, "found a tab character that violates indentation")
return false
}

vendor/gopkg.in/yaml.v2/sorter.go generated vendored

@@ -51,6 +51,15 @@ func (l keyList) Less(i, j int) bool {
}
var ai, bi int
var an, bn int64
if ar[i] == '0' || br[i] == '0' {
for j := i-1; j >= 0 && unicode.IsDigit(ar[j]); j-- {
if ar[j] != '0' {
an = 1
bn = 1
break
}
}
}
for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
an = an*10 + int64(ar[ai]-'0')
}

vendor/gopkg.in/yaml.v2/writerc.go generated vendored

@@ -18,72 +18,9 @@ func yaml_emitter_flush(emitter *yaml_emitter_t) bool {
return true
}
// If the output encoding is UTF-8, we don't need to recode the buffer.
if emitter.encoding == yaml_UTF8_ENCODING {
if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
}
emitter.buffer_pos = 0
return true
}
// Recode the buffer into the raw buffer.
var low, high int
if emitter.encoding == yaml_UTF16LE_ENCODING {
low, high = 0, 1
} else {
high, low = 1, 0
}
pos := 0
for pos < emitter.buffer_pos {
// See the "reader.c" code for more details on UTF-8 encoding. Note
// that we assume that the buffer contains a valid UTF-8 sequence.
// Read the next UTF-8 character.
octet := emitter.buffer[pos]
var w int
var value rune
switch {
case octet&0x80 == 0x00:
w, value = 1, rune(octet&0x7F)
case octet&0xE0 == 0xC0:
w, value = 2, rune(octet&0x1F)
case octet&0xF0 == 0xE0:
w, value = 3, rune(octet&0x0F)
case octet&0xF8 == 0xF0:
w, value = 4, rune(octet&0x07)
}
for k := 1; k < w; k++ {
octet = emitter.buffer[pos+k]
value = (value << 6) + (rune(octet) & 0x3F)
}
pos += w
// Write the character.
if value < 0x10000 {
var b [2]byte
b[high] = byte(value >> 8)
b[low] = byte(value & 0xFF)
emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1])
} else {
// Write the character using a surrogate pair (check "reader.c").
var b [4]byte
value -= 0x10000
b[high] = byte(0xD8 + (value >> 18))
b[low] = byte((value >> 10) & 0xFF)
b[high+2] = byte(0xDC + ((value >> 8) & 0xFF))
b[low+2] = byte(value & 0xFF)
emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1], b[2], b[3])
}
}
// Write the raw buffer.
if err := emitter.write_handler(emitter, emitter.raw_buffer); err != nil {
if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
}
emitter.buffer_pos = 0
emitter.raw_buffer = emitter.raw_buffer[:0]
return true
}

vendor/gopkg.in/yaml.v2/yaml.go generated vendored

@@ -9,6 +9,7 @@ package yaml
import (
"errors"
"fmt"
"io"
"reflect"
"strings"
"sync"
@@ -81,12 +82,58 @@ func Unmarshal(in []byte, out interface{}) (err error) {
}
// UnmarshalStrict is like Unmarshal except that any fields that are found
// in the data that do not have corresponding struct members will result in
// in the data that do not have corresponding struct members, or mapping
// keys that are duplicates, will result in
// an error.
func UnmarshalStrict(in []byte, out interface{}) (err error) {
return unmarshal(in, out, true)
}
// A Decoder reads and decodes YAML values from an input stream.
type Decoder struct {
strict bool
parser *parser
}
// NewDecoder returns a new decoder that reads from r.
//
// The decoder introduces its own buffering and may read
// data from r beyond the YAML values requested.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{
parser: newParserFromReader(r),
}
}
// SetStrict sets whether strict decoding behaviour is enabled when
// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict.
func (dec *Decoder) SetStrict(strict bool) {
dec.strict = strict
}
// Decode reads the next YAML-encoded value from its input
// and stores it in the value pointed to by v.
//
// See the documentation for Unmarshal for details about the
// conversion of YAML into a Go value.
func (dec *Decoder) Decode(v interface{}) (err error) {
d := newDecoder(dec.strict)
defer handleErr(&err)
node := dec.parser.parse()
if node == nil {
return io.EOF
}
out := reflect.ValueOf(v)
if out.Kind() == reflect.Ptr && !out.IsNil() {
out = out.Elem()
}
d.unmarshal(node, out)
if len(d.terrors) > 0 {
return &TypeError{d.terrors}
}
return nil
}
func unmarshal(in []byte, out interface{}, strict bool) (err error) {
defer handleErr(&err)
d := newDecoder(strict)
@@ -110,8 +157,8 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
// of the generated document will reflect the structure of the value itself.
// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
//
// Struct fields are only unmarshalled if they are exported (have an upper case
// first letter), and are unmarshalled using the field name lowercased as the
// Struct fields are only marshalled if they are exported (have an upper case
// first letter), and are marshalled using the field name lowercased as the
// default key. Custom keys may be defined via the "yaml" name in the field
// tag: the content preceding the first comma is used as the key, and the
// following comma-separated options are used to tweak the marshalling process.
@@ -125,7 +172,10 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
//
// omitempty Only include the field if it's not set to the zero
// value for the type or to empty slices or maps.
// Does not apply to zero valued structs.
// Zero valued structs will be omitted if all their public
// fields are zero, unless they implement an IsZero
// method (see the IsZeroer interface type), in which
// case the field will be included if that method returns true.
//
// flow Marshal using a flow style (useful for structs,
// sequences and maps).
@@ -140,7 +190,7 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
// For example:
//
// type T struct {
// F int "a,omitempty"
// F int `yaml:"a,omitempty"`
// B int
// }
// yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
@@ -150,12 +200,47 @@ func Marshal(in interface{}) (out []byte, err error) {
defer handleErr(&err)
e := newEncoder()
defer e.destroy()
e.marshal("", reflect.ValueOf(in))
e.marshalDoc("", reflect.ValueOf(in))
e.finish()
out = e.out
return
}
// An Encoder writes YAML values to an output stream.
type Encoder struct {
encoder *encoder
}
// NewEncoder returns a new encoder that writes to w.
// The Encoder should be closed after use to flush all data
// to w.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
encoder: newEncoderWithWriter(w),
}
}
// Encode writes the YAML encoding of v to the stream.
// If multiple items are encoded to the stream, the
// second and subsequent documents will be preceded
// with a "---" document separator, but the first will not.
//
// See the documentation for Marshal for details about the conversion of Go
// values to YAML.
func (e *Encoder) Encode(v interface{}) (err error) {
defer handleErr(&err)
e.encoder.marshalDoc("", reflect.ValueOf(v))
return nil
}
// Close closes the encoder by writing any remaining data.
// It does not write a stream terminating string "...".
func (e *Encoder) Close() (err error) {
defer handleErr(&err)
e.encoder.finish()
return nil
}
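NewEncoder and NewDecoder above add streaming counterparts to Marshal and Unmarshal: the encoder separates documents with ---, and the decoder returns io.EOF once the stream is exhausted (SetStrict opts in to the UnmarshalStrict behaviour). Sketch, assuming the vendored yaml.v2 at this revision:

package main

import (
	"bytes"
	"fmt"
	"io"

	"gopkg.in/yaml.v2"
)

func main() {
	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	enc.Encode(map[string]int{"a": 1}) // errors elided for brevity
	enc.Encode(map[string]int{"b": 2})
	enc.Close()
	fmt.Print(buf.String())
	// a: 1
	// ---
	// b: 2

	dec := yaml.NewDecoder(&buf)
	for {
		var v map[string]int
		if err := dec.Decode(&v); err == io.EOF {
			break
		}
		fmt.Println(v) // map[a:1], then map[b:2]
	}
}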
func handleErr(err *error) {
if v := recover(); v != nil {
if e, ok := v.(yamlError); ok {
@@ -211,6 +296,9 @@ type fieldInfo struct {
Num int
OmitEmpty bool
Flow bool
// Id holds the unique field identifier, so we can cheaply
// check for field duplicates without maintaining an extra map.
Id int
// Inline holds the field index if the field is part of an inlined struct.
Inline []int
@@ -290,6 +378,7 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
} else {
finfo.Inline = append([]int{i}, finfo.Inline...)
}
finfo.Id = len(fieldsList)
fieldsMap[finfo.Key] = finfo
fieldsList = append(fieldsList, finfo)
}
@@ -311,11 +400,16 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
return nil, errors.New(msg)
}
info.Id = len(fieldsList)
fieldsList = append(fieldsList, info)
fieldsMap[info.Key] = info
}
sinfo = &structInfo{fieldsMap, fieldsList, inlineMap}
sinfo = &structInfo{
FieldsMap: fieldsMap,
FieldsList: fieldsList,
InlineMap: inlineMap,
}
fieldMapMutex.Lock()
structMap[st] = sinfo
@@ -323,8 +417,23 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
return sinfo, nil
}
// IsZeroer is used to check whether an object is zero to
// determine whether it should be omitted when marshaling
// with the omitempty flag. One notable implementation
// is time.Time.
type IsZeroer interface {
IsZero() bool
}
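IsZeroer backs the omitempty change documented earlier in this file: a field is skipped when its value reports itself zero, with types like time.Time getting the final say via their own IsZero method. Sketch, assuming the vendored yaml.v2 at this revision (the Event type is illustrative):

package main

import (
	"fmt"
	"time"

	"gopkg.in/yaml.v2"
)

type Event struct {
	Name string    `yaml:"name"`
	At   time.Time `yaml:"at,omitempty"`
}

func main() {
	// time.Time implements IsZero, so the zero timestamp is omitted.
	out, err := yaml.Marshal(Event{Name: "boot"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // name: boot
}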
func isZero(v reflect.Value) bool {
switch v.Kind() {
kind := v.Kind()
if z, ok := v.Interface().(IsZeroer); ok {
if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() {
return true
}
return z.IsZero()
}
switch kind {
case reflect.String:
return len(v.String()) == 0
case reflect.Interface, reflect.Ptr:

vendor/gopkg.in/yaml.v2/yamlh.go generated vendored

@@ -1,6 +1,7 @@
package yaml
import (
"fmt"
"io"
)
@@ -239,6 +240,27 @@ const (
yaml_MAPPING_END_EVENT // A MAPPING-END event.
)
var eventStrings = []string{
yaml_NO_EVENT: "none",
yaml_STREAM_START_EVENT: "stream start",
yaml_STREAM_END_EVENT: "stream end",
yaml_DOCUMENT_START_EVENT: "document start",
yaml_DOCUMENT_END_EVENT: "document end",
yaml_ALIAS_EVENT: "alias",
yaml_SCALAR_EVENT: "scalar",
yaml_SEQUENCE_START_EVENT: "sequence start",
yaml_SEQUENCE_END_EVENT: "sequence end",
yaml_MAPPING_START_EVENT: "mapping start",
yaml_MAPPING_END_EVENT: "mapping end",
}
func (e yaml_event_type_t) String() string {
if e < 0 || int(e) >= len(eventStrings) {
return fmt.Sprintf("unknown event %d", e)
}
return eventStrings[e]
}
// The event structure.
type yaml_event_t struct {
@@ -521,9 +543,9 @@ type yaml_parser_t struct {
read_handler yaml_read_handler_t // Read handler.
input_file io.Reader // File input data.
input []byte // String input data.
input_pos int
input_reader io.Reader // File input data.
input []byte // String input data.
input_pos int
eof bool // EOF flag
@@ -632,7 +654,7 @@ type yaml_emitter_t struct {
write_handler yaml_write_handler_t // Write handler.
output_buffer *[]byte // String output data.
output_file io.Writer // File output data.
output_writer io.Writer // File output data.
buffer []byte // The working buffer.
buffer_pos int // The current position of the buffer.

vendor/k8s.io/api/admissionregistration/v1alpha1/doc.go generated vendored

@@ -16,10 +16,10 @@ limitations under the License.
// +k8s:deepcopy-gen=package
// +k8s:openapi-gen=true
// +groupName=admissionregistration.k8s.io
// Package v1alpha1 is the v1alpha1 version of the API.
// AdmissionConfiguration and AdmissionPluginConfiguration are legacy static admission plugin configuration
// InitializerConfiguration and validatingWebhookConfiguration are for the
// new dynamic admission controller configuration.
// +groupName=admissionregistration.k8s.io
package v1alpha1

vendor/k8s.io/api/admissionregistration/v1alpha1/generated.pb.go generated vendored

@@ -14,9 +14,8 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-gogo.
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: k8s.io/kubernetes/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto
// DO NOT EDIT!
/*
Package v1alpha1 is a generated protocol buffer package.
@@ -32,14 +31,19 @@ limitations under the License.
*/
package v1alpha1
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
math "math"
strings "strings"
reflect "reflect"
io "io"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -251,24 +255,6 @@ func (m *Rule) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Generated(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Generated(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
@@ -989,40 +975,39 @@ func init() {
}
var fileDescriptorGenerated = []byte{
// 545 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x51, 0x4d, 0x8b, 0x13, 0x3f,
0x18, 0x6f, 0xfe, 0xdb, 0x42, 0x9b, 0x76, 0xf9, 0xcb, 0xe0, 0xa1, 0x14, 0x99, 0x96, 0x9e, 0x2a,
0x62, 0x62, 0x57, 0x59, 0xbc, 0xee, 0xec, 0x41, 0x0a, 0xbe, 0x2c, 0x41, 0x3c, 0x88, 0x07, 0xd3,
0xf6, 0xd9, 0x69, 0x6c, 0x27, 0x19, 0x92, 0x4c, 0x41, 0x4f, 0x5e, 0xbc, 0x0b, 0x7e, 0xa9, 0x1e,
0xf7, 0xb8, 0xa7, 0x62, 0x47, 0xf0, 0xe8, 0x67, 0x90, 0x99, 0xe9, 0xec, 0xcc, 0x5a, 0x8b, 0xab,
0xb7, 0x3c, 0xbf, 0x27, 0xbf, 0xb7, 0x04, 0xb3, 0xf9, 0x63, 0x43, 0x84, 0xa2, 0xf3, 0x68, 0x0c,
0x5a, 0x82, 0x05, 0x43, 0x97, 0x20, 0xa7, 0x4a, 0xd3, 0xed, 0x82, 0x87, 0x82, 0xf2, 0x69, 0x20,
0x8c, 0x11, 0x4a, 0x6a, 0xf0, 0x85, 0xb1, 0x9a, 0x5b, 0xa1, 0x24, 0x5d, 0x0e, 0xf9, 0x22, 0x9c,
0xf1, 0x21, 0xf5, 0x41, 0x82, 0xe6, 0x16, 0xa6, 0x24, 0xd4, 0xca, 0x2a, 0xe7, 0x6e, 0x46, 0x25,
0x3c, 0x14, 0xe4, 0xb7, 0x54, 0x92, 0x53, 0x3b, 0xf7, 0x7d, 0x61, 0x67, 0xd1, 0x98, 0x4c, 0x54,
0x40, 0x7d, 0xe5, 0x2b, 0x9a, 0x2a, 0x8c, 0xa3, 0xf3, 0x74, 0x4a, 0x87, 0xf4, 0x94, 0x29, 0x77,
0x1e, 0x15, 0xa1, 0x02, 0x3e, 0x99, 0x09, 0x09, 0xfa, 0x3d, 0x0d, 0xe7, 0x7e, 0x02, 0x18, 0x1a,
0x80, 0xe5, 0x74, 0xb9, 0x93, 0xa7, 0x43, 0xf7, 0xb1, 0x74, 0x24, 0xad, 0x08, 0x60, 0x87, 0x70,
0xfc, 0x27, 0x82, 0x99, 0xcc, 0x20, 0xe0, 0x3b, 0xbc, 0x87, 0xfb, 0x78, 0x91, 0x15, 0x0b, 0x2a,
0xa4, 0x35, 0x56, 0xff, 0x4a, 0xea, 0x7f, 0x42, 0xb8, 0x39, 0x92, 0xc2, 0x0a, 0xbe, 0x10, 0x1f,
0x40, 0x3b, 0x3d, 0x5c, 0x95, 0x3c, 0x80, 0x36, 0xea, 0xa1, 0x41, 0xc3, 0x6b, 0xad, 0xd6, 0xdd,
0x4a, 0xbc, 0xee, 0x56, 0x9f, 0xf3, 0x00, 0x58, 0xba, 0x71, 0x5e, 0xe2, 0x9a, 0x8e, 0x16, 0x60,
0xda, 0xff, 0xf5, 0x0e, 0x06, 0xcd, 0x23, 0x4a, 0x6e, 0xfc, 0xde, 0x84, 0x45, 0x0b, 0xf0, 0x0e,
0xb7, 0x9a, 0xb5, 0x64, 0x32, 0x2c, 0x13, 0xeb, 0xff, 0x40, 0xb8, 0x5d, 0xca, 0x71, 0xaa, 0xe4,
0xb9, 0xf0, 0xa3, 0x4c, 0xc0, 0x79, 0x8b, 0xeb, 0xc9, 0xeb, 0x4e, 0xb9, 0xe5, 0x69, 0xb0, 0xe6,
0xd1, 0x83, 0x92, 0xeb, 0x55, 0x59, 0x12, 0xce, 0xfd, 0x04, 0x30, 0x24, 0xb9, 0x4d, 0x96, 0x43,
0xf2, 0x62, 0xfc, 0x0e, 0x26, 0xf6, 0x19, 0x58, 0xee, 0x39, 0x5b, 0x5b, 0x5c, 0x60, 0xec, 0x4a,
0xd5, 0x09, 0x71, 0x4b, 0x14, 0xee, 0x79, 0xb7, 0xe3, 0xbf, 0xe8, 0x56, 0x0a, 0xef, 0xdd, 0xde,
0x7a, 0xb5, 0x4a, 0xa0, 0x61, 0xd7, 0x1c, 0xfa, 0xdf, 0x11, 0xbe, 0xb3, 0xaf, 0xf0, 0x53, 0x61,
0xac, 0xf3, 0x66, 0xa7, 0x34, 0xb9, 0x59, 0xe9, 0x84, 0x9d, 0x56, 0xbe, 0xb5, 0x8d, 0x51, 0xcf,
0x91, 0x52, 0xe1, 0x19, 0xae, 0x09, 0x0b, 0x41, 0xde, 0xf4, 0xf4, 0xdf, 0x9a, 0x5e, 0x4b, 0x5d,
0xfc, 0xec, 0x28, 0x51, 0x66, 0x99, 0x41, 0xff, 0x0b, 0xc2, 0xd5, 0xe4, 0xab, 0x9d, 0x7b, 0xb8,
0xc1, 0x43, 0xf1, 0x44, 0xab, 0x28, 0x34, 0x6d, 0xd4, 0x3b, 0x18, 0x34, 0xbc, 0xc3, 0x78, 0xdd,
0x6d, 0x9c, 0x9c, 0x8d, 0x32, 0x90, 0x15, 0x7b, 0x67, 0x88, 0x9b, 0x3c, 0x14, 0xaf, 0x40, 0x27,
0x39, 0xb2, 0x94, 0x0d, 0xef, 0xff, 0x78, 0xdd, 0x6d, 0x9e, 0x9c, 0x8d, 0x72, 0x98, 0x95, 0xef,
0x24, 0xfa, 0x1a, 0x8c, 0x8a, 0xf4, 0x04, 0x4c, 0xfb, 0xa0, 0xd0, 0x67, 0x39, 0xc8, 0x8a, 0xbd,
0x47, 0x56, 0x1b, 0xb7, 0x72, 0xb1, 0x71, 0x2b, 0x97, 0x1b, 0xb7, 0xf2, 0x31, 0x76, 0xd1, 0x2a,
0x76, 0xd1, 0x45, 0xec, 0xa2, 0xcb, 0xd8, 0x45, 0x5f, 0x63, 0x17, 0x7d, 0xfe, 0xe6, 0x56, 0x5e,
0xd7, 0xf3, 0xd2, 0x3f, 0x03, 0x00, 0x00, 0xff, 0xff, 0x1d, 0xfb, 0x23, 0x89, 0xaa, 0x04, 0x00,
0x00,
// 531 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x51, 0x4d, 0x8b, 0x13, 0x31,
0x18, 0x6e, 0x6c, 0x0b, 0x6d, 0xda, 0x45, 0x19, 0x3c, 0x94, 0x22, 0xd3, 0xd2, 0x53, 0x45, 0x4c,
0xec, 0x22, 0x8b, 0xd7, 0x9d, 0x3d, 0x48, 0xc1, 0x8f, 0x25, 0x88, 0x07, 0xf1, 0x60, 0xda, 0xbe,
0x3b, 0x8d, 0xed, 0x4c, 0x86, 0x24, 0x53, 0xd0, 0x93, 0x17, 0xef, 0x82, 0x7f, 0xaa, 0xc7, 0x3d,
0xee, 0xa9, 0xd8, 0x11, 0x3c, 0xfa, 0x1b, 0x24, 0x33, 0x9d, 0x9d, 0x59, 0xeb, 0xe2, 0xea, 0x2d,
0xef, 0xf3, 0xe6, 0xf9, 0x4a, 0x30, 0x5b, 0x3c, 0xd1, 0x44, 0x48, 0xba, 0x88, 0x27, 0xa0, 0x42,
0x30, 0xa0, 0xe9, 0x0a, 0xc2, 0x99, 0x54, 0x74, 0xb7, 0xe0, 0x91, 0xa0, 0x7c, 0x16, 0x08, 0xad,
0x85, 0x0c, 0x15, 0xf8, 0x42, 0x1b, 0xc5, 0x8d, 0x90, 0x21, 0x5d, 0x8d, 0xf8, 0x32, 0x9a, 0xf3,
0x11, 0xf5, 0x21, 0x04, 0xc5, 0x0d, 0xcc, 0x48, 0xa4, 0xa4, 0x91, 0xce, 0xfd, 0x8c, 0x4a, 0x78,
0x24, 0xc8, 0x1f, 0xa9, 0x24, 0xa7, 0x76, 0x1f, 0xfa, 0xc2, 0xcc, 0xe3, 0x09, 0x99, 0xca, 0x80,
0xfa, 0xd2, 0x97, 0x34, 0x55, 0x98, 0xc4, 0x67, 0xe9, 0x94, 0x0e, 0xe9, 0x29, 0x53, 0xee, 0x3e,
0x2e, 0x42, 0x05, 0x7c, 0x3a, 0x17, 0x21, 0xa8, 0x0f, 0x34, 0x5a, 0xf8, 0x16, 0xd0, 0x34, 0x00,
0xc3, 0xe9, 0x6a, 0x2f, 0x4f, 0x97, 0x5e, 0xc7, 0x52, 0x71, 0x68, 0x44, 0x00, 0x7b, 0x84, 0xa3,
0xbf, 0x11, 0xf4, 0x74, 0x0e, 0x01, 0xff, 0x9d, 0x37, 0xf8, 0x8c, 0x70, 0x6b, 0x1c, 0x0a, 0x23,
0xf8, 0x52, 0x7c, 0x04, 0xe5, 0xf4, 0x71, 0x2d, 0xe4, 0x01, 0x74, 0x50, 0x1f, 0x0d, 0x9b, 0x5e,
0x7b, 0xbd, 0xe9, 0x55, 0x92, 0x4d, 0xaf, 0xf6, 0x82, 0x07, 0xc0, 0xd2, 0x8d, 0xf3, 0x0a, 0xd7,
0x55, 0xbc, 0x04, 0xdd, 0xb9, 0xd5, 0xaf, 0x0e, 0x5b, 0x87, 0x94, 0xdc, 0xf8, 0xe9, 0x08, 0x8b,
0x97, 0xe0, 0x1d, 0xec, 0x34, 0xeb, 0x76, 0xd2, 0x2c, 0x13, 0x1b, 0xfc, 0x44, 0xb8, 0x53, 0xca,
0x71, 0x22, 0xc3, 0x33, 0xe1, 0xc7, 0x99, 0x80, 0xf3, 0x0e, 0x37, 0xec, 0x43, 0xcd, 0xb8, 0xe1,
0x69, 0xb0, 0xd6, 0xe1, 0xa3, 0x92, 0xeb, 0x65, 0x5f, 0x12, 0x2d, 0x7c, 0x0b, 0x68, 0x62, 0x6f,
0x93, 0xd5, 0x88, 0xbc, 0x9c, 0xbc, 0x87, 0xa9, 0x79, 0x0e, 0x86, 0x7b, 0xce, 0xce, 0x16, 0x17,
0x18, 0xbb, 0x54, 0x75, 0x22, 0xdc, 0x16, 0x85, 0x7b, 0xde, 0xed, 0xe8, 0x1f, 0xba, 0x95, 0xc2,
0x7b, 0x77, 0x77, 0x5e, 0xed, 0x12, 0xa8, 0xd9, 0x15, 0x87, 0xc1, 0x0f, 0x84, 0xef, 0x5d, 0x57,
0xf8, 0x99, 0xd0, 0xc6, 0x79, 0xbb, 0x57, 0x9a, 0xdc, 0xac, 0xb4, 0x65, 0xa7, 0x95, 0xef, 0xec,
0x62, 0x34, 0x72, 0xa4, 0x54, 0x78, 0x8e, 0xeb, 0xc2, 0x40, 0x90, 0x37, 0x3d, 0xf9, 0xbf, 0xa6,
0x57, 0x52, 0x17, 0x3f, 0x3b, 0xb6, 0xca, 0x2c, 0x33, 0x18, 0x7c, 0x45, 0xb8, 0x66, 0xbf, 0xda,
0x79, 0x80, 0x9b, 0x3c, 0x12, 0x4f, 0x95, 0x8c, 0x23, 0xdd, 0x41, 0xfd, 0xea, 0xb0, 0xe9, 0x1d,
0x24, 0x9b, 0x5e, 0xf3, 0xf8, 0x74, 0x9c, 0x81, 0xac, 0xd8, 0x3b, 0x23, 0xdc, 0xe2, 0x91, 0x78,
0x0d, 0xca, 0xe6, 0xc8, 0x52, 0x36, 0xbd, 0xdb, 0xc9, 0xa6, 0xd7, 0x3a, 0x3e, 0x1d, 0xe7, 0x30,
0x2b, 0xdf, 0xb1, 0xfa, 0x0a, 0xb4, 0x8c, 0xd5, 0x14, 0x74, 0xa7, 0x5a, 0xe8, 0xb3, 0x1c, 0x64,
0xc5, 0xde, 0x23, 0xeb, 0xad, 0x5b, 0x39, 0xdf, 0xba, 0x95, 0x8b, 0xad, 0x5b, 0xf9, 0x94, 0xb8,
0x68, 0x9d, 0xb8, 0xe8, 0x3c, 0x71, 0xd1, 0x45, 0xe2, 0xa2, 0x6f, 0x89, 0x8b, 0xbe, 0x7c, 0x77,
0x2b, 0x6f, 0x1a, 0x79, 0xe9, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa2, 0x06, 0xa3, 0xcb, 0x75,
0x04, 0x00, 0x00,
}

vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto generated vendored

@@ -24,7 +24,6 @@ package k8s.io.api.admissionregistration.v1alpha1;
import "k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto";
import "k8s.io/apimachinery/pkg/util/intstr/generated.proto";
// Package-wide variables from generator "generated".
option go_package = "v1alpha1";
@@ -89,7 +88,7 @@ message Rule {
repeated string apiVersions = 2;
// Resources is a list of resources this rule applies to.
//
//
// For example:
// 'pods' means pods.
// 'pods/log' means the log subresource of pods.
@@ -97,10 +96,10 @@ message Rule {
// 'pods/*' means all subresources of pods.
// '*/scale' means all scale subresources.
// '*/*' means all resources and their subresources.
//
//
// If wildcard is present, the validation rule will ensure resources do not
// overlap with each other.
//
//
// Depending on the enclosing object, subresources might not be allowed.
// Required.
repeated string resources = 3;

vendor/k8s.io/api/admissionregistration/v1beta1/doc.go generated vendored

@@ -16,10 +16,10 @@ limitations under the License.
// +k8s:deepcopy-gen=package
// +k8s:openapi-gen=true
// +groupName=admissionregistration.k8s.io
// Package v1beta1 is the v1beta1 version of the API.
// AdmissionConfiguration and AdmissionPluginConfiguration are legacy static admission plugin configuration
// InitializerConfiguration and validatingWebhookConfiguration are for the
// new dynamic admission controller configuration.
// +groupName=admissionregistration.k8s.io
package v1beta1

vendor/k8s.io/api/admissionregistration/v1beta1/generated.pb.go generated vendored

@@ -14,9 +14,8 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-gogo.
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: k8s.io/kubernetes/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto
// DO NOT EDIT!
/*
Package v1beta1 is a generated protocol buffer package.
@@ -37,16 +36,21 @@ limitations under the License.
*/
package v1beta1
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import k8s_io_apimachinery_pkg_apis_meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
math "math"
k8s_io_apimachinery_pkg_apis_meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
strings "strings"
reflect "reflect"
io "io"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -457,6 +461,12 @@ func (m *Webhook) MarshalTo(dAtA []byte) (int, error) {
}
i += n7
}
if m.SideEffects != nil {
dAtA[i] = 0x32
i++
i = encodeVarintGenerated(dAtA, i, uint64(len(*m.SideEffects)))
i += copy(dAtA[i:], *m.SideEffects)
}
return i, nil
}
@@ -500,24 +510,6 @@ func (m *WebhookClientConfig) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Generated(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Generated(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
@@ -656,6 +648,10 @@ func (m *Webhook) Size() (n int) {
l = m.NamespaceSelector.Size()
n += 1 + l + sovGenerated(uint64(l))
}
if m.SideEffects != nil {
l = len(*m.SideEffects)
n += 1 + l + sovGenerated(uint64(l))
}
return n
}
@@ -779,6 +775,7 @@ func (this *Webhook) String() string {
`Rules:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Rules), "RuleWithOperations", "RuleWithOperations", 1), `&`, ``, 1) + `,`,
`FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`,
`NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector", 1) + `,`,
`SideEffects:` + valueToStringGenerated(this.SideEffects) + `,`,
`}`,
}, "")
return s
@@ -1813,6 +1810,36 @@ func (m *Webhook) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 6:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SideEffects", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGenerated
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
s := SideEffectClass(dAtA[iNdEx:postIndex])
m.SideEffects = &s
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -2088,63 +2115,62 @@ func init() {
}
var fileDescriptorGenerated = []byte{
// 916 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x54, 0xcf, 0x6f, 0x1b, 0x45,
0x14, 0xf6, 0xd6, 0x8e, 0x6c, 0x8f, 0x6d, 0xd1, 0x0c, 0x20, 0x99, 0xa8, 0xda, 0xb5, 0x7c, 0x40,
0x96, 0x50, 0x76, 0x71, 0x8a, 0x10, 0x42, 0x20, 0x94, 0x8d, 0x54, 0x88, 0x94, 0xb4, 0x66, 0x02,
0xad, 0x84, 0x38, 0x30, 0x5e, 0xbf, 0xd8, 0x83, 0xf7, 0x97, 0x66, 0x66, 0xdd, 0xe6, 0x86, 0xc4,
0x3f, 0x80, 0xc4, 0x1f, 0xc1, 0x5f, 0xc1, 0x3d, 0x37, 0x7a, 0x41, 0xf4, 0xb4, 0x22, 0xcb, 0x99,
0x03, 0xd7, 0x9e, 0xd0, 0xce, 0xae, 0xbd, 0x76, 0x1c, 0xa7, 0xee, 0x85, 0x03, 0x37, 0xcf, 0xf7,
0xde, 0xf7, 0xbd, 0xf7, 0x3d, 0xbf, 0xb7, 0xe8, 0xcb, 0xe9, 0x47, 0xc2, 0x64, 0x81, 0x35, 0x8d,
0x86, 0xc0, 0x7d, 0x90, 0x20, 0xac, 0x19, 0xf8, 0xa3, 0x80, 0x5b, 0x79, 0x80, 0x86, 0xcc, 0xa2,
0x23, 0x8f, 0x09, 0xc1, 0x02, 0x9f, 0xc3, 0x98, 0x09, 0xc9, 0xa9, 0x64, 0x81, 0x6f, 0xcd, 0xfa,
0x43, 0x90, 0xb4, 0x6f, 0x8d, 0xc1, 0x07, 0x4e, 0x25, 0x8c, 0xcc, 0x90, 0x07, 0x32, 0xc0, 0xbd,
0x8c, 0x69, 0xd2, 0x90, 0x99, 0x37, 0x32, 0xcd, 0x9c, 0xb9, 0xb7, 0x3f, 0x66, 0x72, 0x12, 0x0d,
0x4d, 0x27, 0xf0, 0xac, 0x71, 0x30, 0x0e, 0x2c, 0x25, 0x30, 0x8c, 0xce, 0xd5, 0x4b, 0x3d, 0xd4,
0xaf, 0x4c, 0x78, 0xaf, 0xbb, 0xd4, 0x92, 0x13, 0x70, 0xb0, 0x66, 0x6b, 0xc5, 0xf7, 0x4e, 0x8b,
0x1c, 0x78, 0x26, 0xc1, 0x4f, 0x6b, 0x8b, 0x7d, 0x1a, 0x32, 0x01, 0x7c, 0x06, 0xdc, 0x0a, 0xa7,
0xe3, 0x34, 0x26, 0x56, 0x13, 0x36, 0x79, 0xd9, 0xfb, 0xa0, 0x90, 0xf3, 0xa8, 0x33, 0x61, 0x3e,
0xf0, 0x8b, 0x42, 0xc3, 0x03, 0x49, 0x6f, 0x6a, 0xc2, 0xda, 0xc4, 0xe2, 0x91, 0x2f, 0x99, 0x07,
0x6b, 0x84, 0x0f, 0x5f, 0x45, 0x10, 0xce, 0x04, 0x3c, 0xba, 0xc6, 0xbb, 0xbf, 0x89, 0x17, 0x49,
0xe6, 0x5a, 0xcc, 0x97, 0x42, 0xf2, 0xeb, 0xa4, 0xee, 0xef, 0x1a, 0xba, 0x77, 0x1a, 0x49, 0x2a,
0x99, 0x3f, 0x7e, 0x02, 0xc3, 0x49, 0x10, 0x4c, 0x8f, 0x02, 0xff, 0x9c, 0x8d, 0xa3, 0xec, 0xef,
0xc1, 0xdf, 0xa1, 0x5a, 0xea, 0x6c, 0x44, 0x25, 0x6d, 0x6b, 0x1d, 0xad, 0xd7, 0x38, 0x78, 0xdf,
0x2c, 0xfe, 0xd3, 0x45, 0x21, 0x33, 0x9c, 0x8e, 0x53, 0x40, 0x98, 0x69, 0xb6, 0x39, 0xeb, 0x9b,
0x8f, 0x86, 0xdf, 0x83, 0x23, 0x4f, 0x41, 0x52, 0x1b, 0x5f, 0xc6, 0x46, 0x29, 0x89, 0x0d, 0x54,
0x60, 0x64, 0xa1, 0x8a, 0xcf, 0x50, 0x2d, 0xaf, 0x2c, 0xda, 0x77, 0x3a, 0xe5, 0x5e, 0xe3, 0xa0,
0x6f, 0x6e, 0xbb, 0x35, 0x66, 0xce, 0xb4, 0x2b, 0x69, 0x09, 0x52, 0x7b, 0x9a, 0x0b, 0x75, 0xff,
0xd6, 0x50, 0xe7, 0x36, 0x5f, 0x27, 0x4c, 0x48, 0xfc, 0xed, 0x9a, 0x37, 0x73, 0x3b, 0x6f, 0x29,
0x5b, 0x39, 0xbb, 0x9b, 0x3b, 0xab, 0xcd, 0x91, 0x25, 0x5f, 0x53, 0xb4, 0xc3, 0x24, 0x78, 0x73,
0x53, 0x0f, 0xb6, 0x37, 0x75, 0x5b, 0xe3, 0x76, 0x2b, 0x2f, 0xb9, 0x73, 0x9c, 0x8a, 0x93, 0xac,
0x46, 0xf7, 0x67, 0x0d, 0x55, 0x48, 0xe4, 0x02, 0x7e, 0x0f, 0xd5, 0x69, 0xc8, 0x3e, 0xe7, 0x41,
0x14, 0x8a, 0xb6, 0xd6, 0x29, 0xf7, 0xea, 0x76, 0x2b, 0x89, 0x8d, 0xfa, 0xe1, 0xe0, 0x38, 0x03,
0x49, 0x11, 0xc7, 0x7d, 0xd4, 0xa0, 0x21, 0x7b, 0x0c, 0x5c, 0x2d, 0xbe, 0x6a, 0xb4, 0x6e, 0xbf,
0x91, 0xc4, 0x46, 0xe3, 0x70, 0x70, 0x3c, 0x87, 0xc9, 0x72, 0x4e, 0xaa, 0xcf, 0x41, 0x04, 0x11,
0x77, 0x40, 0xb4, 0xcb, 0x85, 0x3e, 0x99, 0x83, 0xa4, 0x88, 0x77, 0x7f, 0xd1, 0x10, 0x4e, 0xbb,
0x7a, 0xc2, 0xe4, 0xe4, 0x51, 0x08, 0x99, 0x03, 0x81, 0x3f, 0x43, 0x28, 0x58, 0xbc, 0xf2, 0x26,
0x0d, 0xb5, 0x1f, 0x0b, 0xf4, 0x65, 0x6c, 0xb4, 0x16, 0xaf, 0xaf, 0x2e, 0x42, 0x20, 0x4b, 0x14,
0x3c, 0x40, 0x15, 0x1e, 0xb9, 0xd0, 0xbe, 0xb3, 0xf6, 0xa7, 0xbd, 0x62, 0xb2, 0x69, 0x33, 0x76,
0x33, 0x9f, 0xa0, 0x1a, 0x18, 0x51, 0x4a, 0xdd, 0x1f, 0x35, 0x74, 0xf7, 0x0c, 0xf8, 0x8c, 0x39,
0x40, 0xe0, 0x1c, 0x38, 0xf8, 0x0e, 0x60, 0x0b, 0xd5, 0x7d, 0xea, 0x81, 0x08, 0xa9, 0x03, 0x6a,
0x41, 0xea, 0xf6, 0x6e, 0xce, 0xad, 0x3f, 0x9c, 0x07, 0x48, 0x91, 0x83, 0x3b, 0xa8, 0x92, 0x3e,
0x54, 0x5f, 0xf5, 0xa2, 0x4e, 0x9a, 0x4b, 0x54, 0x04, 0xdf, 0x43, 0x95, 0x90, 0xca, 0x49, 0xbb,
0xac, 0x32, 0x6a, 0x69, 0x74, 0x40, 0xe5, 0x84, 0x28, 0xb4, 0xfb, 0x87, 0x86, 0xf4, 0xc7, 0xd4,
0x65, 0xa3, 0xff, 0xdd, 0x3d, 0xfe, 0xa3, 0xa1, 0xee, 0xed, 0xce, 0xfe, 0x83, 0x8b, 0xf4, 0x56,
0x2f, 0xf2, 0x8b, 0xed, 0x6d, 0xdd, 0xde, 0xfa, 0x86, 0x9b, 0xfc, 0xad, 0x8c, 0xaa, 0x79, 0xfa,
0x62, 0x33, 0xb4, 0x8d, 0x9b, 0xf1, 0x14, 0x35, 0x1d, 0x97, 0x81, 0x2f, 0x33, 0xe9, 0x7c, 0xb7,
0x3f, 0x7d, 0xed, 0xd1, 0x1f, 0x2d, 0x89, 0xd8, 0x6f, 0xe5, 0x85, 0x9a, 0xcb, 0x28, 0x59, 0x29,
0x84, 0x29, 0xda, 0x49, 0x4f, 0x20, 0xbb, 0xe6, 0xc6, 0xc1, 0x27, 0xaf, 0x77, 0x4d, 0xab, 0xa7,
0x5d, 0x4c, 0x22, 0x8d, 0x09, 0x92, 0x29, 0xe3, 0x13, 0xd4, 0x3a, 0xa7, 0xcc, 0x8d, 0x38, 0x0c,
0x02, 0x97, 0x39, 0x17, 0xed, 0x8a, 0x1a, 0xc3, 0xbb, 0x49, 0x6c, 0xb4, 0x1e, 0x2c, 0x07, 0x5e,
0xc6, 0xc6, 0xee, 0x0a, 0xa0, 0x4e, 0x7f, 0x95, 0x8c, 0x9f, 0xa1, 0xdd, 0xc5, 0xc9, 0x9d, 0x81,
0x0b, 0x8e, 0x0c, 0x78, 0x7b, 0x47, 0x8d, 0xeb, 0xfe, 0x96, 0xdb, 0x42, 0x87, 0xe0, 0xce, 0xa9,
0xf6, 0xdb, 0x49, 0x6c, 0xec, 0x3e, 0xbc, 0xae, 0x48, 0xd6, 0x8b, 0x74, 0x7f, 0xd5, 0xd0, 0x9b,
0x37, 0x8c, 0x19, 0x53, 0x54, 0x15, 0xd9, 0xc7, 0x23, 0xdf, 0xda, 0x8f, 0xb7, 0x1f, 0xe2, 0xf5,
0xaf, 0x8e, 0xdd, 0x48, 0x62, 0xa3, 0x3a, 0x47, 0xe7, 0xba, 0xb8, 0x87, 0x6a, 0x0e, 0xb5, 0x23,
0x7f, 0x94, 0x7f, 0xf6, 0x9a, 0x76, 0x33, 0xdd, 0xf2, 0xa3, 0xc3, 0x0c, 0x23, 0x8b, 0x28, 0x7e,
0x07, 0x95, 0x23, 0xee, 0xe6, 0x5f, 0x98, 0x6a, 0x12, 0x1b, 0xe5, 0xaf, 0xc9, 0x09, 0x49, 0x31,
0x7b, 0xff, 0xf2, 0x4a, 0x2f, 0x3d, 0xbf, 0xd2, 0x4b, 0x2f, 0xae, 0xf4, 0xd2, 0x0f, 0x89, 0xae,
0x5d, 0x26, 0xba, 0xf6, 0x3c, 0xd1, 0xb5, 0x17, 0x89, 0xae, 0xfd, 0x99, 0xe8, 0xda, 0x4f, 0x7f,
0xe9, 0xa5, 0x6f, 0xaa, 0x79, 0x6b, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x78, 0x57, 0x76, 0x28,
0x10, 0x0a, 0x00, 0x00,
// 906 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x54, 0xcf, 0x6f, 0xe3, 0x44,
0x14, 0x8e, 0x37, 0x29, 0x49, 0x26, 0x89, 0x76, 0x3b, 0x80, 0x14, 0xaa, 0x95, 0x1d, 0xe5, 0x80,
0x22, 0xa1, 0xb5, 0x49, 0x41, 0x08, 0x21, 0x10, 0xaa, 0x0b, 0x0b, 0x95, 0xba, 0xbb, 0x61, 0x0a,
0xbb, 0x12, 0xe2, 0xc0, 0xc4, 0x79, 0x49, 0x86, 0xf8, 0x97, 0x66, 0xc6, 0x59, 0x7a, 0x43, 0xe2,
0x1f, 0x40, 0x42, 0xfc, 0x0d, 0xfc, 0x15, 0xdc, 0x7b, 0xdc, 0x0b, 0x62, 0x4f, 0x16, 0x35, 0x67,
0x0e, 0x5c, 0x7b, 0x42, 0x63, 0x3b, 0x71, 0xd2, 0x6c, 0xbb, 0xe9, 0x85, 0x03, 0x37, 0xcf, 0xf7,
0xe6, 0xfb, 0xde, 0xfb, 0x9e, 0xdf, 0x1b, 0xf4, 0xc5, 0xec, 0x7d, 0x61, 0xb2, 0xc0, 0x9a, 0x45,
0x43, 0xe0, 0x3e, 0x48, 0x10, 0xd6, 0x1c, 0xfc, 0x51, 0xc0, 0xad, 0x3c, 0x40, 0x43, 0x66, 0xd1,
0x91, 0xc7, 0x84, 0x60, 0x81, 0xcf, 0x61, 0xc2, 0x84, 0xe4, 0x54, 0xb2, 0xc0, 0xb7, 0xe6, 0xfd,
0x21, 0x48, 0xda, 0xb7, 0x26, 0xe0, 0x03, 0xa7, 0x12, 0x46, 0x66, 0xc8, 0x03, 0x19, 0xe0, 0x5e,
0xc6, 0x34, 0x69, 0xc8, 0xcc, 0x17, 0x32, 0xcd, 0x9c, 0xb9, 0x77, 0x6f, 0xc2, 0xe4, 0x34, 0x1a,
0x9a, 0x4e, 0xe0, 0x59, 0x93, 0x60, 0x12, 0x58, 0xa9, 0xc0, 0x30, 0x1a, 0xa7, 0xa7, 0xf4, 0x90,
0x7e, 0x65, 0xc2, 0x7b, 0xef, 0x16, 0x25, 0x79, 0xd4, 0x99, 0x32, 0x1f, 0xf8, 0xa9, 0x15, 0xce,
0x26, 0x0a, 0x10, 0x96, 0x07, 0x92, 0x5a, 0xf3, 0x8d, 0x72, 0xf6, 0xac, 0xab, 0x58, 0x3c, 0xf2,
0x25, 0xf3, 0x60, 0x83, 0xf0, 0xde, 0xcb, 0x08, 0xc2, 0x99, 0x82, 0x47, 0x2f, 0xf3, 0xba, 0xbf,
0x6b, 0xe8, 0xee, 0x83, 0x48, 0x52, 0xc9, 0xfc, 0xc9, 0x13, 0x18, 0x4e, 0x83, 0x60, 0x76, 0x18,
0xf8, 0x63, 0x36, 0x89, 0x32, 0xdb, 0xf8, 0x5b, 0x54, 0x53, 0x45, 0x8e, 0xa8, 0xa4, 0x6d, 0xad,
0xa3, 0xf5, 0x1a, 0xfb, 0x6f, 0x9b, 0x45, 0xaf, 0x96, 0xb9, 0xcc, 0x70, 0x36, 0x51, 0x80, 0x30,
0xd5, 0x6d, 0x73, 0xde, 0x37, 0x1f, 0x0d, 0xbf, 0x03, 0x47, 0x3e, 0x00, 0x49, 0x6d, 0x7c, 0x16,
0x1b, 0xa5, 0x24, 0x36, 0x50, 0x81, 0x91, 0xa5, 0x2a, 0x3e, 0x41, 0xb5, 0x3c, 0xb3, 0x68, 0xdf,
0xea, 0x94, 0x7b, 0x8d, 0xfd, 0xbe, 0xb9, 0xed, 0xdf, 0x30, 0x73, 0xa6, 0x5d, 0x51, 0x29, 0x48,
0xed, 0x69, 0x2e, 0xd4, 0xfd, 0x5b, 0x43, 0x9d, 0xeb, 0x7c, 0x1d, 0x33, 0x21, 0xf1, 0x37, 0x1b,
0xde, 0xcc, 0xed, 0xbc, 0x29, 0x76, 0xea, 0xec, 0x4e, 0xee, 0xac, 0xb6, 0x40, 0x56, 0x7c, 0xcd,
0xd0, 0x0e, 0x93, 0xe0, 0x2d, 0x4c, 0xdd, 0xdf, 0xde, 0xd4, 0x75, 0x85, 0xdb, 0xad, 0x3c, 0xe5,
0xce, 0x91, 0x12, 0x27, 0x59, 0x8e, 0xee, 0xcf, 0x1a, 0xaa, 0x90, 0xc8, 0x05, 0xfc, 0x16, 0xaa,
0xd3, 0x90, 0x7d, 0xc6, 0x83, 0x28, 0x14, 0x6d, 0xad, 0x53, 0xee, 0xd5, 0xed, 0x56, 0x12, 0x1b,
0xf5, 0x83, 0xc1, 0x51, 0x06, 0x92, 0x22, 0x8e, 0xfb, 0xa8, 0x41, 0x43, 0xf6, 0x18, 0xb8, 0x2a,
0x25, 0x2b, 0xb4, 0x6e, 0xdf, 0x4e, 0x62, 0xa3, 0x71, 0x30, 0x38, 0x5a, 0xc0, 0x64, 0xf5, 0x8e,
0xd2, 0xe7, 0x20, 0x82, 0x88, 0x3b, 0x20, 0xda, 0xe5, 0x42, 0x9f, 0x2c, 0x40, 0x52, 0xc4, 0xbb,
0xbf, 0x6a, 0x08, 0xab, 0xaa, 0x9e, 0x30, 0x39, 0x7d, 0x14, 0x42, 0xe6, 0x40, 0xe0, 0x8f, 0x11,
0x0a, 0x96, 0xa7, 0xbc, 0x48, 0x23, 0x9d, 0x8f, 0x25, 0x7a, 0x11, 0x1b, 0xad, 0xe5, 0xe9, 0xcb,
0xd3, 0x10, 0xc8, 0x0a, 0x05, 0x0f, 0x50, 0x85, 0x47, 0x2e, 0xb4, 0x6f, 0x6d, 0xfc, 0xb4, 0x97,
0x74, 0x56, 0x15, 0x63, 0x37, 0xf3, 0x0e, 0xa6, 0x0d, 0x23, 0xa9, 0x52, 0xf7, 0x47, 0x0d, 0xdd,
0x39, 0x01, 0x3e, 0x67, 0x0e, 0x10, 0x18, 0x03, 0x07, 0xdf, 0x01, 0x6c, 0xa1, 0xba, 0x4f, 0x3d,
0x10, 0x21, 0x75, 0x20, 0x1d, 0x90, 0xba, 0xbd, 0x9b, 0x73, 0xeb, 0x0f, 0x17, 0x01, 0x52, 0xdc,
0xc1, 0x1d, 0x54, 0x51, 0x87, 0xb4, 0xae, 0x7a, 0x91, 0x47, 0xdd, 0x25, 0x69, 0x04, 0xdf, 0x45,
0x95, 0x90, 0xca, 0x69, 0xbb, 0x9c, 0xde, 0xa8, 0xa9, 0xe8, 0x80, 0xca, 0x29, 0x49, 0xd1, 0xee,
0x1f, 0x1a, 0xd2, 0x1f, 0x53, 0x97, 0x8d, 0xfe, 0x77, 0xfb, 0xf8, 0x8f, 0x86, 0xba, 0xd7, 0x3b,
0xfb, 0x0f, 0x36, 0xd2, 0x5b, 0xdf, 0xc8, 0xcf, 0xb7, 0xb7, 0x75, 0x7d, 0xe9, 0x57, 0xec, 0xe4,
0x2f, 0x15, 0x54, 0xcd, 0xaf, 0x2f, 0x27, 0x43, 0xbb, 0x72, 0x32, 0x9e, 0xa2, 0xa6, 0xe3, 0x32,
0xf0, 0x65, 0x26, 0x9d, 0xcf, 0xf6, 0x47, 0x37, 0x6e, 0xfd, 0xe1, 0x8a, 0x88, 0xfd, 0x5a, 0x9e,
0xa8, 0xb9, 0x8a, 0x92, 0xb5, 0x44, 0x98, 0xa2, 0x1d, 0xb5, 0x02, 0xd9, 0x36, 0x37, 0xf6, 0x3f,
0xbc, 0xd9, 0x36, 0xad, 0xaf, 0x76, 0xd1, 0x09, 0x15, 0x13, 0x24, 0x53, 0xc6, 0xc7, 0xa8, 0x35,
0xa6, 0xcc, 0x8d, 0x38, 0x0c, 0x02, 0x97, 0x39, 0xa7, 0xed, 0x4a, 0xda, 0x86, 0x37, 0x93, 0xd8,
0x68, 0xdd, 0x5f, 0x0d, 0x5c, 0xc4, 0xc6, 0xee, 0x1a, 0x90, 0xae, 0xfe, 0x3a, 0x19, 0x7f, 0x8f,
0x76, 0x97, 0x2b, 0x77, 0x02, 0x2e, 0x38, 0x32, 0xe0, 0xed, 0x9d, 0xb4, 0x5d, 0xef, 0x6c, 0x39,
0x2d, 0x74, 0x08, 0xee, 0x82, 0x6a, 0xbf, 0x9e, 0xc4, 0xc6, 0xee, 0xc3, 0xcb, 0x8a, 0x64, 0x33,
0x09, 0xfe, 0x04, 0x35, 0x04, 0x1b, 0xc1, 0xa7, 0xe3, 0x31, 0x38, 0x52, 0xb4, 0x5f, 0x49, 0x5d,
0x74, 0xd5, 0x7b, 0x79, 0x52, 0xc0, 0x17, 0xb1, 0x71, 0xbb, 0x38, 0x1e, 0xba, 0x54, 0x08, 0xb2,
0x4a, 0xeb, 0xfe, 0xa6, 0xa1, 0x57, 0x5f, 0xf0, 0xb3, 0x30, 0x45, 0x55, 0x91, 0x3d, 0x41, 0xf9,
0xec, 0x7f, 0xb0, 0xfd, 0xaf, 0xb8, 0xfc, 0x76, 0xd9, 0x8d, 0x24, 0x36, 0xaa, 0x0b, 0x74, 0xa1,
0x8b, 0x7b, 0xa8, 0xe6, 0x50, 0x3b, 0xf2, 0x47, 0xf9, 0xe3, 0xd9, 0xb4, 0x9b, 0x6a, 0x57, 0x0e,
0x0f, 0x32, 0x8c, 0x2c, 0xa3, 0xf8, 0x0d, 0x54, 0x8e, 0xb8, 0x9b, 0xbf, 0x53, 0xd5, 0x24, 0x36,
0xca, 0x5f, 0x91, 0x63, 0xa2, 0x30, 0xfb, 0xde, 0xd9, 0xb9, 0x5e, 0x7a, 0x76, 0xae, 0x97, 0x9e,
0x9f, 0xeb, 0xa5, 0x1f, 0x12, 0x5d, 0x3b, 0x4b, 0x74, 0xed, 0x59, 0xa2, 0x6b, 0xcf, 0x13, 0x5d,
0xfb, 0x33, 0xd1, 0xb5, 0x9f, 0xfe, 0xd2, 0x4b, 0x5f, 0x57, 0xf3, 0xd2, 0xfe, 0x0d, 0x00, 0x00,
0xff, 0xff, 0x85, 0x06, 0x8c, 0x7f, 0xae, 0x09, 0x00, 0x00,
}
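
The byte arrays above are the serialized, gzipped FileDescriptorProto that the proto code generator embeds in each generated Go file. A minimal sketch of inflating one such blob follows; the exact package-level variable name holding the bytes varies by generator version and is assumed, not taken from this diff:

```go
package descriptor

import (
	"bytes"
	"compress/gzip"
	"io/ioutil"
)

// inflate decompresses an embedded gzipped FileDescriptorProto, such as
// the byte arrays above, yielding the serialized descriptor that proto
// registration code consumes.
func inflate(compressed []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return ioutil.ReadAll(zr)
}
```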


@@ -21,12 +21,9 @@ syntax = 'proto2';
package k8s.io.api.admissionregistration.v1beta1;
import "k8s.io/api/core/v1/generated.proto";
import "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.proto";
import "k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/generated.proto";
import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto";
import "k8s.io/apimachinery/pkg/util/intstr/generated.proto";
// Package-wide variables from generator "generated".
option go_package = "v1beta1";
@@ -69,7 +66,7 @@ message Rule {
repeated string apiVersions = 2;
// Resources is a list of resources this rule applies to.
//
// For example:
// 'pods' means pods.
// 'pods/log' means the log subresource of pods.
@@ -77,10 +74,10 @@ message Rule {
// 'pods/*' means all subresources of pods.
// '*/scale' means all scale subresources.
// '*/*' means all resources and their subresources.
//
// If wildcard is present, the validation rule will ensure resources do not
// overlap with each other.
//
// Depending on the enclosing object, subresources might not be allowed.
// Required.
repeated string resources = 3;
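
A minimal sketch of how the resource forms above combine in a Rule, assuming this package is imported from k8s.io/api/admissionregistration/v1beta1:

```go
package example

import (
	admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1"
)

// A rule covering pods and the log subresource of pods in the core/v1
// API, per the "pods" and "pods/log" forms described above.
var exampleRule = admissionregistrationv1beta1.Rule{
	APIGroups:   []string{""}, // "" denotes the core API group
	APIVersions: []string{"v1"},
	Resources:   []string{"pods", "pods/log"},
}
```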
@@ -171,7 +168,7 @@ message Webhook {
// object itself is a namespace, the matching is performed on
// object.metadata.labels. If the object is another cluster scoped resource,
// it never skips the webhook.
//
// For example, to run the webhook on any object whose namespace is not
// associated with a "runlevel" of "0" or "1", set the selector as
// follows:
@@ -187,7 +184,7 @@ message Webhook {
// }
// ]
// }
//
// If instead you want to run the webhook only on objects whose
// namespace is associated with an "environment" of "prod" or "staging",
// set the selector as follows:
@@ -203,61 +200,70 @@ message Webhook {
// }
// ]
// }
//
// See
// https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
// for more examples of label selectors.
//
// Defaults to the empty LabelSelector, which matches everything.
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector namespaceSelector = 5;
+// SideEffects states whether this webhook has side effects.
+// Acceptable values are: Unknown, None, Some, NoneOnDryRun
+// Webhooks with side effects MUST implement a reconciliation system, since a request may be
+// rejected by a future step in the admission chain and the side effects therefore need to be undone.
+// Requests with the dryRun attribute will be auto-rejected if they match a webhook with
+// sideEffects == Unknown or Some. Defaults to Unknown.
+// +optional
+optional string sideEffects = 6;
}
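
As a concrete illustration of the namespaceSelector example above (namespaces whose "runlevel" label is not "0" or "1"), a minimal sketch using the apimachinery meta/v1 types:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Matches any namespace whose "runlevel" label is absent or is neither
// "0" nor "1", mirroring the matchExpressions example in the comment above.
var runlevelSelector = &metav1.LabelSelector{
	MatchExpressions: []metav1.LabelSelectorRequirement{{
		Key:      "runlevel",
		Operator: metav1.LabelSelectorOpNotIn,
		Values:   []string{"0", "1"},
	}},
}
```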
// WebhookClientConfig contains the information to make a TLS
// connection with the webhook
message WebhookClientConfig {
// `url` gives the location of the webhook, in standard URL form
-// (`[scheme://]host:port/path`). Exactly one of `url` or `service`
+// (`scheme://host:port/path`). Exactly one of `url` or `service`
// must be specified.
//
// The `host` should not refer to a service running in the cluster; use
// the `service` field instead. The host might be resolved via external
// DNS in some apiservers (e.g., `kube-apiserver` cannot resolve
// in-cluster DNS as that would be a layering violation). `host` may
// also be an IP address.
//
// Please note that using `localhost` or `127.0.0.1` as a `host` is
// risky unless you take great care to run this webhook on all hosts
// which run an apiserver which might need to make calls to this
// webhook. Such installs are likely to be non-portable, i.e., not easy
// to turn up in a new cluster.
//
// The scheme must be "https"; the URL must begin with "https://".
//
// A path is optional, and if present may be any string permissible in
// a URL. You may use the path to pass an arbitrary string to the
// webhook, for example, a cluster identifier.
//
// Attempting to use a user or basic auth, e.g. "user:password@", is not
// allowed. Fragments ("#...") and query parameters ("?...") are not
// allowed, either.
//
// +optional
optional string url = 3;
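
The constraints on `url` above (https only, no user info, no query parameters, no fragment) can be checked mechanically. The helper below is a hypothetical sketch for illustration, not the apiserver's actual validation code:

```go
package example

import (
	"fmt"
	"net/url"
)

// checkWebhookURL applies the documented constraints on `url`.
func checkWebhookURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "https" {
		return fmt.Errorf("scheme must be https, got %q", u.Scheme)
	}
	if u.User != nil {
		return fmt.Errorf("user info (user:password@) is not allowed")
	}
	if u.RawQuery != "" || u.Fragment != "" {
		return fmt.Errorf("query parameters and fragments are not allowed")
	}
	return nil
}
```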
// `service` is a reference to the service for this webhook. Either
// `service` or `url` must be specified.
//
// If the webhook is running within the cluster, then you should use `service`.
//
// Port 443 will be used if it is open, otherwise it is an error.
//
// +optional
optional ServiceReference service = 1;
-// `caBundle` is a PEM encoded CA bundle which will be used to validate
-// the webhook's server certificate.
-// Required.
+// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
+// If unspecified, system trust roots on the apiserver are used.
+// +optional
optional bytes caBundle = 2;
}


@@ -60,6 +60,22 @@ const (
Fail FailurePolicyType = "Fail"
)
+type SideEffectClass string
+const (
+// SideEffectClassUnknown means that no information is known about the side effects of calling the webhook.
+// If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail.
+SideEffectClassUnknown SideEffectClass = "Unknown"
+// SideEffectClassNone means that calling the webhook will have no side effects.
+SideEffectClassNone SideEffectClass = "None"
+// SideEffectClassSome means that calling the webhook will possibly have side effects.
+// If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail.
+SideEffectClassSome SideEffectClass = "Some"
+// SideEffectClassNoneOnDryRun means that calling the webhook will possibly have side effects, but if the
+// request being reviewed has the dry-run attribute, the side effects will be suppressed.
+SideEffectClassNoneOnDryRun SideEffectClass = "NoneOnDryRun"
+)
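
A sketch of the dry-run rule stated in these comments, relying on the SideEffectClass declarations above; shouldRejectDryRun is a hypothetical helper, not the apiserver's implementation:

```go
// shouldRejectDryRun returns true when a dry-run request matches a
// webhook whose side-effect class is Unknown or Some, since such side
// effects could not be undone and the request must fail instead.
func shouldRejectDryRun(dryRun bool, class SideEffectClass) bool {
	return dryRun && (class == SideEffectClassUnknown || class == SideEffectClassSome)
}
```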
// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -191,6 +207,15 @@ type Webhook struct {
// Defaults to the empty LabelSelector, which matches everything.
// +optional
NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty" protobuf:"bytes,5,opt,name=namespaceSelector"`
+// SideEffects states whether this webhook has side effects.
+// Acceptable values are: Unknown, None, Some, NoneOnDryRun
+// Webhooks with side effects MUST implement a reconciliation system, since a request may be
+// rejected by a future step in the admission chain and the side effects therefore need to be undone.
+// Requests with the dryRun attribute will be auto-rejected if they match a webhook with
+// sideEffects == Unknown or Some. Defaults to Unknown.
+// +optional
+SideEffects *SideEffectClass `json:"sideEffects,omitempty" protobuf:"bytes,6,opt,name=sideEffects,casttype=SideEffectClass"`
}
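
Because SideEffects is a pointer, an unset value (which defaults to Unknown) stays distinct from an explicitly chosen class. A minimal sketch of populating the new field, assuming this package is imported from k8s.io/api/admissionregistration/v1beta1; the webhook name is a hypothetical placeholder:

```go
package example

import (
	admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1"
)

// newWebhook returns a Webhook whose side-effect class is declared explicitly.
func newWebhook() admissionregistrationv1beta1.Webhook {
	class := admissionregistrationv1beta1.SideEffectClassNoneOnDryRun
	return admissionregistrationv1beta1.Webhook{
		Name:        "example.webhook.dev",
		SideEffects: &class, // pointer keeps "unset" distinct from an explicit value
	}
}
```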
// RuleWithOperations is a tuple of Operations and Resources. It is recommended to make
@@ -221,7 +246,7 @@ const (
// connection with the webhook
type WebhookClientConfig struct {
// `url` gives the location of the webhook, in standard URL form
-// (`[scheme://]host:port/path`). Exactly one of `url` or `service`
+// (`scheme://host:port/path`). Exactly one of `url` or `service`
// must be specified.
//
// The `host` should not refer to a service running in the cluster; use
@@ -257,12 +282,12 @@ type WebhookClientConfig struct {
// Port 443 will be used if it is open, otherwise it is an error.
//
// +optional
Service *ServiceReference `json:"service" protobuf:"bytes,1,opt,name=service"`
Service *ServiceReference `json:"service,omitempty" protobuf:"bytes,1,opt,name=service"`
// `caBundle` is a PEM encoded CA bundle which will be used to validate
// the webhook's server certificate.
// Required.
CABundle []byte `json:"caBundle" protobuf:"bytes,2,opt,name=caBundle"`
// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
// If unspecified, system trust roots on the apiserver are used.
// +optional
CABundle []byte `json:"caBundle,omitempty" protobuf:"bytes,2,opt,name=caBundle"`
}
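
With caBundle now optional, a client config can omit it and rely on the apiserver's system trust roots when verifying the webhook's serving certificate. A minimal sketch; the namespace and service name are hypothetical:

```go
package example

import (
	admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1"
)

// A client config that reaches the webhook through an in-cluster Service.
// CABundle is omitted, so system trust roots are used per the comment above.
var exampleClientConfig = admissionregistrationv1beta1.WebhookClientConfig{
	Service: &admissionregistrationv1beta1.ServiceReference{
		Namespace: "webhook-system",
		Name:      "example-webhook",
	},
}
```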
// ServiceReference holds a reference to Service.legacy.k8s.io

Some files were not shown because too many files have changed in this diff.