Compare commits


563 Commits

Author SHA1 Message Date
Kubernetes Publisher
7d04ecc65b Merge pull request #72860 from liggitt/automated-cherry-pick-of-#72856-upstream-release-1.10
Automated cherry pick of #72856: Fix nil panic propagation

Kubernetes-commit: 954ff68d59e9dc62fa8252ffa9023a90ff8a358c
2019-01-16 21:10:30 +00:00
Jordan Liggitt
b08a81711d one more time around the sun
Change-Id: I43df65fa4571609423785c334c855cfd21a3d0b8

Kubernetes-commit: b761b8a9e97a2279f1a2b1298a38c76392dd4bc2
2019-01-14 14:58:37 -05:00
Kubernetes Publisher
e52f649fe5 Merge pull request #65157 from caesarxuchao/cherrypick-65034-1.10
Automatic merge from submit-queue.

Manually cherrypick #65034 to 1.10

Manually cherrypicking #65034. Using hack/cherry_pick_pull.sh to cherrypick is difficult because that requires cherrypicking #63059 first.

This PR imported the latest json-iterator library so that case sensitivity during unmarshaling is optional. The PR also set the Kubernetes json serializer to be case sensitive.

Fix #64612.

```release-note
The Kubernetes json deserializer is now case-sensitive to restore compatibility with pre-1.8 servers.
If your config files contain fields with the wrong case, they will now be invalid.
```
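
A minimal sketch (not from the PR itself) of the behavior this enables, assuming the `CaseSensitive` option in the updated json-iterator config; the `Pod` struct and input are illustrative:

```go
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"
)

// Pod is an illustrative struct, not the real Kubernetes type.
type Pod struct {
	Kind string `json:"kind"`
}

func main() {
	caseSensitive := jsoniter.Config{CaseSensitive: true}.Froze()

	var p Pod
	// With CaseSensitive set, the wrong-case key "Kind" no longer matches the
	// "kind" tag, so the value is dropped instead of being silently accepted.
	_ = caseSensitive.Unmarshal([]byte(`{"Kind":"Pod"}`), &p)
	fmt.Printf("%q\n", p.Kind) // ""
}
```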

Kubernetes-commit: 32ac1c9073b132b8ba18aa830f46b77dcceb0723
2018-06-20 00:43:12 +00:00
Chao Xu
e89372412f use the latest json-iter
Kubernetes-commit: 0bf82f28ff092cd2a2efea324a139b4a1bd9f436
2018-06-15 10:46:42 -07:00
Kubernetes Publisher
36dfe2e210 Merge pull request #63448 from dims/automated-cherry-pick-of-#62505-upstream-release-1.10
Automatic merge from submit-queue.

Automated cherry pick of #62505: update godeps to use latest pflag

Cherry pick of #62505 on release-1.10.

#62505: update godeps to use latest pflag

```release-note
Show help for deprecated Kubelet flags
```

Kubernetes-commit: 8959a0aa87adf07c4ff821bf6d79714b3d615e8a
2018-06-01 20:10:07 +00:00
Kubernetes Publisher
c8c875a7f5 Merge pull request #63627 from roycaihw/release-1.10
Automatic merge from submit-queue.

Manual cherrypick of kube-openapi changes for release-1.10

**What this PR does / why we need it**:
Cherry-picks kubernetes/kube-openapi#64 and kubernetes/kube-openapi#67
Fixes bugs that make the apiserver panic when aggregating valid but not well-formed OpenAPI specs (with empty `Paths`/`Definitions`)

**Release note**:

```release-note
Fixes bugs that make the apiserver panic when aggregating valid but not well-formed OpenAPI specs
```

/cc @MaciekPytel
/sig api-machinery

Kubernetes-commit: 42b63c8b19d1ad96399ec3f5a409da67e2fd19bd
2018-05-15 19:51:01 +00:00
Haowei Cai
7db0ad7e48 generated
Kubernetes-commit: 56d903a426f6cdaf420a507f0c36d45058a5bcc0
2018-05-09 14:46:14 -07:00
Kubernetes Publisher
e17af2c44c sync: update godeps 2018-04-17 15:32:41 +00:00
Michael Taufen
e45527e48e update godeps to use latest pflag
Kubernetes-commit: a58a84cfc006401910396b049b410cfb80676169
2018-04-12 17:12:43 -07:00
Kubernetes Publisher
ef4b908702 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-09 11:27:42 +00:00
Kubernetes Publisher
703bf442f7 sync: update required packages 2018-04-09 04:00:04 +00:00
Kubernetes Publisher
f2f0db5efa sync: update godeps 2018-04-09 04:00:04 +00:00
Kubernetes Publisher
2e8165f1de sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-09 03:59:46 +00:00
Kubernetes Publisher
101d823aa3 sync: update required packages 2018-04-09 00:00:37 +00:00
Kubernetes Publisher
26467c4f71 sync: update godeps 2018-04-09 00:00:36 +00:00
Kubernetes Publisher
b54ca2b2ed sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-09 00:00:08 +00:00
Kubernetes Publisher
a4f6488158 sync: update required packages 2018-04-08 20:00:43 +00:00
Kubernetes Publisher
83b9b3e620 sync: update godeps 2018-04-08 20:00:43 +00:00
Kubernetes Publisher
8306279e8a sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-08 20:00:25 +00:00
Kubernetes Publisher
b46e928ce5 sync: update required packages 2018-04-08 15:59:26 +00:00
Kubernetes Publisher
54ce7464fe sync: update godeps 2018-04-08 15:59:26 +00:00
Kubernetes Publisher
138ff46c05 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-08 15:59:08 +00:00
Kubernetes Publisher
051dbf71d6 sync: update required packages 2018-04-08 12:01:38 +00:00
Kubernetes Publisher
9174ee00a1 sync: update godeps 2018-04-08 12:01:38 +00:00
Kubernetes Publisher
0623d8089f sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-08 12:01:15 +00:00
Kubernetes Publisher
73b2d4258d sync: update required packages 2018-04-08 08:00:54 +00:00
Kubernetes Publisher
26dce95f6a sync: update godeps 2018-04-08 08:00:54 +00:00
Kubernetes Publisher
c6f22201c6 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-08 08:00:36 +00:00
Kubernetes Publisher
6ec6fd1983 sync: update required packages 2018-04-08 03:59:23 +00:00
Kubernetes Publisher
3dce427ec1 sync: update godeps 2018-04-08 03:59:23 +00:00
Kubernetes Publisher
8096957653 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-08 03:59:05 +00:00
Kubernetes Publisher
e06c41c6d3 sync: update required packages 2018-04-07 23:59:22 +00:00
Kubernetes Publisher
91909f643d sync: update godeps 2018-04-07 23:59:22 +00:00
Kubernetes Publisher
6d640c488c sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 23:59:02 +00:00
Kubernetes Publisher
3f9b55f6f6 sync: update required packages 2018-04-07 19:59:46 +00:00
Kubernetes Publisher
82e90ed6be sync: update godeps 2018-04-07 19:59:45 +00:00
Kubernetes Publisher
1cc3a97967 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 19:59:28 +00:00
Kubernetes Publisher
a31424472d sync: update required packages 2018-04-07 15:59:56 +00:00
Kubernetes Publisher
2f989798b8 sync: update godeps 2018-04-07 15:59:55 +00:00
Kubernetes Publisher
786e66ef24 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 15:59:37 +00:00
Kubernetes Publisher
aee8127dbe sync: update required packages 2018-04-07 11:59:44 +00:00
Kubernetes Publisher
16e1de1057 sync: update godeps 2018-04-07 11:59:44 +00:00
Kubernetes Publisher
d9ecc2cb9e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 11:59:26 +00:00
Kubernetes Publisher
a445f3fee5 sync: update required packages 2018-04-07 08:01:00 +00:00
Kubernetes Publisher
3a2d7663f6 sync: update godeps 2018-04-07 08:00:59 +00:00
Kubernetes Publisher
3ffdae14ef sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 08:00:42 +00:00
Kubernetes Publisher
a574c4c3ef sync: update required packages 2018-04-07 04:00:28 +00:00
Kubernetes Publisher
d3203ce5ce sync: update godeps 2018-04-07 04:00:28 +00:00
Kubernetes Publisher
ad5137058f sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-07 04:00:07 +00:00
Kubernetes Publisher
607b0c2644 sync: update required packages 2018-04-06 23:59:37 +00:00
Kubernetes Publisher
bce03b60e1 sync: update godeps 2018-04-06 23:59:37 +00:00
Kubernetes Publisher
23bd2cb484 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-06 23:59:20 +00:00
Kubernetes Publisher
f11cfdebf5 sync: update required packages 2018-04-06 19:58:30 +00:00
Kubernetes Publisher
42f4c2598c sync: update godeps 2018-04-06 19:58:30 +00:00
Kubernetes Publisher
552df61ad6 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-06 19:58:13 +00:00
Kubernetes Publisher
d5fee05681 sync: update required packages 2018-04-06 16:01:10 +00:00
Kubernetes Publisher
55eaeaaa6e sync: update godeps 2018-04-06 16:01:09 +00:00
Kubernetes Publisher
7ea8e03447 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-06 16:00:52 +00:00
Kubernetes Publisher
4fa2a6a8e7 sync: update required packages 2018-04-06 12:11:29 +00:00
Kubernetes Publisher
6787e55d7a sync: update godeps 2018-04-06 12:11:29 +00:00
Kubernetes Publisher
1bfb3e4ee7 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-04-06 12:11:06 +00:00
Kubernetes Publisher
0fc2ee2089 sync: update required packages 2018-03-30 19:52:32 +00:00
Kubernetes Publisher
e5d1b3f3fd sync: update godeps 2018-03-30 19:52:32 +00:00
Kubernetes Publisher
40e1ab21a4 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 19:52:13 +00:00
Kubernetes Publisher
17eeebdf53 sync: update required packages 2018-03-30 16:06:50 +00:00
Kubernetes Publisher
6ab57347d8 sync: update godeps 2018-03-30 16:06:49 +00:00
Kubernetes Publisher
cd10a7b49e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 16:06:25 +00:00
Kubernetes Publisher
cd906b0437 sync: update required packages 2018-03-30 12:05:36 +00:00
Kubernetes Publisher
d087cce6e8 sync: update godeps 2018-03-30 12:05:36 +00:00
Kubernetes Publisher
65e22f7740 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 12:05:16 +00:00
Kubernetes Publisher
6bea47b845 sync: update required packages 2018-03-30 08:04:40 +00:00
Kubernetes Publisher
bfd612abf6 sync: update godeps 2018-03-30 08:04:40 +00:00
Kubernetes Publisher
bfad7ed00b sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 08:04:18 +00:00
Kubernetes Publisher
f4426ef6fd sync: update required packages 2018-03-30 04:04:10 +00:00
Kubernetes Publisher
4e97399dfc sync: update godeps 2018-03-30 04:04:09 +00:00
Kubernetes Publisher
a4219630cf sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 04:03:50 +00:00
Kubernetes Publisher
bd44661a90 sync: update required packages 2018-03-30 00:06:16 +00:00
Kubernetes Publisher
d541bbad1f sync: update godeps 2018-03-30 00:06:16 +00:00
Kubernetes Publisher
eb0a2c0d76 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-30 00:05:55 +00:00
Kubernetes Publisher
263f7b1675 sync: update required packages 2018-03-29 19:55:55 +00:00
Kubernetes Publisher
fcbb619493 sync: update godeps 2018-03-29 19:55:55 +00:00
Kubernetes Publisher
ad9de770d7 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-29 19:55:32 +00:00
Kubernetes Publisher
0fa5f94de7 sync: update required packages 2018-03-29 15:52:02 +00:00
Kubernetes Publisher
b34e5fa3ab sync: update godeps 2018-03-29 15:52:01 +00:00
Kubernetes Publisher
d3dfc8394b sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-29 15:51:43 +00:00
Kubernetes Publisher
171ea097e8 sync: update required packages 2018-03-29 11:51:12 +00:00
Kubernetes Publisher
bbd21cee11 sync: update godeps 2018-03-29 11:51:12 +00:00
Kubernetes Publisher
f57d2f1230 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-29 11:50:55 +00:00
Kubernetes Publisher
22a61a94b3 sync: update required packages 2018-03-29 07:50:38 +00:00
Kubernetes Publisher
85c35b0ce1 sync: update godeps 2018-03-29 07:50:38 +00:00
Kubernetes Publisher
2957cc51dc sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-29 07:50:21 +00:00
Kubernetes Publisher
6c4483f032 sync: update required packages 2018-03-29 03:51:50 +00:00
Kubernetes Publisher
94fc20b05b sync: update godeps 2018-03-29 03:51:50 +00:00
Kubernetes Publisher
daaea70fab sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-29 03:51:33 +00:00
Kubernetes Publisher
143a8f25ef sync: update required packages 2018-03-28 23:51:34 +00:00
Kubernetes Publisher
17a07f263d sync: update godeps 2018-03-28 23:51:34 +00:00
Kubernetes Publisher
69ef7c63d3 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 23:51:16 +00:00
Kubernetes Publisher
aa2e0fdacc sync: update required packages 2018-03-28 19:52:44 +00:00
Kubernetes Publisher
6d5849604c sync: update godeps 2018-03-28 19:52:44 +00:00
Kubernetes Publisher
26d6339aeb sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 19:52:25 +00:00
Kubernetes Publisher
096b2e5e06 sync: update required packages 2018-03-28 15:53:50 +00:00
Kubernetes Publisher
814ef8b10b sync: update godeps 2018-03-28 15:53:50 +00:00
Kubernetes Publisher
af9002a249 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 15:53:27 +00:00
Kubernetes Publisher
93872e6b3c sync: update required packages 2018-03-28 11:51:57 +00:00
Kubernetes Publisher
ce7ddd0264 sync: update godeps 2018-03-28 11:51:57 +00:00
Kubernetes Publisher
002a25ec7b sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 11:51:39 +00:00
Kubernetes Publisher
aed009a5ce sync: update required packages 2018-03-28 07:53:37 +00:00
Kubernetes Publisher
459fe7f5a5 sync: update godeps 2018-03-28 07:53:37 +00:00
Kubernetes Publisher
ee0d40edd7 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 07:53:20 +00:00
Kubernetes Publisher
de7c665163 sync: update required packages 2018-03-28 03:50:41 +00:00
Kubernetes Publisher
6ed54da150 sync: update godeps 2018-03-28 03:50:41 +00:00
Kubernetes Publisher
254113449a sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 03:50:24 +00:00
Kubernetes Publisher
59a7c03d48 sync: update required packages 2018-03-28 00:00:31 +00:00
Kubernetes Publisher
e38be22b73 sync: update godeps 2018-03-28 00:00:31 +00:00
Kubernetes Publisher
e70e03b902 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-28 00:00:10 +00:00
Kubernetes Publisher
06977b1cad sync: update required packages 2018-03-27 19:33:43 +00:00
Kubernetes Publisher
431200b658 sync: update godeps 2018-03-27 19:33:43 +00:00
Kubernetes Publisher
b2244ed294 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-27 19:33:26 +00:00
Kubernetes Publisher
8c8027ca5e sync: update required packages 2018-03-27 15:31:21 +00:00
Kubernetes Publisher
3bfad238e0 sync: update godeps 2018-03-27 15:31:21 +00:00
Kubernetes Publisher
0360e00602 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-27 15:30:59 +00:00
Kubernetes Publisher
a4a037fd0d sync: update required packages 2018-03-27 11:30:01 +00:00
Kubernetes Publisher
f85ed1782a sync: update godeps 2018-03-27 11:30:01 +00:00
Kubernetes Publisher
0785bd016e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-27 11:29:44 +00:00
Kubernetes Publisher
2f8b810731 sync: update required packages 2018-03-27 07:29:14 +00:00
Kubernetes Publisher
069c9ac99f sync: update godeps 2018-03-27 07:29:13 +00:00
Kubernetes Publisher
c189d16129 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-27 07:28:56 +00:00
Kubernetes Publisher
527c0da77d sync: update required packages 2018-03-27 03:29:15 +00:00
Kubernetes Publisher
e9b7f4d02f sync: update godeps 2018-03-27 03:29:15 +00:00
Kubernetes Publisher
5316967904 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-27 03:28:59 +00:00
Kubernetes Publisher
9bcd556d04 sync: update required packages 2018-03-26 23:29:28 +00:00
Kubernetes Publisher
cada811465 sync: update godeps 2018-03-26 23:29:28 +00:00
Kubernetes Publisher
005d07719f sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 23:29:13 +00:00
Kubernetes Publisher
85ce0c2dd7 sync: update required packages 2018-03-26 19:29:30 +00:00
Kubernetes Publisher
2ae6687d91 sync: update godeps 2018-03-26 19:29:30 +00:00
Kubernetes Publisher
b72743f639 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 19:29:13 +00:00
Kubernetes Publisher
7361e3cc64 sync: update required packages 2018-03-26 15:29:06 +00:00
Kubernetes Publisher
e9c16dee9c sync: update godeps 2018-03-26 15:29:06 +00:00
Kubernetes Publisher
b94f49e974 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 15:28:43 +00:00
Kubernetes Publisher
afa59796ba sync: update required packages 2018-03-26 11:28:39 +00:00
Kubernetes Publisher
dcc76b0fe3 sync: update godeps 2018-03-26 11:28:39 +00:00
Kubernetes Publisher
cad1a49514 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 11:28:23 +00:00
Kubernetes Publisher
3573f521f4 sync: update required packages 2018-03-26 07:27:59 +00:00
Kubernetes Publisher
50a4e3d5cd sync: update godeps 2018-03-26 07:27:59 +00:00
Kubernetes Publisher
134076efed sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 07:27:42 +00:00
Kubernetes Publisher
ac1a6baa7e sync: update required packages 2018-03-26 03:31:53 +00:00
Kubernetes Publisher
c5653a809b sync: update godeps 2018-03-26 03:31:53 +00:00
Kubernetes Publisher
73e3a39ac0 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-26 03:31:37 +00:00
Kubernetes Publisher
052ce245ab sync: update required packages 2018-03-25 23:32:12 +00:00
Kubernetes Publisher
09ffc3c5e5 sync: update godeps 2018-03-25 23:32:11 +00:00
Kubernetes Publisher
069568cbfd sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-25 23:31:53 +00:00
Kubernetes Publisher
2e37965b56 sync: update required packages 2018-03-24 23:35:09 +00:00
Kubernetes Publisher
7d43ebed9f sync: update godeps 2018-03-24 23:35:08 +00:00
Kubernetes Publisher
f52d19b0b5 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 23:34:51 +00:00
Kubernetes Publisher
e37ec669e6 sync: update required packages 2018-03-24 19:35:46 +00:00
Kubernetes Publisher
7f9d41036e sync: update godeps 2018-03-24 19:35:45 +00:00
Kubernetes Publisher
b728988179 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 19:35:26 +00:00
Kubernetes Publisher
6f6cb2c8de sync: update required packages 2018-03-24 15:35:22 +00:00
Kubernetes Publisher
a203f845e6 sync: update godeps 2018-03-24 15:35:22 +00:00
Kubernetes Publisher
29a12bb38d sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 15:35:01 +00:00
Kubernetes Publisher
eb6dc66a73 sync: update required packages 2018-03-24 11:36:22 +00:00
Kubernetes Publisher
382b8f2372 sync: update godeps 2018-03-24 11:36:22 +00:00
Kubernetes Publisher
a2dcb20649 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 11:36:01 +00:00
Kubernetes Publisher
133f2bfd95 sync: update required packages 2018-03-24 07:38:44 +00:00
Kubernetes Publisher
75fc2c0b2d sync: update godeps 2018-03-24 07:38:44 +00:00
Kubernetes Publisher
babb08cae8 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 07:38:26 +00:00
Kubernetes Publisher
6430cce125 sync: update required packages 2018-03-24 03:37:09 +00:00
Kubernetes Publisher
af9def9c69 sync: update godeps 2018-03-24 03:37:09 +00:00
Kubernetes Publisher
025d6a6537 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-24 03:36:51 +00:00
Kubernetes Publisher
3c50d9ef20 sync: update required packages 2018-03-23 23:35:05 +00:00
Kubernetes Publisher
fe338e3d63 sync: update godeps 2018-03-23 23:35:04 +00:00
Kubernetes Publisher
52140feca0 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 23:34:45 +00:00
Kubernetes Publisher
de31bdcfd4 sync: update required packages 2018-03-23 19:37:16 +00:00
Kubernetes Publisher
13dac82ec7 sync: update godeps 2018-03-23 19:37:15 +00:00
Kubernetes Publisher
7c6712734e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 19:36:57 +00:00
Kubernetes Publisher
cdfc4c408a sync: update required packages 2018-03-23 15:35:47 +00:00
Kubernetes Publisher
33744e7f7a sync: update godeps 2018-03-23 15:35:47 +00:00
Kubernetes Publisher
d64377d656 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 15:35:29 +00:00
Kubernetes Publisher
1cba3a84dd sync: update required packages 2018-03-23 11:36:10 +00:00
Kubernetes Publisher
51a1b5be9c sync: update godeps 2018-03-23 11:36:09 +00:00
Kubernetes Publisher
d5e8acf0db sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 11:35:49 +00:00
Kubernetes Publisher
1beb16e925 sync: update required packages 2018-03-23 07:36:00 +00:00
Kubernetes Publisher
80701500be sync: update godeps 2018-03-23 07:35:59 +00:00
Kubernetes Publisher
af1ec638a7 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 07:35:36 +00:00
Kubernetes Publisher
9018ed043d sync: update required packages 2018-03-23 03:36:25 +00:00
Kubernetes Publisher
ec5c93e797 sync: update godeps 2018-03-23 03:36:25 +00:00
Kubernetes Publisher
0fae0b8942 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-23 03:36:06 +00:00
Kubernetes Publisher
697eb0bc7e sync: update required packages 2018-03-22 23:36:13 +00:00
Kubernetes Publisher
c1fb559423 sync: update godeps 2018-03-22 23:36:12 +00:00
Kubernetes Publisher
68b9c895d6 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 23:35:55 +00:00
Kubernetes Publisher
a87197d49f sync: update required packages 2018-03-22 19:33:42 +00:00
Kubernetes Publisher
66b748cc27 sync: update godeps 2018-03-22 19:33:41 +00:00
Kubernetes Publisher
bd8bae209d sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 19:33:23 +00:00
Kubernetes Publisher
5a1a4c87c2 sync: update required packages 2018-03-22 15:39:51 +00:00
Kubernetes Publisher
85be37b550 sync: update godeps 2018-03-22 15:39:51 +00:00
Kubernetes Publisher
31874c5425 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 15:39:32 +00:00
Kubernetes Publisher
6ab651c1b3 sync: update required packages 2018-03-22 11:35:43 +00:00
Kubernetes Publisher
9daf4b179c sync: update godeps 2018-03-22 11:35:43 +00:00
Kubernetes Publisher
e1bb1a2c4a sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 11:35:24 +00:00
Kubernetes Publisher
687016834b sync: update required packages 2018-03-22 07:36:58 +00:00
Kubernetes Publisher
aa2a87ff60 sync: update godeps 2018-03-22 07:36:58 +00:00
Kubernetes Publisher
c810e0a1d3 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 07:36:41 +00:00
Kubernetes Publisher
6ffd6147c3 sync: update required packages 2018-03-22 03:32:54 +00:00
Kubernetes Publisher
4915a979e2 sync: update godeps 2018-03-22 03:32:54 +00:00
Kubernetes Publisher
dafd52497e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-22 03:32:36 +00:00
Kubernetes Publisher
eb0a44a5c4 sync: update required packages 2018-03-21 23:33:54 +00:00
Kubernetes Publisher
003afb1a98 sync: update godeps 2018-03-21 23:33:54 +00:00
Kubernetes Publisher
24f3c23b16 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 23:33:38 +00:00
Kubernetes Publisher
635b924ef2 sync: update required packages 2018-03-21 19:37:29 +00:00
Kubernetes Publisher
7997cd4908 sync: update godeps 2018-03-21 19:37:28 +00:00
Kubernetes Publisher
9c9cc9adfb sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 19:37:10 +00:00
Kubernetes Publisher
c46b2e4d42 sync: update required packages 2018-03-21 15:32:00 +00:00
Kubernetes Publisher
71a44d2fd1 sync: update godeps 2018-03-21 15:32:00 +00:00
Kubernetes Publisher
6485bcd146 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 15:31:44 +00:00
Kubernetes Publisher
97a525710a sync: update required packages 2018-03-21 11:32:09 +00:00
Kubernetes Publisher
91861f7442 sync: update godeps 2018-03-21 11:32:09 +00:00
Kubernetes Publisher
e8f07fa4f1 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 11:31:50 +00:00
Kubernetes Publisher
a64b1d437a sync: update required packages 2018-03-21 07:30:42 +00:00
Kubernetes Publisher
1411e2a728 sync: update godeps 2018-03-21 07:30:42 +00:00
Kubernetes Publisher
3d7b848e77 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 07:30:24 +00:00
Kubernetes Publisher
9722b78fad sync: update required packages 2018-03-21 03:31:09 +00:00
Kubernetes Publisher
0eebc4312f sync: update godeps 2018-03-21 03:31:08 +00:00
Kubernetes Publisher
7df88536b3 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-21 03:30:52 +00:00
Kubernetes Publisher
23fdb17340 sync: update required packages 2018-03-20 23:31:49 +00:00
Kubernetes Publisher
7e5ae68247 sync: update godeps 2018-03-20 23:31:49 +00:00
Kubernetes Publisher
1532ffa9b6 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 23:31:32 +00:00
Kubernetes Publisher
424afdc178 sync: update required packages 2018-03-20 19:33:50 +00:00
Kubernetes Publisher
c1502dfc36 sync: update godeps 2018-03-20 19:33:49 +00:00
Kubernetes Publisher
b389cf8b5c sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 19:33:29 +00:00
Kubernetes Publisher
fdb4838363 sync: update required packages 2018-03-20 15:39:41 +00:00
Kubernetes Publisher
78ad53fb9b sync: update godeps 2018-03-20 15:39:41 +00:00
Kubernetes Publisher
6a3303dffa sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 15:39:24 +00:00
Kubernetes Publisher
36c2bb2dbb sync: update required packages 2018-03-20 10:19:53 +00:00
Kubernetes Publisher
d7770c5705 sync: update godeps 2018-03-20 10:19:52 +00:00
Kubernetes Publisher
ab61fd3857 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 10:19:35 +00:00
Kubernetes Publisher
e35dd00024 sync: update required packages 2018-03-20 06:21:56 +00:00
Kubernetes Publisher
abca49aae7 sync: update godeps 2018-03-20 06:21:55 +00:00
Kubernetes Publisher
8e5edb641f sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 06:21:37 +00:00
Kubernetes Publisher
42bd6f720c sync: update required packages 2018-03-20 02:34:53 +00:00
Kubernetes Publisher
6e7e027bed sync: update godeps 2018-03-20 02:34:53 +00:00
Kubernetes Publisher
c6901a5311 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-20 02:34:32 +00:00
Kubernetes Publisher
8a03b8c386 sync: update required packages 2018-03-19 22:12:37 +00:00
Kubernetes Publisher
c06f72b066 sync: update godeps 2018-03-19 22:12:37 +00:00
Kubernetes Publisher
c039cdc2e8 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 22:12:18 +00:00
Kubernetes Publisher
cac335b012 sync: update required packages 2018-03-19 18:16:05 +00:00
Kubernetes Publisher
55e9dd7299 sync: update godeps 2018-03-19 18:16:05 +00:00
Kubernetes Publisher
946295bd47 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 18:15:42 +00:00
Kubernetes Publisher
c44670caf1 sync: update required packages 2018-03-19 14:09:17 +00:00
Kubernetes Publisher
97417d3768 sync: update godeps 2018-03-19 14:09:17 +00:00
Kubernetes Publisher
17e5c8f698 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 14:09:01 +00:00
Kubernetes Publisher
a115a855ba sync: update required packages 2018-03-19 10:07:30 +00:00
Kubernetes Publisher
d387c91b97 sync: update godeps 2018-03-19 10:07:30 +00:00
Kubernetes Publisher
0ba420a631 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 10:07:15 +00:00
Kubernetes Publisher
cfab3924db sync: update required packages 2018-03-19 06:08:04 +00:00
Kubernetes Publisher
1f22f5920c sync: update godeps 2018-03-19 06:08:04 +00:00
Kubernetes Publisher
86a034f052 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 06:07:42 +00:00
Kubernetes Publisher
e319be2fcf sync: update required packages 2018-03-19 02:08:15 +00:00
Kubernetes Publisher
0828d2b0b5 sync: update godeps 2018-03-19 02:08:14 +00:00
Kubernetes Publisher
5579549045 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-19 02:08:00 +00:00
Kubernetes Publisher
c910190c78 sync: update required packages 2018-03-18 22:08:39 +00:00
Kubernetes Publisher
c5b69f91fa sync: update godeps 2018-03-18 22:08:38 +00:00
Kubernetes Publisher
2d59b39d2b sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 22:08:23 +00:00
Kubernetes Publisher
2e5d8cbeab sync: update required packages 2018-03-18 18:08:10 +00:00
Kubernetes Publisher
5e9e0b0604 sync: update godeps 2018-03-18 18:08:09 +00:00
Kubernetes Publisher
f11cfe2e23 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 18:07:54 +00:00
Kubernetes Publisher
80f2fc1cc2 sync: update required packages 2018-03-18 14:07:41 +00:00
Kubernetes Publisher
86bca46230 sync: update godeps 2018-03-18 14:07:41 +00:00
Kubernetes Publisher
69fc5c3a8a sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 14:07:27 +00:00
Kubernetes Publisher
6a534c393d sync: update required packages 2018-03-18 10:07:14 +00:00
Kubernetes Publisher
6f124ebfb8 sync: update godeps 2018-03-18 10:07:14 +00:00
Kubernetes Publisher
fda4275fb0 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 10:06:59 +00:00
Kubernetes Publisher
4d2c899715 sync: update required packages 2018-03-18 06:06:43 +00:00
Kubernetes Publisher
2ebd366e43 sync: update godeps 2018-03-18 06:06:43 +00:00
Kubernetes Publisher
81ba2581d2 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 06:06:28 +00:00
Kubernetes Publisher
a77beea55a sync: update required packages 2018-03-18 02:07:27 +00:00
Kubernetes Publisher
8792f91a61 sync: update godeps 2018-03-18 02:07:27 +00:00
Kubernetes Publisher
d13469ed75 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-18 02:07:12 +00:00
Kubernetes Publisher
4183b3a644 sync: update required packages 2018-03-17 22:07:55 +00:00
Kubernetes Publisher
f1e3fb6696 sync: update godeps 2018-03-17 22:07:54 +00:00
Kubernetes Publisher
c555878b0e sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 22:07:40 +00:00
Kubernetes Publisher
bd5b9aef46 sync: update required packages 2018-03-17 18:08:23 +00:00
Kubernetes Publisher
215be0f23b sync: update godeps 2018-03-17 18:08:23 +00:00
Kubernetes Publisher
2eefb1d959 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 18:08:08 +00:00
Kubernetes Publisher
2ebf1c4bba sync: update required packages 2018-03-17 14:07:36 +00:00
Kubernetes Publisher
3d40b48cd7 sync: update godeps 2018-03-17 14:07:35 +00:00
Kubernetes Publisher
418d6d4001 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 14:07:20 +00:00
Kubernetes Publisher
49cf05089e sync: update required packages 2018-03-17 10:06:54 +00:00
Kubernetes Publisher
083276d636 sync: update godeps 2018-03-17 10:06:54 +00:00
Kubernetes Publisher
c32aa15559 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 10:06:40 +00:00
Kubernetes Publisher
9b588e7d54 sync: update required packages 2018-03-17 06:07:05 +00:00
Kubernetes Publisher
685a83a649 sync: update godeps 2018-03-17 06:07:04 +00:00
Kubernetes Publisher
5b5f8521ac sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 06:06:46 +00:00
Kubernetes Publisher
37150c5d59 sync: update required packages 2018-03-17 02:07:59 +00:00
Kubernetes Publisher
2772371d50 sync: update godeps 2018-03-17 02:07:58 +00:00
Kubernetes Publisher
6345ab5a5f sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-17 02:07:44 +00:00
Kubernetes Publisher
f30b502353 sync: update required packages 2018-03-16 22:07:45 +00:00
Kubernetes Publisher
7d744fa50e sync: update godeps 2018-03-16 22:07:45 +00:00
Kubernetes Publisher
19c9ee414d sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 22:07:30 +00:00
Kubernetes Publisher
a7c2881ee5 sync: update required packages 2018-03-16 18:07:52 +00:00
Kubernetes Publisher
015c2ddf28 sync: update godeps 2018-03-16 18:07:52 +00:00
Kubernetes Publisher
2f89831d5a sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 18:07:38 +00:00
Kubernetes Publisher
31c8529c96 sync: update required packages 2018-03-16 14:08:01 +00:00
Kubernetes Publisher
d21d0d1222 sync: update godeps 2018-03-16 14:08:01 +00:00
Kubernetes Publisher
f7f3a69639 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 14:07:47 +00:00
Kubernetes Publisher
7e2b572de8 sync: update required packages 2018-03-16 10:07:17 +00:00
Kubernetes Publisher
00bc0cc54d sync: update godeps 2018-03-16 10:07:17 +00:00
Kubernetes Publisher
482736d633 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 10:07:02 +00:00
Kubernetes Publisher
1a3f6402c4 sync: update required packages 2018-03-16 06:05:47 +00:00
Kubernetes Publisher
3f14529b52 sync: update godeps 2018-03-16 06:05:47 +00:00
Kubernetes Publisher
9e149a1204 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 06:05:33 +00:00
Kubernetes Publisher
add31d048b sync: update required packages 2018-03-16 02:07:46 +00:00
Kubernetes Publisher
1533869bbf sync: update godeps 2018-03-16 02:07:46 +00:00
Kubernetes Publisher
83c5457f37 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-16 02:07:31 +00:00
Kubernetes Publisher
9853a61a74 sync: update required packages 2018-03-15 22:07:44 +00:00
Kubernetes Publisher
32f4c1a3b3 sync: update godeps 2018-03-15 22:07:44 +00:00
Kubernetes Publisher
43c4c29266 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-15 22:07:30 +00:00
Kubernetes Publisher
c767bd8ea4 sync: update required packages 2018-03-15 18:08:17 +00:00
Kubernetes Publisher
670ea453dc sync: update godeps 2018-03-15 18:08:16 +00:00
Kubernetes Publisher
a1272159d0 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-15 18:08:02 +00:00
Kubernetes Publisher
73a6797833 sync: update required packages 2018-03-15 14:07:01 +00:00
Kubernetes Publisher
c1a78d5566 sync: update godeps 2018-03-15 14:07:00 +00:00
Kubernetes Publisher
3403c1c221 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-15 14:06:46 +00:00
Kubernetes Publisher
0186494bf8 sync: update required packages 2018-03-15 10:13:57 +00:00
Kubernetes Publisher
7b2f02651c sync: update godeps 2018-03-15 10:13:57 +00:00
Kubernetes Publisher
a9b8576de2 sync: initially remove files BUILD */BUILD BUILD.bazel */BUILD.bazel 2018-03-15 10:13:43 +00:00
Kubernetes Publisher
6f352c35bd sync: update required packages 2018-03-14 19:35:37 +00:00
Kubernetes Publisher
bf47ba396e sync: update godeps 2018-03-14 19:35:37 +00:00
Kubernetes Publisher
76c69f1e56 sync: update required packages 2018-03-14 15:34:49 +00:00
Kubernetes Publisher
c4a451c86d sync: update godeps 2018-03-14 15:34:49 +00:00
Kubernetes Publisher
33b6449064 sync: update required packages 2018-03-14 11:35:56 +00:00
Kubernetes Publisher
581bcdfd16 sync: update godeps 2018-03-14 11:35:56 +00:00
Kubernetes Publisher
84ac2ed656 sync: update required packages 2018-03-14 07:34:21 +00:00
Kubernetes Publisher
9bd2d834d4 sync: update godeps 2018-03-14 07:34:21 +00:00
Kubernetes Publisher
6e2779d244 sync: update required packages 2018-03-14 03:35:31 +00:00
Kubernetes Publisher
1fc09990a3 sync: update godeps 2018-03-14 03:35:31 +00:00
Kubernetes Publisher
524c2c3c0b sync: update required packages 2018-03-13 23:35:21 +00:00
Kubernetes Publisher
377b5807f9 sync: update godeps 2018-03-13 23:35:21 +00:00
Kubernetes Publisher
099238c893 sync: update required packages 2018-03-13 19:42:27 +00:00
Kubernetes Publisher
d773dd3c9a sync: update godeps 2018-03-13 19:42:27 +00:00
Kubernetes Publisher
0d20602d81 sync: update required packages 2018-03-13 15:38:08 +00:00
Kubernetes Publisher
cc4d8b840a sync: update godeps 2018-03-13 15:38:08 +00:00
Kubernetes Publisher
692007dd7f sync: update required packages 2018-03-13 11:40:06 +00:00
Kubernetes Publisher
8a5196b4c2 sync: update godeps 2018-03-13 11:40:06 +00:00
Kubernetes Publisher
95c1505963 sync: update required packages 2018-03-13 07:39:35 +00:00
Kubernetes Publisher
dc79eaa7ee sync: update godeps 2018-03-13 07:39:35 +00:00
Kubernetes Publisher
42aaf9654c sync: update required packages 2018-03-13 03:41:38 +00:00
Kubernetes Publisher
ddf83985cd sync: update godeps 2018-03-13 03:41:38 +00:00
Kubernetes Publisher
ef6d57380a sync: update required packages 2018-03-12 23:41:48 +00:00
Kubernetes Publisher
6fd418735c sync: update godeps 2018-03-12 23:41:48 +00:00
Kubernetes Publisher
cadcc5633e sync: update required packages 2018-03-12 19:37:34 +00:00
Kubernetes Publisher
e108570dcf sync: update godeps 2018-03-12 19:37:33 +00:00
Kubernetes Publisher
6deb56d2ec sync: update required packages 2018-03-12 15:35:58 +00:00
Kubernetes Publisher
5f11229638 sync: update godeps 2018-03-12 15:35:57 +00:00
Kubernetes Publisher
675c719e29 sync: update required packages 2018-03-12 11:37:11 +00:00
Kubernetes Publisher
c269e2ba9b sync: update godeps 2018-03-12 11:37:11 +00:00
Kubernetes Publisher
04cc38884f sync: update required packages 2018-03-12 07:36:01 +00:00
Kubernetes Publisher
a72f2a9d5c sync: update godeps 2018-03-12 07:36:01 +00:00
Kubernetes Publisher
42535640b0 sync: update required packages 2018-03-12 03:44:14 +00:00
Kubernetes Publisher
f5c017b582 sync: update godeps 2018-03-12 03:44:13 +00:00
Kubernetes Publisher
b1316120b0 sync: update required packages 2018-03-11 23:43:35 +00:00
Kubernetes Publisher
947a2e8724 sync: update godeps 2018-03-11 23:43:35 +00:00
Kubernetes Publisher
0a88455cbf sync: update required packages 2018-03-11 19:36:12 +00:00
Kubernetes Publisher
1eacb0738d sync: update godeps 2018-03-11 19:36:12 +00:00
Kubernetes Publisher
07182f6e5b sync: update required packages 2018-03-11 15:34:58 +00:00
Kubernetes Publisher
c297a81498 sync: update godeps 2018-03-11 15:34:57 +00:00
Kubernetes Publisher
3a95ae3206 sync: update required packages 2018-03-11 11:33:53 +00:00
Kubernetes Publisher
367cdaf712 sync: update godeps 2018-03-11 11:33:53 +00:00
Kubernetes Publisher
2843fd0d1d sync: update required packages 2018-03-11 07:34:35 +00:00
Kubernetes Publisher
ac9b2029a5 sync: update godeps 2018-03-11 07:34:35 +00:00
Kubernetes Publisher
e5a65f61c1 sync: update required packages 2018-03-11 03:36:42 +00:00
Kubernetes Publisher
14316764fe sync: update godeps 2018-03-11 03:36:42 +00:00
Kubernetes Publisher
2bf8a868e7 sync: update required packages 2018-03-10 23:33:19 +00:00
Kubernetes Publisher
f1c608de72 sync: update godeps 2018-03-10 23:33:19 +00:00
Kubernetes Publisher
5de782aeb9 sync: update required packages 2018-03-10 19:36:39 +00:00
Kubernetes Publisher
1b5e8f0e6a sync: update godeps 2018-03-10 19:36:39 +00:00
Kubernetes Publisher
42742eca95 sync: update required packages 2018-03-10 15:35:17 +00:00
Kubernetes Publisher
039864fd90 sync: update godeps 2018-03-10 15:35:17 +00:00
Kubernetes Publisher
248642a84a sync: update required packages 2018-03-10 11:36:26 +00:00
Kubernetes Publisher
e46f5b2dc4 sync: update godeps 2018-03-10 11:36:25 +00:00
Kubernetes Publisher
0341e918f5 sync: update required packages 2018-03-10 07:35:49 +00:00
Kubernetes Publisher
e124fb26ed sync: update godeps 2018-03-10 07:35:49 +00:00
Kubernetes Publisher
b8e7532d82 sync: update required packages 2018-03-10 03:34:54 +00:00
Kubernetes Publisher
92a7f356a7 sync: update godeps 2018-03-10 03:34:54 +00:00
Kubernetes Publisher
e89270098e sync: update required packages 2018-03-09 23:38:29 +00:00
Kubernetes Publisher
d757a500c2 sync: update godeps 2018-03-09 23:38:29 +00:00
Kubernetes Publisher
dc1bea83e7 sync: update required packages 2018-03-09 19:40:25 +00:00
Kubernetes Publisher
5ee4ca4537 sync: update godeps 2018-03-09 19:40:25 +00:00
Kubernetes Publisher
95a7de1c04 sync: update required packages 2018-03-09 15:37:07 +00:00
Kubernetes Publisher
a69e35ea88 sync: update godeps 2018-03-09 15:37:06 +00:00
Kubernetes Publisher
a593d56002 sync: update required packages 2018-03-09 11:33:36 +00:00
Kubernetes Publisher
ae9c236a47 sync: update godeps 2018-03-09 11:33:36 +00:00
Kubernetes Publisher
a717103397 sync: update required packages 2018-03-09 07:36:30 +00:00
Kubernetes Publisher
8a57124336 sync: update godeps 2018-03-09 07:36:29 +00:00
Kubernetes Publisher
c55e438328 sync: update required packages 2018-03-09 03:36:50 +00:00
Kubernetes Publisher
2980228bf6 sync: update godeps 2018-03-09 03:36:49 +00:00
Kubernetes Publisher
dbb14c55e7 sync: update required packages 2018-03-08 23:43:42 +00:00
Kubernetes Publisher
e323d1ae71 sync: update godeps 2018-03-08 23:43:41 +00:00
Kubernetes Publisher
46bb2ba055 sync: update required packages 2018-03-08 19:54:54 +00:00
Kubernetes Publisher
d7b707a0d9 sync: update godeps 2018-03-08 19:54:54 +00:00
Kubernetes Publisher
95b0712aef sync: update required packages 2018-03-08 15:53:54 +00:00
Kubernetes Publisher
18ee36d132 sync: update godeps 2018-03-08 15:53:54 +00:00
Kubernetes Publisher
130f9932f5 sync: update required packages 2018-03-08 11:53:04 +00:00
Kubernetes Publisher
ea557bd3cb sync: update godeps 2018-03-08 11:53:04 +00:00
Kubernetes Publisher
6b73aa9963 sync: update required packages 2018-03-08 07:54:56 +00:00
Kubernetes Publisher
ce99a5ecb6 sync: update godeps 2018-03-08 07:54:56 +00:00
Kubernetes Publisher
f3c7678dc0 sync: update required packages 2018-03-08 03:55:12 +00:00
Kubernetes Publisher
6be4298256 sync: update godeps 2018-03-08 03:55:12 +00:00
Kubernetes Publisher
e410338511 sync: update required packages 2018-03-08 00:02:26 +00:00
Kubernetes Publisher
b2221d1baf sync: update godeps 2018-03-08 00:02:26 +00:00
Kubernetes Publisher
e43cdbebf6 sync: update required packages 2018-03-07 20:02:25 +00:00
Kubernetes Publisher
0fabe9b059 sync: update godeps 2018-03-07 20:02:25 +00:00
Kubernetes Publisher
565d80c450 sync: update required packages 2018-03-07 15:52:39 +00:00
Kubernetes Publisher
76208bd5ac sync: update godeps 2018-03-07 15:52:39 +00:00
Kubernetes Publisher
fff375d793 sync: update required packages 2018-03-07 11:53:04 +00:00
Kubernetes Publisher
1672519a02 sync: update godeps 2018-03-07 11:53:04 +00:00
Kubernetes Publisher
3735f657a1 sync: update required packages 2018-03-07 07:55:29 +00:00
Kubernetes Publisher
1466a6e28b sync: update godeps 2018-03-07 07:55:29 +00:00
Kubernetes Publisher
f3f705f414 sync: update required packages 2018-03-07 04:15:51 +00:00
Kubernetes Publisher
38be92b288 sync: update godeps 2018-03-07 04:15:51 +00:00
Kubernetes Publisher
14df7837b7 sync: update required packages 2018-03-07 00:00:11 +00:00
Kubernetes Publisher
bf3aea7ace sync: update godeps 2018-03-07 00:00:11 +00:00
Kubernetes Publisher
e425e996a7 sync: update required packages 2018-03-06 20:02:59 +00:00
Kubernetes Publisher
f581536a9e sync: update godeps 2018-03-06 20:02:59 +00:00
Kubernetes Publisher
26625c0056 sync: update required packages 2018-03-06 16:02:02 +00:00
Kubernetes Publisher
dd76913297 sync: update godeps 2018-03-06 16:02:02 +00:00
Kubernetes Publisher
28dfabf992 sync: update required packages 2018-03-06 11:52:14 +00:00
Kubernetes Publisher
b86d1d58f3 sync: update godeps 2018-03-06 11:52:14 +00:00
Kubernetes Publisher
8c00b9b2c9 sync: update required packages 2018-03-06 07:52:31 +00:00
Kubernetes Publisher
de1e230cda sync: update godeps 2018-03-06 07:52:31 +00:00
Kubernetes Publisher
b71da6b123 sync: update required packages 2018-03-06 04:02:36 +00:00
Kubernetes Publisher
29b23c1f1c sync: update godeps 2018-03-06 04:02:36 +00:00
Kubernetes Publisher
bbab3b7a10 sync: update required packages 2018-03-06 00:01:34 +00:00
Kubernetes Publisher
cf5bffca2f sync: update godeps 2018-03-06 00:01:34 +00:00
Kubernetes Publisher
89f6dd735f sync: update required packages 2018-03-05 20:03:33 +00:00
Kubernetes Publisher
3d9d5c345e sync: update godeps 2018-03-05 20:03:33 +00:00
Kubernetes Publisher
1f57013c99 sync: update required packages 2018-03-05 15:59:29 +00:00
Kubernetes Publisher
cc15ded788 sync: update godeps 2018-03-05 15:59:29 +00:00
Kubernetes Publisher
98134c8427 sync: update required packages 2018-03-05 11:51:48 +00:00
Kubernetes Publisher
367e5756d8 sync: update godeps 2018-03-05 11:51:48 +00:00
Kubernetes Publisher
820a7a6aaf sync: update required packages 2018-03-05 07:47:23 +00:00
Kubernetes Publisher
2522216897 sync: update godeps 2018-03-05 07:47:23 +00:00
Kubernetes Publisher
38323dec6d sync: update required packages 2018-03-05 03:52:41 +00:00
Kubernetes Publisher
3b4778aa39 sync: update godeps 2018-03-05 03:52:41 +00:00
Kubernetes Publisher
b0e20e7b60 sync: update required packages 2018-03-04 23:59:21 +00:00
Kubernetes Publisher
ff061f9470 sync: update godeps 2018-03-04 23:59:20 +00:00
Kubernetes Publisher
eaa614a95d sync: update required packages 2018-03-04 20:03:40 +00:00
Kubernetes Publisher
d5d71c2684 sync: update godeps 2018-03-04 20:03:39 +00:00
Kubernetes Publisher
468fa3dc39 sync: update required packages 2018-03-04 15:56:39 +00:00
Kubernetes Publisher
200019a48e sync: update godeps 2018-03-04 15:56:38 +00:00
Kubernetes Publisher
a83f042c60 sync: update required packages 2018-03-04 11:43:30 +00:00
Kubernetes Publisher
6c4a574775 sync: update godeps 2018-03-04 11:43:30 +00:00
Kubernetes Publisher
13ac896773 sync: update required packages 2018-03-04 07:45:16 +00:00
Kubernetes Publisher
2eec0c38ea sync: update godeps 2018-03-04 07:45:16 +00:00
Kubernetes Publisher
f3c002b6a5 sync: update required packages 2018-03-04 03:47:17 +00:00
Kubernetes Publisher
971549189b sync: update godeps 2018-03-04 03:47:17 +00:00
Kubernetes Publisher
5a91342ead sync: update required packages 2018-03-03 23:53:42 +00:00
Kubernetes Publisher
0d4c61725c sync: update godeps 2018-03-03 23:53:42 +00:00
Kubernetes Publisher
614b5df479 sync: update required packages 2018-03-03 19:55:59 +00:00
Kubernetes Publisher
31f33152bc sync: update godeps 2018-03-03 19:55:59 +00:00
Kubernetes Publisher
61ef8c9240 sync: update required packages 2018-03-03 16:18:04 +00:00
Kubernetes Publisher
71d7e83357 sync: update godeps 2018-03-03 16:18:04 +00:00
Kubernetes Publisher
7fa17b48d1 sync: update required packages 2018-03-03 11:45:13 +00:00
Kubernetes Publisher
07df2fef88 sync: update godeps 2018-03-03 11:45:12 +00:00
Kubernetes Publisher
268d127a7e sync: update required packages 2018-03-03 07:45:37 +00:00
Kubernetes Publisher
2f68eddc55 sync: update godeps 2018-03-03 07:45:36 +00:00
Kubernetes Publisher
976355ea41 sync: update required packages 2018-03-03 04:03:22 +00:00
Kubernetes Publisher
8788678c62 sync: update godeps 2018-03-03 04:03:22 +00:00
Kubernetes Publisher
49daa1180c sync: update required packages 2018-03-02 23:50:45 +00:00
Kubernetes Publisher
fb4c6ae9b4 sync: update godeps 2018-03-02 23:50:45 +00:00
Kubernetes Publisher
e89ffee0aa sync: update required packages 2018-03-02 19:59:00 +00:00
Kubernetes Publisher
ecc1e4d5b7 sync: update godeps 2018-03-02 19:58:59 +00:00
Kubernetes Publisher
eecf74fa68 sync: update required packages 2018-03-02 19:58:47 +00:00
Kubernetes Publisher
d844f7d063 sync: update godeps 2018-03-02 19:58:47 +00:00
Kubernetes Publisher
1c8d36a84c Merge remote-tracking branch 'origin/master' into release-1.10. Deleting CHANGELOG-1.7.md
Kubernetes-commit: 305052d6d2c1fa976c7a841350396061a2c26ac0
2018-03-01 11:48:51 -05:00
Kubernetes Publisher
7d2b7a94fe Merge pull request #59495 from ericchiang/client-auth-exec
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

client-go: add an exec-based client auth provider

Updates https://github.com/kubernetes/features/issues/541
Implements https://github.com/kubernetes/community/pull/1503
Closes https://github.com/kubernetes/kubernetes/issues/57164

```release-note
client-go: alpha support for exec-based credential providers
```
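
A minimal sketch (an illustration, not code from the PR) of what an exec credential plugin binary does under this alpha feature: client-go runs the configured command and reads an `ExecCredential` object from its stdout. The environment variable used as a token source here is hypothetical:

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	cred := map[string]interface{}{
		"apiVersion": "client.authentication.k8s.io/v1alpha1",
		"kind":       "ExecCredential",
		"status": map[string]string{
			// Hypothetical token source; a real plugin would obtain this
			// from an identity provider or local credential store.
			"token": os.Getenv("EXAMPLE_TOKEN"),
		},
	}
	// client-go invokes the configured command and parses this JSON from stdout.
	json.NewEncoder(os.Stdout).Encode(cred)
}
```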

/sig auth
/kind feature

Kubernetes-commit: cb9d6b51556a1677f262e35e4aded0051c424818
2018-03-02 19:58:34 +00:00
Kubernetes Publisher
90185097df sync: update required packages 2018-03-01 14:09:27 +00:00
Kubernetes Publisher
153c4d7278 sync: update godeps 2018-03-01 14:09:26 +00:00
Kubernetes Publisher
cb151cc1a8 sync: update required packages 2018-03-01 10:11:16 +00:00
Kubernetes Publisher
a8e541f0d2 sync: update godeps 2018-03-01 10:11:15 +00:00
Kubernetes Publisher
d116f69397 sync: update required packages 2018-03-01 06:06:58 +00:00
Kubernetes Publisher
2be2fab7d1 sync: update godeps 2018-03-01 06:06:57 +00:00
Kubernetes Publisher
f723d5dec7 sync: update required packages 2018-03-01 02:07:13 +00:00
Kubernetes Publisher
3c8dedeb9d sync: update godeps 2018-03-01 02:07:12 +00:00
Kubernetes Publisher
bddc8dd04c sync: update required packages 2018-02-28 22:06:40 +00:00
Kubernetes Publisher
9840281fac sync: update godeps 2018-02-28 22:06:40 +00:00
Kubernetes Publisher
5529714c4a sync: update required packages 2018-02-28 18:08:07 +00:00
Kubernetes Publisher
156d42a628 sync: update godeps 2018-02-28 18:08:06 +00:00
Kubernetes Publisher
b59f54c2b2 sync: update required packages 2018-02-28 14:06:59 +00:00
Kubernetes Publisher
c9c8fd9666 sync: update godeps 2018-02-28 14:06:59 +00:00
Kubernetes Publisher
46ff43598d sync: update required packages 2018-02-28 10:06:45 +00:00
Kubernetes Publisher
9eb35675d7 sync: update godeps 2018-02-28 10:06:44 +00:00
Kubernetes Publisher
af0a89a540 sync: update required packages 2018-02-28 06:12:23 +00:00
Kubernetes Publisher
1d83fc7af7 sync: update godeps 2018-02-28 06:12:23 +00:00
Kubernetes Publisher
2facf96233 sync: update required packages 2018-02-28 06:12:02 +00:00
Kubernetes Publisher
4f271e31e4 sync: update godeps 2018-02-28 06:12:02 +00:00
Kubernetes Publisher
0430994cc2 Merge remote-tracking branch 'origin/master' into release-1.10
Kubernetes-commit: 6ee902eee1aa2022d41afd82c510b0d5e7de2d77
2018-02-27 18:10:55 -05:00
Kubernetes Publisher
b7e03de184 Merge pull request #60446 from cblecker/no-dep-reviewer
Automatic merge from submit-queue (batch tested with PRs 59365, 60446, 60448, 55019, 60431). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove dep-reviewers

**What this PR does / why we need it**:
The dep-reviewers group seems to get assigned PRs early in the review process. However, most code changes should be reviewed in the importing part of the code base first, and then assigned to an approver afterwards.

By removing the reviewers group, the approvers plugin will still suggest assigning to an approver, but won't assign for review when the PR is initially opened.

**Release note**:

```release-note
NONE
```

Kubernetes-commit: 724a2f968c6981efc9f5a85e4ad60f56e1c0902f
2018-02-28 06:11:45 +00:00
Kubernetes Publisher
6c333b6100 Merge pull request #59674 from jennybuckley/codegen
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

code-gen: output golint compliant 'Generated by' comment

New PR instead of reopening #58115 because /reopen did not work.
This won't be ready to merge until the upstream https://github.com/kubernetes/gengo/pull/94 merges. Once that merges, the second commit will be changed to godep-save.sh and update-staging-godeps.sh, and the last commit will be changed to update-all.sh

The failing test is due to the upstream changes not being merged yet

```devel-release-note
Go code generated by the code generators will now have a comment which allows them to be easily identified by golint
```

Fixes #56489
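
For reference (not part of the PR text), the golint-recognized form of the marker is a single file-level comment like the one below; the generator name is illustrative:

```go
// Code generated by client-gen. DO NOT EDIT.

package v1
```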

Kubernetes-commit: 1eb1c00c44f8f597b9b23a05cd0a8da205c87f8a
2018-02-28 06:11:28 +00:00
Kubernetes Publisher
9a2a002b29 Merge pull request #59293 from roycaihw/openapi_endpoint
Automatic merge from submit-queue (batch tested with PRs 60011, 59256, 59293, 60328, 60367). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Serve OpenAPI spec with single /openapi/v2 endpoint

**What this PR does / why we need it**:
We are deprecating format-separated endpoints (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`) for OpenAPI spec, and switching to a single `/openapi/v2` endpoint in Kubernetes 1.10. The design doc and deprecation process are tracked at: https://docs.google.com/document/d/19lEqE9lc4yHJ3WJAJxS_G7TcORIJXGHyq3wpwcH28nU

Requested format is specified by setting HTTP headers

header | possible values
-- | --
Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf`
Accept-Encoding | `gzip`

This PR changes dynamic_client (and kubectl as a result) to use the new endpoint. The old endpoints will remain in 1.10 and 1.11, and get removed in 1.12.
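
A minimal sketch (not from the PR) of querying the new endpoint directly; it assumes the apiserver is reachable through `kubectl proxy` on 127.0.0.1:8001:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumes the apiserver is proxied locally, e.g. via `kubectl proxy`.
	req, err := http.NewRequest("GET", "http://127.0.0.1:8001/openapi/v2", nil)
	if err != nil {
		panic(err)
	}
	// Pick the encoding via headers; "application/json" works as well.
	req.Header.Set("Accept", "application/com.github.proto-openapi.spec.v2@v1.0+protobuf")
	req.Header.Set("Accept-Encoding", "gzip")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status, resp.Header.Get("Content-Type"))
}
```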

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
action required: Deprecate format-separated endpoints for OpenAPI spec. Please use single `/openapi/v2` endpoint instead.
```

/sig api-machinery

Kubernetes-commit: d6153194d929ad6c036d5bbbf67a6f892e75feb5
2018-02-28 06:11:11 +00:00
Kubernetes Publisher
632c382b43 sync: update required packages 2018-02-27 18:05:55 +00:00
Kubernetes Publisher
5c226ce7b9 sync: update godeps 2018-02-27 18:05:55 +00:00
Kubernetes Publisher
a737fc7380 sync: update required packages 2018-02-27 14:05:00 +00:00
Kubernetes Publisher
0b3755ac9c sync: update godeps 2018-02-27 14:05:00 +00:00
Kubernetes Publisher
aa4294228f sync: update required packages 2018-02-27 10:08:35 +00:00
Kubernetes Publisher
279eb14b03 sync: update godeps 2018-02-27 10:08:35 +00:00
Kubernetes Publisher
590967646e sync: update required packages 2018-02-27 06:05:30 +00:00
Kubernetes Publisher
646dda6122 sync: update godeps 2018-02-27 06:05:30 +00:00
Kubernetes Publisher
f11de95a54 sync: update required packages 2018-02-27 02:23:00 +00:00
Kubernetes Publisher
aa22ac5b65 sync: update godeps 2018-02-27 02:22:59 +00:00
Kubernetes Publisher
fa8258f29c sync: update required packages 2018-02-27 02:22:43 +00:00
Kubernetes Publisher
b8f215ad88 sync: update godeps 2018-02-27 02:22:43 +00:00
jennybuckley
23f67bd7a3 Run hack/update-all.sh
Kubernetes-commit: c8dacd8e631f59ef158c79156d77a99fd2a632cc
2018-02-26 17:16:14 -08:00
Christoph Blecker
42d4406d1c Remove dep-reviewers
Kubernetes-commit: b97b9530f08d40a4346ea328d8a1047822fb92b7
2018-02-26 11:11:15 -08:00
Kubernetes Publisher
940d88846b Merge remote-tracking branch 'origin/master' into release-1.10
Kubernetes-commit: 8d6416d0e6674f36d90274c98dda83ed7ae873de
2018-02-24 15:22:52 -05:00
Kubernetes Publisher
8a511b140e Merge pull request #59793 from nikhita/staging-repos-boilerplate
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

staging: add boilerplate header

Follow up of https://github.com/kubernetes/kubernetes/pull/57656.

Adds boilerplate in the relevant staging repos so that they don't need to depend on code-generator's boilerplate.

**Release note**:

```release-note
NONE
```

/cc sttts fisherxu

Kubernetes-commit: 1a1643bb5df07d94330e1b16d182cee660533f91
2018-02-27 02:22:26 +00:00
Kubernetes Publisher
0236239fbb Merge pull request #59587 from cblecker/cblecker-vendor
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add cblecker to vendor OWNERS

**What this PR does / why we need it**:
Adds myself to vendor OWNERS. I can help approve dep bumps of existing deps, and refer to Tim and new deps for license review.

**Release note**:
```release-note
NONE
```

/assign thockin

Kubernetes-commit: 852e7f7bfa43d1427706c59453e39f2de12a4f32
2018-02-27 02:22:08 +00:00
Kubernetes Publisher
a95e9a627a Merge pull request #59842 from ixdy/update-rules_go-02-2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update bazelbuild/rules_go, kubernetes/repo-infra, and gazelle dependencies

**What this PR does / why we need it**: updates our bazelbuild/rules_go dependency in order to bump everything to go1.9.4. I'm separating this effort into two separate PRs, since updating rules_go requires a large cleanup, removing an attribute from most build rules.

**Release note**:

```release-note
NONE
```

Kubernetes-commit: 96ec3187180b9c1d722756b3ea0984ebe65424dc
2018-02-27 02:21:51 +00:00
Haowei Cai
669a29fce3 Bump kube-openapi to add new openapi endpoint
Kubernetes-commit: 8b38e080c4ddd3e1416a5fc4d45a3e4d2dbe1033
2018-02-20 09:21:41 -08:00
Kubernetes Publisher
c3de6d02f5 sync: update required packages 2018-02-20 06:01:17 +00:00
Kubernetes Publisher
0b1456c846 sync: update godeps 2018-02-20 06:01:16 +00:00
Kubernetes Publisher
fdab74e2e7 sync: update required packages 2018-02-20 02:03:15 +00:00
Kubernetes Publisher
15b03d1b58 sync: update godeps 2018-02-20 02:03:15 +00:00
Kubernetes Publisher
f62a943b5a sync: update required packages 2018-02-19 22:03:57 +00:00
Kubernetes Publisher
25633a6136 sync: update godeps 2018-02-19 22:03:57 +00:00
Kubernetes Publisher
00379c004a sync: update required packages 2018-02-19 18:02:11 +00:00
Kubernetes Publisher
eb2ac93697 sync: update godeps 2018-02-19 18:02:10 +00:00
Kubernetes Publisher
2d6ed04f24 sync: update required packages 2018-02-19 13:58:26 +00:00
Kubernetes Publisher
7df2f66742 sync: update godeps 2018-02-19 13:58:26 +00:00
Kubernetes Publisher
bb431a3ab2 sync: update required packages 2018-02-19 09:58:05 +00:00
Kubernetes Publisher
b0d14a1c84 sync: update godeps 2018-02-19 09:58:05 +00:00
Kubernetes Publisher
f084c24891 sync: update required packages 2018-02-19 05:57:49 +00:00
Kubernetes Publisher
50534bb2bf sync: update godeps 2018-02-19 05:57:48 +00:00
Kubernetes Publisher
00ef954d98 sync: update required packages 2018-02-19 01:59:09 +00:00
Kubernetes Publisher
6a485beed6 sync: update godeps 2018-02-19 01:59:09 +00:00
Kubernetes Publisher
4eadffae42 sync: update required packages 2018-02-18 21:58:15 +00:00
Kubernetes Publisher
8d64450519 sync: update godeps 2018-02-18 21:58:14 +00:00
Kubernetes Publisher
2f9475ee12 sync: update required packages 2018-02-18 17:57:42 +00:00
Kubernetes Publisher
199796a57a sync: update godeps 2018-02-18 17:57:41 +00:00
Kubernetes Publisher
97b622f024 sync: update required packages 2018-02-18 13:57:46 +00:00
Kubernetes Publisher
1ecf0bb87c sync: update godeps 2018-02-18 13:57:46 +00:00
Kubernetes Publisher
3ace527e12 sync: update required packages 2018-02-18 09:58:04 +00:00
Kubernetes Publisher
abb9a9f498 sync: update godeps 2018-02-18 09:58:03 +00:00
Kubernetes Publisher
d99c14e079 sync: update required packages 2018-02-18 02:00:56 +00:00
Kubernetes Publisher
cfce4bf244 sync: update godeps 2018-02-18 02:00:56 +00:00
Kubernetes Publisher
dd1f49d75d sync: update required packages 2018-02-17 22:02:07 +00:00
Kubernetes Publisher
e5dbccbaba sync: update godeps 2018-02-17 22:02:07 +00:00
Kubernetes Publisher
6f38ac2beb sync: update required packages 2018-02-17 17:59:14 +00:00
Kubernetes Publisher
ae4221b560 sync: update godeps 2018-02-17 17:59:14 +00:00
Kubernetes Publisher
b1dae4c7e4 sync: update required packages 2018-02-17 13:58:22 +00:00
Kubernetes Publisher
710ff6c591 sync: update godeps 2018-02-17 13:58:22 +00:00
Kubernetes Publisher
a06fbc12fd sync: update required packages 2018-02-17 09:58:56 +00:00
Kubernetes Publisher
8b8ce2dc48 sync: update godeps 2018-02-17 09:58:56 +00:00
Kubernetes Publisher
f25b71ab89 sync: update required packages 2018-02-17 02:00:25 +00:00
Kubernetes Publisher
1191502013 sync: update godeps 2018-02-17 02:00:25 +00:00
Kubernetes Publisher
b576b15460 sync: update required packages 2018-02-16 22:02:04 +00:00
Kubernetes Publisher
5d3eb2e0dc sync: update godeps 2018-02-16 22:02:04 +00:00
Jeff Grafton
6a86852930 Autogenerated: hack/update-bazel.sh
Kubernetes-commit: ef56a8d6bb3800ab7803713eafc4191e8202ad6e
2018-02-16 13:43:01 -08:00
Kubernetes Publisher
b441530fb7 sync: update required packages 2018-02-16 18:00:01 +00:00
Kubernetes Publisher
4ca6cf6c9a sync: update godeps 2018-02-16 18:00:01 +00:00
Kubernetes Publisher
6f435c14c9 Merge pull request #58317 from nikhita/bump-go-yaml
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions at https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

bump(670d4c): gopkg.in/yaml.v2: fix parsing for non-specific tags

Fixes #56976

Fixes this bug - https://github.com/go-yaml/yaml/issues/75 - in `go-yaml`. The fix for this bug is at 670d4cfef0.

**Release note**:

```release-note
NONE
```

/cc sttts caesarxuchao jennybuckley

Kubernetes-commit: 4e2c3f060a873a0b727dbd3e66047a3b2858db97
2018-02-16 14:02:11 +00:00
Christoph Blecker
7d2f5fe51f Re-add OWNERS files to Godeps/vendor dirs
Kubernetes-commit: 6fb2304f2a6da44e42985ed662d5f7f56215eec6
2018-02-15 13:31:02 -08:00
Nikhita Raghunath
16c374870f staging: add boilerplate header
Kubernetes-commit: c7d67ad1ee412b2ebe906f21aa900dbdfc4ce5da
2018-02-13 16:50:05 +05:30
Eric Chiang
f50bf20edb generated
Kubernetes-commit: 01801ae13a86c10cd343c329f5224ab47272f826
2018-02-07 15:48:46 -08:00
2056 changed files with 109634 additions and 166446 deletions

View File

@@ -1,2 +1,2 @@
Sorry, we do not accept changes directly against this repository. Please see
Sorry, we do not accept changes directly against this repository. Please see
CONTRIBUTING.md for information on where and how to contribute instead.

View File

@@ -2,6 +2,6 @@
Do not open pull requests directly against this repository, they will be ignored. Instead, please open pull requests against [kubernetes/kubernetes](https://git.k8s.io/kubernetes/). Please follow the same [contributing guide](https://git.k8s.io/kubernetes/CONTRIBUTING.md) you would follow for any other pull request made to kubernetes/kubernetes.
This repository is published from [kubernetes/kubernetes/staging/src/k8s.io/sample-controller](https://git.k8s.io/kubernetes/staging/src/k8s.io/sample-controller) by the [kubernetes publishing-bot](https://git.k8s.io/publishing-bot).
This repository is published from [kubernetes/kubernetes/staging/src/k8s.io/sample-controller](https://git.k8s.io/kubernetes/staging/src/k8s.io/sample-controller) by the [kubernetes publishing-bot](https://git.k8s.io/publishing-bot).
Please see [Staging Directory and Publishing](https://git.k8s.io/community/contributors/devel/sig-architecture/staging.md) for more information
Please see [Staging Directory and Publishing](https://git.k8s.io/community/contributors/devel/staging.md) for more information

796
Godeps/Godeps.json generated

File diff suppressed because it is too large.

2
Godeps/OWNERS generated
View File

@@ -1,4 +1,2 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- dep-approvers

5
OWNERS
View File

@@ -1,12 +1,9 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- sttts
- munnerz
reviewers:
- gregory-m
- kargakis
- sttts
- munnerz
- nikhita
labels:
- sig/api-machinery

View File

@@ -3,8 +3,6 @@
This repository implements a simple controller for watching Foo resources as
defined with a CustomResourceDefinition (CRD).
**Note:** go-get or vendor this package as `k8s.io/sample-controller`.
This particular example demonstrates how to perform basic operations such as:
* How to register a new custom resource (custom resource type) of type `Foo` using a CustomResourceDefinition.
@@ -25,13 +23,6 @@ Changes should not be made to these files manually, and when creating your own
controller based off of this implementation you should not copy these files and
instead run the `update-codegen` script to generate your own.
## Details
The sample controller uses [client-go library](https://github.com/kubernetes/client-go/tree/master/tools/cache) extensively.
The details of interaction points of the sample controller with various mechanisms from this library are
explained [here](docs/controller-client-go.md).
## Purpose
This is an example of how to build a kube-like controller with a single type.
@@ -42,10 +33,7 @@ This is an example of how to build a kube-like controller with a single type.
```sh
# assumes you have a working kubeconfig, not required if operating in-cluster
$ go get k8s.io/sample-controller
$ cd $GOPATH/src/k8s.io/sample-controller
$ go build -o sample-controller .
$ ./sample-controller -kubeconfig=$HOME/.kube/config
$ go run *.go -kubeconfig=$HOME/.kube/config
# create a CustomResourceDefinition
$ kubectl create -f artifacts/examples/crd.yaml
@@ -91,44 +79,18 @@ type User struct {
To validate custom resources, use the [`CustomResourceValidation`](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) feature.
This feature is beta and enabled by default in v1.9.
This feature is beta and enabled by default in v1.9. If you are using v1.8, enable the feature using
the `CustomResourceValidation` feature gate on the [kube-apiserver](https://kubernetes.io/docs/admin/kube-apiserver):
```sh
--feature-gates=CustomResourceValidation=true
```
### Example
The schema in [`crd-validation.yaml`](./artifacts/examples/crd-validation.yaml) applies the following validation on the custom resource:
The schema in the [example CRD](./artifacts/examples/crd.yaml) applies the following validation on the custom resource:
`spec.replicas` must be an integer and must have a minimum value of 1 and a maximum value of 10.
In the above steps, use `crd-validation.yaml` to create the CRD:
```sh
# create a CustomResourceDefinition supporting validation
$ kubectl create -f artifacts/examples/crd-validation.yaml
```
## Subresources
Custom Resources support `/status` and `/scale` subresources as a [beta feature](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#subresources) in v1.11, and the feature is enabled by default.
This feature is [alpha](https://v1-10.docs.kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#subresources) in v1.10 and to enable it you need to set the `CustomResourceSubresources` feature gate on the [kube-apiserver](https://kubernetes.io/docs/admin/kube-apiserver):
```sh
--feature-gates=CustomResourceSubresources=true
```
### Example
The CRD in [`crd-status-subresource.yaml`](./artifacts/examples/crd-status-subresource.yaml) enables the `/status` subresource
for custom resources.
This means that [`UpdateStatus`](./controller.go#L330) can be used by the controller to update only the status part of the custom resource.
To understand why only the status part of the custom resource should be updated, please refer to the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).
In the above steps, use `crd-status-subresource.yaml` to create the CRD:
```sh
# create a CustomResourceDefinition supporting the status subresource
$ kubectl create -f artifacts/examples/crd-status-subresource.yaml
```
## Cleanup
You can clean up the created CustomResourceDefinition with:
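The delete command itself falls outside the diff context shown here. As a hedged illustration only (not part of this compare), cleanup for the CRD named in the example manifests above would look something like:
```sh
# Hedged example: remove the Foo CRD created earlier; the CRD name is taken
# from artifacts/examples/crd.yaml above, the exact command is not shown in
# this diff.
kubectl delete crd foos.samplecontroller.k8s.io
```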

View File

@@ -1,17 +0,0 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
cjcullen
jessfraz
liggitt
philips
tallclair

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: foos.samplecontroller.k8s.io
spec:
group: samplecontroller.k8s.io
version: v1alpha1
names:
kind: Foo
plural: foos
scope: Namespaced
subresources:
status: {}

View File

@@ -1,20 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: foos.samplecontroller.k8s.io
spec:
group: samplecontroller.k8s.io
version: v1alpha1
names:
kind: Foo
plural: foos
scope: Namespaced
validation:
openAPIV3Schema:
properties:
spec:
properties:
replicas:
type: integer
minimum: 1
maximum: 10

View File

@@ -9,3 +9,12 @@ spec:
kind: Foo
plural: foos
scope: Namespaced
validation:
openAPIV3Schema:
properties:
spec:
properties:
replicas:
type: integer
minimum: 1
maximum: 10

View File

@@ -20,14 +20,15 @@ import (
"fmt"
"time"
"github.com/golang/glog"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
appsinformers "k8s.io/client-go/informers/apps/v1"
kubeinformers "k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
@@ -35,13 +36,12 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog"
samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
clientset "k8s.io/sample-controller/pkg/generated/clientset/versioned"
samplescheme "k8s.io/sample-controller/pkg/generated/clientset/versioned/scheme"
informers "k8s.io/sample-controller/pkg/generated/informers/externalversions/samplecontroller/v1alpha1"
listers "k8s.io/sample-controller/pkg/generated/listers/samplecontroller/v1alpha1"
clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
samplescheme "k8s.io/sample-controller/pkg/client/clientset/versioned/scheme"
informers "k8s.io/sample-controller/pkg/client/informers/externalversions"
listers "k8s.io/sample-controller/pkg/client/listers/samplecontroller/v1alpha1"
)
const controllerAgentName = "sample-controller"
@@ -88,16 +88,21 @@ type Controller struct {
func NewController(
kubeclientset kubernetes.Interface,
sampleclientset clientset.Interface,
deploymentInformer appsinformers.DeploymentInformer,
fooInformer informers.FooInformer) *Controller {
kubeInformerFactory kubeinformers.SharedInformerFactory,
sampleInformerFactory informers.SharedInformerFactory) *Controller {
// obtain references to shared index informers for the Deployment and Foo
// types.
deploymentInformer := kubeInformerFactory.Apps().V1().Deployments()
fooInformer := sampleInformerFactory.Samplecontroller().V1alpha1().Foos()
// Create event broadcaster
// Add sample-controller types to the default Kubernetes Scheme so Events can be
// logged for sample-controller types.
utilruntime.Must(samplescheme.AddToScheme(scheme.Scheme))
klog.V(4).Info("Creating event broadcaster")
samplescheme.AddToScheme(scheme.Scheme)
glog.V(4).Info("Creating event broadcaster")
eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(klog.Infof)
eventBroadcaster.StartLogging(glog.Infof)
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeclientset.CoreV1().Events("")})
recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerAgentName})
@@ -112,7 +117,7 @@ func NewController(
recorder: recorder,
}
klog.Info("Setting up event handlers")
glog.Info("Setting up event handlers")
// Set up an event handler for when Foo resources change
fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: controller.enqueueFoo,
@@ -149,27 +154,27 @@ func NewController(
// is closed, at which point it will shutdown the workqueue and wait for
// workers to finish processing their current work items.
func (c *Controller) Run(threadiness int, stopCh <-chan struct{}) error {
defer utilruntime.HandleCrash()
defer runtime.HandleCrash()
defer c.workqueue.ShutDown()
// Start the informer factories to begin populating the informer caches
klog.Info("Starting Foo controller")
glog.Info("Starting Foo controller")
// Wait for the caches to be synced before starting workers
klog.Info("Waiting for informer caches to sync")
glog.Info("Waiting for informer caches to sync")
if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
return fmt.Errorf("failed to wait for caches to sync")
}
klog.Info("Starting workers")
glog.Info("Starting workers")
// Launch two workers to process Foo resources
for i := 0; i < threadiness; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
klog.Info("Started workers")
glog.Info("Started workers")
<-stopCh
klog.Info("Shutting down workers")
glog.Info("Shutting down workers")
return nil
}
@@ -212,25 +217,23 @@ func (c *Controller) processNextWorkItem() bool {
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
c.workqueue.Forget(obj)
utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
runtime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Foo resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
return fmt.Errorf("error syncing '%s': %s", key, err.Error())
}
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
c.workqueue.Forget(obj)
klog.Infof("Successfully synced '%s'", key)
glog.Infof("Successfully synced '%s'", key)
return nil
}(obj)
if err != nil {
utilruntime.HandleError(err)
runtime.HandleError(err)
return true
}
@@ -244,7 +247,7 @@ func (c *Controller) syncHandler(key string) error {
// Convert the namespace/name string into a distinct namespace and name
namespace, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
runtime.HandleError(fmt.Errorf("invalid resource key: %s", key))
return nil
}
@@ -254,7 +257,7 @@ func (c *Controller) syncHandler(key string) error {
// The Foo resource may no longer exist, in which case we stop
// processing.
if errors.IsNotFound(err) {
utilruntime.HandleError(fmt.Errorf("foo '%s' in work queue no longer exists", key))
runtime.HandleError(fmt.Errorf("foo '%s' in work queue no longer exists", key))
return nil
}
@@ -266,7 +269,7 @@ func (c *Controller) syncHandler(key string) error {
// We choose to absorb the error here as the worker would requeue the
// resource otherwise. Instead, the next time the resource is updated
// the resource will be queued again.
utilruntime.HandleError(fmt.Errorf("%s: deployment name must be specified", key))
runtime.HandleError(fmt.Errorf("%s: deployment name must be specified", key))
return nil
}
@@ -296,7 +299,7 @@ func (c *Controller) syncHandler(key string) error {
// number does not equal the current desired replicas on the Deployment, we
// should update the Deployment resource.
if foo.Spec.Replicas != nil && *foo.Spec.Replicas != *deployment.Spec.Replicas {
klog.V(4).Infof("Foo %s replicas: %d, deployment replicas: %d", name, *foo.Spec.Replicas, *deployment.Spec.Replicas)
glog.V(4).Infof("Foo %s replicas: %d, deployment replicas: %d", name, *foo.Spec.Replicas, *deployment.Spec.Replicas)
deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Update(newDeployment(foo))
}
@@ -324,10 +327,10 @@ func (c *Controller) updateFooStatus(foo *samplev1alpha1.Foo, deployment *appsv1
// Or create a copy manually for better performance
fooCopy := foo.DeepCopy()
fooCopy.Status.AvailableReplicas = deployment.Status.AvailableReplicas
// If the CustomResourceSubresources feature gate is not enabled,
// we must use Update instead of UpdateStatus to update the Status block of the Foo resource.
// UpdateStatus will not allow changes to the Spec of the resource,
// which is ideal for ensuring nothing other than resource status has been updated.
// Until #38113 is merged, we must use Update instead of UpdateStatus to
// update the Status block of the Foo resource. UpdateStatus will not
// allow changes to the Spec of the resource, which is ideal for ensuring
// nothing other than resource status has been updated.
_, err := c.sampleclientset.SamplecontrollerV1alpha1().Foos(foo.Namespace).Update(fooCopy)
return err
}
@@ -339,7 +342,7 @@ func (c *Controller) enqueueFoo(obj interface{}) {
var key string
var err error
if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
utilruntime.HandleError(err)
runtime.HandleError(err)
return
}
c.workqueue.AddRateLimited(key)
@@ -356,17 +359,17 @@ func (c *Controller) handleObject(obj interface{}) {
if object, ok = obj.(metav1.Object); !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
utilruntime.HandleError(fmt.Errorf("error decoding object, invalid type"))
runtime.HandleError(fmt.Errorf("error decoding object, invalid type"))
return
}
object, ok = tombstone.Obj.(metav1.Object)
if !ok {
utilruntime.HandleError(fmt.Errorf("error decoding object tombstone, invalid type"))
runtime.HandleError(fmt.Errorf("error decoding object tombstone, invalid type"))
return
}
klog.V(4).Infof("Recovered deleted object '%s' from tombstone", object.GetName())
glog.V(4).Infof("Recovered deleted object '%s' from tombstone", object.GetName())
}
klog.V(4).Infof("Processing object: %s", object.GetName())
glog.V(4).Infof("Processing object: %s", object.GetName())
if ownerRef := metav1.GetControllerOf(object); ownerRef != nil {
// If this object is not owned by a Foo, we should not do anything more
// with it.
@@ -376,7 +379,7 @@ func (c *Controller) handleObject(obj interface{}) {
foo, err := c.foosLister.Foos(object.GetNamespace()).Get(ownerRef.Name)
if err != nil {
klog.V(4).Infof("ignoring orphaned object '%s' of foo '%s'", object.GetSelfLink(), ownerRef.Name)
glog.V(4).Infof("ignoring orphaned object '%s' of foo '%s'", object.GetSelfLink(), ownerRef.Name)
return
}
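As an aside on the `processNextWorkItem` hunk above: the requeue-on-error message it changes corresponds to client-go's rate-limited retry pattern. A minimal, hedged sketch of that pattern (the names and the trivial `syncHandler` here are illustrative, not the controller's actual code):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// processKey mirrors the Forget/AddRateLimited pattern in the hunk above:
// keys that fail to sync are re-queued with backoff, keys that succeed have
// their rate-limit history cleared.
func processKey(queue workqueue.RateLimitingInterface, syncHandler func(string) error, key string) error {
	defer queue.Done(key)
	if err := syncHandler(key); err != nil {
		queue.AddRateLimited(key) // transient failure: retry later with backoff
		return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
	}
	queue.Forget(key) // success: do not keep rate-limiting this key
	return nil
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()

	queue.Add("default/example-foo")
	key, _ := queue.Get()
	_ = processKey(queue, func(string) error { return nil }, key.(string))
}
```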

View File

@@ -1,313 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"reflect"
"testing"
"time"
apps "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/diff"
kubeinformers "k8s.io/client-go/informers"
k8sfake "k8s.io/client-go/kubernetes/fake"
core "k8s.io/client-go/testing"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
samplecontroller "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
"k8s.io/sample-controller/pkg/generated/clientset/versioned/fake"
informers "k8s.io/sample-controller/pkg/generated/informers/externalversions"
)
var (
alwaysReady = func() bool { return true }
noResyncPeriodFunc = func() time.Duration { return 0 }
)
type fixture struct {
t *testing.T
client *fake.Clientset
kubeclient *k8sfake.Clientset
// Objects to put in the store.
fooLister []*samplecontroller.Foo
deploymentLister []*apps.Deployment
// Actions expected to happen on the client.
kubeactions []core.Action
actions []core.Action
// Objects from here preloaded into NewSimpleFake.
kubeobjects []runtime.Object
objects []runtime.Object
}
func newFixture(t *testing.T) *fixture {
f := &fixture{}
f.t = t
f.objects = []runtime.Object{}
f.kubeobjects = []runtime.Object{}
return f
}
func newFoo(name string, replicas *int32) *samplecontroller.Foo {
return &samplecontroller.Foo{
TypeMeta: metav1.TypeMeta{APIVersion: samplecontroller.SchemeGroupVersion.String()},
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: metav1.NamespaceDefault,
},
Spec: samplecontroller.FooSpec{
DeploymentName: fmt.Sprintf("%s-deployment", name),
Replicas: replicas,
},
}
}
func (f *fixture) newController() (*Controller, informers.SharedInformerFactory, kubeinformers.SharedInformerFactory) {
f.client = fake.NewSimpleClientset(f.objects...)
f.kubeclient = k8sfake.NewSimpleClientset(f.kubeobjects...)
i := informers.NewSharedInformerFactory(f.client, noResyncPeriodFunc())
k8sI := kubeinformers.NewSharedInformerFactory(f.kubeclient, noResyncPeriodFunc())
c := NewController(f.kubeclient, f.client,
k8sI.Apps().V1().Deployments(), i.Samplecontroller().V1alpha1().Foos())
c.foosSynced = alwaysReady
c.deploymentsSynced = alwaysReady
c.recorder = &record.FakeRecorder{}
for _, f := range f.fooLister {
i.Samplecontroller().V1alpha1().Foos().Informer().GetIndexer().Add(f)
}
for _, d := range f.deploymentLister {
k8sI.Apps().V1().Deployments().Informer().GetIndexer().Add(d)
}
return c, i, k8sI
}
func (f *fixture) run(fooName string) {
f.runController(fooName, true, false)
}
func (f *fixture) runExpectError(fooName string) {
f.runController(fooName, true, true)
}
func (f *fixture) runController(fooName string, startInformers bool, expectError bool) {
c, i, k8sI := f.newController()
if startInformers {
stopCh := make(chan struct{})
defer close(stopCh)
i.Start(stopCh)
k8sI.Start(stopCh)
}
err := c.syncHandler(fooName)
if !expectError && err != nil {
f.t.Errorf("error syncing foo: %v", err)
} else if expectError && err == nil {
f.t.Error("expected error syncing foo, got nil")
}
actions := filterInformerActions(f.client.Actions())
for i, action := range actions {
if len(f.actions) < i+1 {
f.t.Errorf("%d unexpected actions: %+v", len(actions)-len(f.actions), actions[i:])
break
}
expectedAction := f.actions[i]
checkAction(expectedAction, action, f.t)
}
if len(f.actions) > len(actions) {
f.t.Errorf("%d additional expected actions:%+v", len(f.actions)-len(actions), f.actions[len(actions):])
}
k8sActions := filterInformerActions(f.kubeclient.Actions())
for i, action := range k8sActions {
if len(f.kubeactions) < i+1 {
f.t.Errorf("%d unexpected actions: %+v", len(k8sActions)-len(f.kubeactions), k8sActions[i:])
break
}
expectedAction := f.kubeactions[i]
checkAction(expectedAction, action, f.t)
}
if len(f.kubeactions) > len(k8sActions) {
f.t.Errorf("%d additional expected actions:%+v", len(f.kubeactions)-len(k8sActions), f.kubeactions[len(k8sActions):])
}
}
// checkAction verifies that expected and actual actions are equal and both have
// same attached resources
func checkAction(expected, actual core.Action, t *testing.T) {
if !(expected.Matches(actual.GetVerb(), actual.GetResource().Resource) && actual.GetSubresource() == expected.GetSubresource()) {
t.Errorf("Expected\n\t%#v\ngot\n\t%#v", expected, actual)
return
}
if reflect.TypeOf(actual) != reflect.TypeOf(expected) {
t.Errorf("Action has wrong type. Expected: %t. Got: %t", expected, actual)
return
}
switch a := actual.(type) {
case core.CreateAction:
e, _ := expected.(core.CreateAction)
expObject := e.GetObject()
object := a.GetObject()
if !reflect.DeepEqual(expObject, object) {
t.Errorf("Action %s %s has wrong object\nDiff:\n %s",
a.GetVerb(), a.GetResource().Resource, diff.ObjectGoPrintDiff(expObject, object))
}
case core.UpdateAction:
e, _ := expected.(core.UpdateAction)
expObject := e.GetObject()
object := a.GetObject()
if !reflect.DeepEqual(expObject, object) {
t.Errorf("Action %s %s has wrong object\nDiff:\n %s",
a.GetVerb(), a.GetResource().Resource, diff.ObjectGoPrintDiff(expObject, object))
}
case core.PatchAction:
e, _ := expected.(core.PatchAction)
expPatch := e.GetPatch()
patch := a.GetPatch()
if !reflect.DeepEqual(expPatch, patch) {
t.Errorf("Action %s %s has wrong patch\nDiff:\n %s",
a.GetVerb(), a.GetResource().Resource, diff.ObjectGoPrintDiff(expPatch, patch))
}
}
}
// filterInformerActions filters list and watch actions for testing resources.
// Since list and watch don't change resource state we can filter it to lower
// noise level in our tests.
func filterInformerActions(actions []core.Action) []core.Action {
ret := []core.Action{}
for _, action := range actions {
if len(action.GetNamespace()) == 0 &&
(action.Matches("list", "foos") ||
action.Matches("watch", "foos") ||
action.Matches("list", "deployments") ||
action.Matches("watch", "deployments")) {
continue
}
ret = append(ret, action)
}
return ret
}
func (f *fixture) expectCreateDeploymentAction(d *apps.Deployment) {
f.kubeactions = append(f.kubeactions, core.NewCreateAction(schema.GroupVersionResource{Resource: "deployments"}, d.Namespace, d))
}
func (f *fixture) expectUpdateDeploymentAction(d *apps.Deployment) {
f.kubeactions = append(f.kubeactions, core.NewUpdateAction(schema.GroupVersionResource{Resource: "deployments"}, d.Namespace, d))
}
func (f *fixture) expectUpdateFooStatusAction(foo *samplecontroller.Foo) {
action := core.NewUpdateAction(schema.GroupVersionResource{Resource: "foos"}, foo.Namespace, foo)
// TODO: Until #38113 is merged, we can't use Subresource
//action.Subresource = "status"
f.actions = append(f.actions, action)
}
func getKey(foo *samplecontroller.Foo, t *testing.T) string {
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(foo)
if err != nil {
t.Errorf("Unexpected error getting key for foo %v: %v", foo.Name, err)
return ""
}
return key
}
func TestCreatesDeployment(t *testing.T) {
f := newFixture(t)
foo := newFoo("test", int32Ptr(1))
f.fooLister = append(f.fooLister, foo)
f.objects = append(f.objects, foo)
expDeployment := newDeployment(foo)
f.expectCreateDeploymentAction(expDeployment)
f.expectUpdateFooStatusAction(foo)
f.run(getKey(foo, t))
}
func TestDoNothing(t *testing.T) {
f := newFixture(t)
foo := newFoo("test", int32Ptr(1))
d := newDeployment(foo)
f.fooLister = append(f.fooLister, foo)
f.objects = append(f.objects, foo)
f.deploymentLister = append(f.deploymentLister, d)
f.kubeobjects = append(f.kubeobjects, d)
f.expectUpdateFooStatusAction(foo)
f.run(getKey(foo, t))
}
func TestUpdateDeployment(t *testing.T) {
f := newFixture(t)
foo := newFoo("test", int32Ptr(1))
d := newDeployment(foo)
// Update replicas
foo.Spec.Replicas = int32Ptr(2)
expDeployment := newDeployment(foo)
f.fooLister = append(f.fooLister, foo)
f.objects = append(f.objects, foo)
f.deploymentLister = append(f.deploymentLister, d)
f.kubeobjects = append(f.kubeobjects, d)
f.expectUpdateFooStatusAction(foo)
f.expectUpdateDeploymentAction(expDeployment)
f.run(getKey(foo, t))
}
func TestNotControlledByUs(t *testing.T) {
f := newFixture(t)
foo := newFoo("test", int32Ptr(1))
d := newDeployment(foo)
d.ObjectMeta.OwnerReferences = []metav1.OwnerReference{}
f.fooLister = append(f.fooLister, foo)
f.objects = append(f.objects, foo)
f.deploymentLister = append(f.deploymentLister, d)
f.kubeobjects = append(f.kubeobjects, d)
f.runExpectError(getKey(foo, t))
}
func int32Ptr(i int32) *int32 { return &i }

View File

@@ -1,64 +0,0 @@
# client-go under the hood
The [client-go](https://github.com/kubernetes/client-go/) library contains various mechanisms that you can use when
developing your custom controllers. These mechanisms are defined in the
[tools/cache folder](https://github.com/kubernetes/client-go/tree/master/tools/cache) of the library.
Here is a pictorial representation showing how the various components in
the client-go library work and their interaction points with the custom
controller code that you will write.
<p align="center">
<img src="images/client-go-controller-interaction.jpeg" height="600" width="700"/>
</p>
## client-go components
* Reflector: A reflector, which is defined in [type *Reflector* inside package *cache*](https://github.com/kubernetes/client-go/blob/master/tools/cache/reflector.go),
watches the Kubernetes API for the specified resource type (kind).
The function in which this is done is *ListAndWatch*.
The watch could be for an in-built resource or it could be for a custom resource.
When the reflector receives a notification about the existence of a new
resource instance through the watch API, it gets the newly created object
using the corresponding listing API and puts it in the Delta Fifo queue
inside the *watchHandler* function.
* Informer: An informer defined in the [base controller inside package *cache*](https://github.com/kubernetes/client-go/blob/master/tools/cache/controller.go) pops objects from the Delta Fifo queue.
The function in which this is done is *processLoop*. The job of this base controller
is to save the object for later retrieval, and to invoke our controller passing it the object.
* Indexer: An indexer provides indexing functionality over objects.
It is defined in [type *Indexer* inside package *cache*](https://github.com/kubernetes/client-go/blob/master/tools/cache/index.go). A typical indexing use-case is to create an index based on object labels. Indexer can
maintain indexes based on several indexing functions.
Indexer uses a thread-safe data store to store objects and their keys.
There is a default function named *MetaNamespaceKeyFunc* defined in [type Store inside package cache](https://github.com/kubernetes/client-go/blob/master/tools/cache/store.go)
that generates an object's key as a `<namespace>/<name>` combination for that object.
## Custom Controller components
* Informer reference: This is the reference to the Informer instance that knows
how to work with your custom resource objects. Your custom controller code needs
to create the appropriate Informer.
* Indexer reference: This is the reference to the Indexer instance that knows
how to work with your custom resource objects. Your custom controller code needs
to create this. You will be using this reference for retrieving objects for
later processing.
The base controller in client-go provides the *NewIndexerInformer* function to create Informer and Indexer.
In your code you can either [directly invoke this function](https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go#L174) or [use factory methods for creating an informer.](https://github.com/kubernetes/sample-controller/blob/master/main.go#L61)
* Resource Event Handlers: These are the callback functions which will be called by
the Informer when it wants to deliver an object to your controller. The typical
pattern to write these functions is to obtain the dispatched object's key
and enqueue that key in a work queue for further processing.
* Work queue: This is the queue that you create in your controller code to decouple
delivery of an object from its processing. Resource event handler functions are written
to extract the delivered object's key and add that to the work queue.
* Process Item: This is the function that you create in your code which processes items
from the work queue. There can be one or more other functions that do the actual processing.
These functions will typically use the [Indexer reference](https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go#L73), or a Listing wrapper to retrieve the object corresponding to the key.
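The removed document above describes the Reflector → Delta FIFO → Informer → work queue flow in prose. As a rough, hedged sketch of that wiring (assuming the standard client-go informers, cache, and workqueue packages; the resource choice, kubeconfig path, and names are illustrative, not the sample controller itself):
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	kubeinformers "k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Assumes a kubeconfig in the default location; error handling is minimal.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	factory := kubeinformers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Apps().V1().Deployments().Informer()

	// Resource event handlers only enqueue keys; processing happens in workers.
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, informer.HasSynced)

	// Worker: pop keys from the queue and look the object up via the indexer.
	go wait.Until(func() {
		for {
			key, shutdown := queue.Get()
			if shutdown {
				return
			}
			_, exists, _ := informer.GetIndexer().GetByKey(key.(string))
			fmt.Printf("processed %v (exists in cache: %v)\n", key, exists)
			queue.Done(key)
		}
	}, time.Second, stopCh)

	// Run briefly for demonstration purposes, then exit.
	time.Sleep(10 * time.Second)
}
```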

Binary file not shown (image, 109 KiB)

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright YEAR The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
@@ -18,18 +18,18 @@ set -o errexit
set -o nounset
set -o pipefail
SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
CODEGEN_PKG=${CODEGEN_PKG:-$(cd "${SCRIPT_ROOT}"; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)}
SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/..
CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SCRIPT_ROOT}; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)}
# generate the code with:
# --output-base because this script should also be able to run inside the vendor dir of
# k8s.io/kubernetes. The output-base is needed for the generators to output into the vendor dir
# instead of the $GOPATH directly. For normal projects this can be dropped.
"${CODEGEN_PKG}"/generate-groups.sh "deepcopy,client,informer,lister" \
k8s.io/sample-controller/pkg/generated k8s.io/sample-controller/pkg/apis \
${CODEGEN_PKG}/generate-groups.sh "deepcopy,client,informer,lister" \
k8s.io/sample-controller/pkg/client k8s.io/sample-controller/pkg/apis \
samplecontroller:v1alpha1 \
--output-base "$(dirname "${BASH_SOURCE[0]}")/../../.." \
--go-header-file "${SCRIPT_ROOT}"/hack/boilerplate.go.txt
--output-base "$(dirname ${BASH_SOURCE})/../../.." \
--go-header-file ${SCRIPT_ROOT}/hack/boilerplate.go.txt
# To use your own boilerplate text append:
# --go-header-file "${SCRIPT_ROOT}"/hack/custom-boilerplate.go.txt
# To use your own boilerplate text use:
# --go-header-file ${SCRIPT_ROOT}/hack/custom-boilerplate.go.txt
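For context, the generators configured in this script are typically re-run after editing the API types; the working-directory assumption here is illustrative:
```sh
# Hedged example: regenerate the clientset, informers, listers, and deepcopy
# helpers after changing pkg/apis; run from the repository root.
./hack/update-codegen.sh
```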

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
@@ -18,7 +18,7 @@ set -o errexit
set -o nounset
set -o pipefail
SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
SCRIPT_ROOT=$(dirname "${BASH_SOURCE}")/..
DIFFROOT="${SCRIPT_ROOT}/pkg"
TMP_DIFFROOT="${SCRIPT_ROOT}/_tmp/pkg"

24
main.go
View File

@@ -20,15 +20,15 @@ import (
"flag"
"time"
"github.com/golang/glog"
kubeinformers "k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog"
// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
clientset "k8s.io/sample-controller/pkg/generated/clientset/versioned"
informers "k8s.io/sample-controller/pkg/generated/informers/externalversions"
clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
informers "k8s.io/sample-controller/pkg/client/informers/externalversions"
"k8s.io/sample-controller/pkg/signals"
)
@@ -45,33 +45,29 @@ func main() {
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
klog.Fatalf("Error building kubeconfig: %s", err.Error())
glog.Fatalf("Error building kubeconfig: %s", err.Error())
}
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}
exampleClient, err := clientset.NewForConfig(cfg)
if err != nil {
klog.Fatalf("Error building example clientset: %s", err.Error())
glog.Fatalf("Error building example clientset: %s", err.Error())
}
kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
exampleInformerFactory := informers.NewSharedInformerFactory(exampleClient, time.Second*30)
controller := NewController(kubeClient, exampleClient,
kubeInformerFactory.Apps().V1().Deployments(),
exampleInformerFactory.Samplecontroller().V1alpha1().Foos())
controller := NewController(kubeClient, exampleClient, kubeInformerFactory, exampleInformerFactory)
// notice that there is no need to run Start methods in a separate goroutine (i.e. go kubeInformerFactory.Start(stopCh)).
// Start method is non-blocking and runs all registered informers in a dedicated goroutine.
kubeInformerFactory.Start(stopCh)
exampleInformerFactory.Start(stopCh)
go kubeInformerFactory.Start(stopCh)
go exampleInformerFactory.Start(stopCh)
if err = controller.Run(2, stopCh); err != nil {
klog.Fatalf("Error running controller: %s", err.Error())
glog.Fatalf("Error running controller: %s", err.Error())
}
}

View File

@@ -15,7 +15,7 @@ limitations under the License.
*/
// +k8s:deepcopy-gen=package
// +groupName=samplecontroller.k8s.io
// Package v1alpha1 is the v1alpha1 version of the API.
// +groupName=samplecontroller.k8s.io
package v1alpha1

View File

@@ -21,6 +21,7 @@ import (
)
// +genclient
// +genclient:noStatus
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Foo is a specification for a Foo resource

View File

@@ -1,7 +1,7 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -90,8 +90,12 @@ func (in *FooSpec) DeepCopyInto(out *FooSpec) {
*out = *in
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int32)
**out = **in
if *in == nil {
*out = nil
} else {
*out = new(int32)
**out = **in
}
}
return
}

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -19,15 +19,18 @@ limitations under the License.
package versioned
import (
glog "github.com/golang/glog"
discovery "k8s.io/client-go/discovery"
rest "k8s.io/client-go/rest"
flowcontrol "k8s.io/client-go/util/flowcontrol"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/generated/clientset/versioned/typed/samplecontroller/v1alpha1"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1"
)
type Interface interface {
Discovery() discovery.DiscoveryInterface
SamplecontrollerV1alpha1() samplecontrollerv1alpha1.SamplecontrollerV1alpha1Interface
// Deprecated: please explicitly pick a version if possible.
Samplecontroller() samplecontrollerv1alpha1.SamplecontrollerV1alpha1Interface
}
// Clientset contains the clients for groups. Each group has exactly one
@@ -42,6 +45,12 @@ func (c *Clientset) SamplecontrollerV1alpha1() samplecontrollerv1alpha1.Sampleco
return c.samplecontrollerV1alpha1
}
// Deprecated: Samplecontroller retrieves the default version of SamplecontrollerClient.
// Please explicitly pick a version.
func (c *Clientset) Samplecontroller() samplecontrollerv1alpha1.SamplecontrollerV1alpha1Interface {
return c.samplecontrollerV1alpha1
}
// Discovery retrieves the DiscoveryClient
func (c *Clientset) Discovery() discovery.DiscoveryInterface {
if c == nil {
@@ -65,6 +74,7 @@ func NewForConfig(c *rest.Config) (*Clientset, error) {
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
if err != nil {
glog.Errorf("failed to create the DiscoveryClient: %v", err)
return nil, err
}
return &cs, nil

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,9 +24,9 @@ import (
"k8s.io/client-go/discovery"
fakediscovery "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/testing"
clientset "k8s.io/sample-controller/pkg/generated/clientset/versioned"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/generated/clientset/versioned/typed/samplecontroller/v1alpha1"
fakesamplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/generated/clientset/versioned/typed/samplecontroller/v1alpha1/fake"
clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1"
fakesamplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1/fake"
)
// NewSimpleClientset returns a clientset that will respond with the provided objects.
@@ -41,10 +41,9 @@ func NewSimpleClientset(objects ...runtime.Object) *Clientset {
}
}
cs := &Clientset{}
cs.discovery = &fakediscovery.FakeDiscovery{Fake: &cs.Fake}
cs.AddReactor("*", "*", testing.ObjectReaction(o))
cs.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) {
fakePtr := testing.Fake{}
fakePtr.AddReactor("*", "*", testing.ObjectReaction(o))
fakePtr.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) {
gvr := action.GetResource()
ns := action.GetNamespace()
watch, err := o.Watch(gvr, ns)
@@ -54,7 +53,7 @@ func NewSimpleClientset(objects ...runtime.Object) *Clientset {
return true, watch, nil
})
return cs
return &Clientset{fakePtr, &fakediscovery.FakeDiscovery{Fake: &fakePtr}}
}
// Clientset implements clientset.Interface. Meant to be embedded into a
@@ -75,3 +74,8 @@ var _ clientset.Interface = &Clientset{}
func (c *Clientset) SamplecontrollerV1alpha1() samplecontrollerv1alpha1.SamplecontrollerV1alpha1Interface {
return &fakesamplecontrollerv1alpha1.FakeSamplecontrollerV1alpha1{Fake: &c.Fake}
}
// Samplecontroller retrieves the SamplecontrollerV1alpha1Client
func (c *Clientset) Samplecontroller() samplecontrollerv1alpha1.SamplecontrollerV1alpha1Interface {
return &fakesamplecontrollerv1alpha1.FakeSamplecontrollerV1alpha1{Fake: &c.Fake}
}
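A hedged sketch of how a unit test might use this generated fake clientset; the Foo object fields are illustrative, and the `pkg/generated/...` import path follows one of the two package layouts visible in this diff (the older layout uses `pkg/client/...` instead):
```go
package sample

import (
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
	"k8s.io/sample-controller/pkg/generated/clientset/versioned/fake"
)

func TestFakeClientsetPreloadsObjects(t *testing.T) {
	foo := &samplev1alpha1.Foo{
		ObjectMeta: metav1.ObjectMeta{Name: "example-foo", Namespace: "default"},
	}

	// Objects passed to NewSimpleClientset are preloaded into the fake's
	// object tracker, so reads through the clientset see them immediately.
	cs := fake.NewSimpleClientset(foo)

	got, err := cs.SamplecontrollerV1alpha1().Foos("default").Get("example-foo", metav1.GetOptions{})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got.Name != "example-foo" {
		t.Fatalf("got %q, want %q", got.Name, "example-foo")
	}
}
```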

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -23,15 +23,16 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
serializer "k8s.io/apimachinery/pkg/runtime/serializer"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
)
var scheme = runtime.NewScheme()
var codecs = serializer.NewCodecFactory(scheme)
var parameterCodec = runtime.NewParameterCodec(scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
samplecontrollerv1alpha1.AddToScheme,
func init() {
v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
AddToScheme(scheme)
}
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
@@ -44,13 +45,10 @@ var localSchemeBuilder = runtime.SchemeBuilder{
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
// aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
var AddToScheme = localSchemeBuilder.AddToScheme
func init() {
v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
utilruntime.Must(AddToScheme(scheme))
func AddToScheme(scheme *runtime.Scheme) {
samplecontrollerv1alpha1.AddToScheme(scheme)
}

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -23,15 +23,16 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
serializer "k8s.io/apimachinery/pkg/runtime/serializer"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
)
var Scheme = runtime.NewScheme()
var Codecs = serializer.NewCodecFactory(Scheme)
var ParameterCodec = runtime.NewParameterCodec(Scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
samplecontrollerv1alpha1.AddToScheme,
func init() {
v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
AddToScheme(Scheme)
}
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
@@ -44,13 +45,10 @@ var localSchemeBuilder = runtime.SchemeBuilder{
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
// aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
var AddToScheme = localSchemeBuilder.AddToScheme
func init() {
v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
utilruntime.Must(AddToScheme(Scheme))
func AddToScheme(scheme *runtime.Scheme) {
samplecontrollerv1alpha1.AddToScheme(scheme)
}
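The doc comment in the hunks above describes composing this clientset's types into another scheme so that embedded RawExtensions serialize correctly. A hedged sketch of that composition for the scheme-builder variant (where `AddToScheme` returns an error); package aliases and the `pkg/generated/...` path are illustrative:
```go
package main

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientsetscheme "k8s.io/client-go/kubernetes/scheme"
	samplescheme "k8s.io/sample-controller/pkg/generated/clientset/versioned/scheme"
)

func init() {
	// utilruntime.Must turns a failed registration into a startup panic
	// rather than a silently incomplete scheme.
	utilruntime.Must(samplescheme.AddToScheme(clientsetscheme.Scheme))
}

func main() {}
```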

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -62,7 +62,7 @@ func (c *FakeFoos) List(opts v1.ListOptions) (result *v1alpha1.FooList, err erro
if label == nil {
label = labels.Everything()
}
list := &v1alpha1.FooList{ListMeta: obj.(*v1alpha1.FooList).ListMeta}
list := &v1alpha1.FooList{}
for _, item := range obj.(*v1alpha1.FooList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
@@ -100,18 +100,6 @@ func (c *FakeFoos) Update(foo *v1alpha1.Foo) (result *v1alpha1.Foo, err error) {
return obj.(*v1alpha1.Foo), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeFoos) UpdateStatus(foo *v1alpha1.Foo) (*v1alpha1.Foo, error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateSubresourceAction(foosResource, "status", c.ns, foo), &v1alpha1.Foo{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Foo), err
}
// Delete takes name of the foo and deletes it. Returns an error if one occurs.
func (c *FakeFoos) Delete(name string, options *v1.DeleteOptions) error {
_, err := c.Fake.
@@ -131,7 +119,7 @@ func (c *FakeFoos) DeleteCollection(options *v1.DeleteOptions, listOptions v1.Li
// Patch applies the patch and returns the patched foo.
func (c *FakeFoos) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1alpha1.Foo, err error) {
obj, err := c.Fake.
Invokes(testing.NewPatchSubresourceAction(foosResource, c.ns, name, pt, data, subresources...), &v1alpha1.Foo{})
Invokes(testing.NewPatchSubresourceAction(foosResource, c.ns, name, data, subresources...), &v1alpha1.Foo{})
if obj == nil {
return nil, err

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,7 +21,7 @@ package fake
import (
rest "k8s.io/client-go/rest"
testing "k8s.io/client-go/testing"
v1alpha1 "k8s.io/sample-controller/pkg/generated/clientset/versioned/typed/samplecontroller/v1alpha1"
v1alpha1 "k8s.io/sample-controller/pkg/client/clientset/versioned/typed/samplecontroller/v1alpha1"
)
type FakeSamplecontrollerV1alpha1 struct {

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -19,14 +19,12 @@ limitations under the License.
package v1alpha1
import (
"time"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
v1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
scheme "k8s.io/sample-controller/pkg/generated/clientset/versioned/scheme"
scheme "k8s.io/sample-controller/pkg/client/clientset/versioned/scheme"
)
// FoosGetter has a method to return a FooInterface.
@@ -39,7 +37,6 @@ type FoosGetter interface {
type FooInterface interface {
Create(*v1alpha1.Foo) (*v1alpha1.Foo, error)
Update(*v1alpha1.Foo) (*v1alpha1.Foo, error)
UpdateStatus(*v1alpha1.Foo) (*v1alpha1.Foo, error)
Delete(name string, options *v1.DeleteOptions) error
DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error
Get(name string, options v1.GetOptions) (*v1alpha1.Foo, error)
@@ -78,16 +75,11 @@ func (c *foos) Get(name string, options v1.GetOptions) (result *v1alpha1.Foo, er
// List takes label and field selectors, and returns the list of Foos that match those selectors.
func (c *foos) List(opts v1.ListOptions) (result *v1alpha1.FooList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1alpha1.FooList{}
err = c.client.Get().
Namespace(c.ns).
Resource("foos").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do().
Into(result)
return
@@ -95,16 +87,11 @@ func (c *foos) List(opts v1.ListOptions) (result *v1alpha1.FooList, err error) {
// Watch returns a watch.Interface that watches the requested foos.
func (c *foos) Watch(opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Namespace(c.ns).
Resource("foos").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch()
}
@@ -133,22 +120,6 @@ func (c *foos) Update(foo *v1alpha1.Foo) (result *v1alpha1.Foo, err error) {
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *foos) UpdateStatus(foo *v1alpha1.Foo) (result *v1alpha1.Foo, err error) {
result = &v1alpha1.Foo{}
err = c.client.Put().
Namespace(c.ns).
Resource("foos").
Name(foo.Name).
SubResource("status").
Body(foo).
Do().
Into(result)
return
}
// Delete takes name of the foo and deletes it. Returns an error if one occurs.
func (c *foos) Delete(name string, options *v1.DeleteOptions) error {
return c.client.Delete().
@@ -162,15 +133,10 @@ func (c *foos) Delete(name string, options *v1.DeleteOptions) error {
// DeleteCollection deletes a collection of objects.
func (c *foos) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error {
var timeout time.Duration
if listOptions.TimeoutSeconds != nil {
timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Namespace(c.ns).
Resource("foos").
VersionedParams(&listOptions, scheme.ParameterCodec).
Timeout(timeout).
Body(options).
Do().
Error()
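The `UpdateStatus` method surfaced in this hunk exists only in the client variant generated without `+genclient:noStatus`. A hedged sketch of how a controller might call it, mirroring the `updateFooStatus` logic elsewhere in this diff (function name and import paths are illustrative):
```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
	clientset "k8s.io/sample-controller/pkg/generated/clientset/versioned"
)

// updateFooStatus persists only the Status block of the Foo via the /status
// subresource; changes to Spec are ignored by UpdateStatus.
func updateFooStatus(c clientset.Interface, foo *samplev1alpha1.Foo, d *appsv1.Deployment) error {
	fooCopy := foo.DeepCopy()
	fooCopy.Status.AvailableReplicas = d.Status.AvailableReplicas
	_, err := c.SamplecontrollerV1alpha1().Foos(foo.Namespace).UpdateStatus(fooCopy)
	return err
}

func main() {}
```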

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,7 +22,7 @@ import (
serializer "k8s.io/apimachinery/pkg/runtime/serializer"
rest "k8s.io/client-go/rest"
v1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
"k8s.io/sample-controller/pkg/generated/clientset/versioned/scheme"
"k8s.io/sample-controller/pkg/client/clientset/versioned/scheme"
)
type SamplecontrollerV1alpha1Interface interface {

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -27,21 +27,17 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
cache "k8s.io/client-go/tools/cache"
versioned "k8s.io/sample-controller/pkg/generated/clientset/versioned"
internalinterfaces "k8s.io/sample-controller/pkg/generated/informers/externalversions/internalinterfaces"
samplecontroller "k8s.io/sample-controller/pkg/generated/informers/externalversions/samplecontroller"
versioned "k8s.io/sample-controller/pkg/client/clientset/versioned"
internalinterfaces "k8s.io/sample-controller/pkg/client/informers/externalversions/internalinterfaces"
samplecontroller "k8s.io/sample-controller/pkg/client/informers/externalversions/samplecontroller"
)
// SharedInformerOption defines the functional option type for SharedInformerFactory.
type SharedInformerOption func(*sharedInformerFactory) *sharedInformerFactory
type sharedInformerFactory struct {
client versioned.Interface
namespace string
tweakListOptions internalinterfaces.TweakListOptionsFunc
lock sync.Mutex
defaultResync time.Duration
customResync map[reflect.Type]time.Duration
informers map[reflect.Type]cache.SharedIndexInformer
// startedInformers is used for tracking which informers have been started.
@@ -49,62 +45,23 @@ type sharedInformerFactory struct {
startedInformers map[reflect.Type]bool
}
// WithCustomResyncConfig sets a custom resync period for the specified informer types.
func WithCustomResyncConfig(resyncConfig map[v1.Object]time.Duration) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
for k, v := range resyncConfig {
factory.customResync[reflect.TypeOf(k)] = v
}
return factory
}
}
// WithTweakListOptions sets a custom filter on all listers of the configured SharedInformerFactory.
func WithTweakListOptions(tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
factory.tweakListOptions = tweakListOptions
return factory
}
}
// WithNamespace limits the SharedInformerFactory to the specified namespace.
func WithNamespace(namespace string) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
factory.namespace = namespace
return factory
}
}
// NewSharedInformerFactory constructs a new instance of sharedInformerFactory for all namespaces.
// NewSharedInformerFactory constructs a new instance of sharedInformerFactory
func NewSharedInformerFactory(client versioned.Interface, defaultResync time.Duration) SharedInformerFactory {
return NewSharedInformerFactoryWithOptions(client, defaultResync)
return NewFilteredSharedInformerFactory(client, defaultResync, v1.NamespaceAll, nil)
}
// NewFilteredSharedInformerFactory constructs a new instance of sharedInformerFactory.
// Listers obtained via this SharedInformerFactory will be subject to the same filters
// as specified here.
// Deprecated: Please use NewSharedInformerFactoryWithOptions instead
func NewFilteredSharedInformerFactory(client versioned.Interface, defaultResync time.Duration, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerFactory {
return NewSharedInformerFactoryWithOptions(client, defaultResync, WithNamespace(namespace), WithTweakListOptions(tweakListOptions))
}
// NewSharedInformerFactoryWithOptions constructs a new instance of a SharedInformerFactory with additional options.
func NewSharedInformerFactoryWithOptions(client versioned.Interface, defaultResync time.Duration, options ...SharedInformerOption) SharedInformerFactory {
factory := &sharedInformerFactory{
return &sharedInformerFactory{
client: client,
namespace: v1.NamespaceAll,
namespace: namespace,
tweakListOptions: tweakListOptions,
defaultResync: defaultResync,
informers: make(map[reflect.Type]cache.SharedIndexInformer),
startedInformers: make(map[reflect.Type]bool),
customResync: make(map[reflect.Type]time.Duration),
}
// Apply all options
for _, opt := range options {
factory = opt(factory)
}
return factory
}
// Start initializes all requested informers.
@@ -153,13 +110,7 @@ func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internal
if exists {
return informer
}
resyncPeriod, exists := f.customResync[informerType]
if !exists {
resyncPeriod = f.defaultResync
}
informer = newFunc(f.client, resyncPeriod)
informer = newFunc(f.client, f.defaultResync)
f.informers[informerType] = informer
return informer

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,10 +24,9 @@ import (
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
cache "k8s.io/client-go/tools/cache"
versioned "k8s.io/sample-controller/pkg/generated/clientset/versioned"
versioned "k8s.io/sample-controller/pkg/client/clientset/versioned"
)
// NewInformerFunc takes versioned.Interface and time.Duration to return a SharedIndexInformer.
type NewInformerFunc func(versioned.Interface, time.Duration) cache.SharedIndexInformer
// SharedInformerFactory a small interface to allow for adding an informer without an import cycle
@@ -36,5 +35,4 @@ type SharedInformerFactory interface {
InformerFor(obj runtime.Object, newFunc NewInformerFunc) cache.SharedIndexInformer
}
// TweakListOptionsFunc is a function that transforms a v1.ListOptions.
type TweakListOptionsFunc func(*v1.ListOptions)

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -19,8 +19,8 @@ limitations under the License.
package samplecontroller
import (
internalinterfaces "k8s.io/sample-controller/pkg/generated/informers/externalversions/internalinterfaces"
v1alpha1 "k8s.io/sample-controller/pkg/generated/informers/externalversions/samplecontroller/v1alpha1"
internalinterfaces "k8s.io/sample-controller/pkg/client/informers/externalversions/internalinterfaces"
v1alpha1 "k8s.io/sample-controller/pkg/client/informers/externalversions/samplecontroller/v1alpha1"
)
// Interface provides access to each of this group's versions.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -25,10 +25,10 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
samplecontrollerv1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
versioned "k8s.io/sample-controller/pkg/generated/clientset/versioned"
internalinterfaces "k8s.io/sample-controller/pkg/generated/informers/externalversions/internalinterfaces"
v1alpha1 "k8s.io/sample-controller/pkg/generated/listers/samplecontroller/v1alpha1"
samplecontroller_v1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
versioned "k8s.io/sample-controller/pkg/client/clientset/versioned"
internalinterfaces "k8s.io/sample-controller/pkg/client/informers/externalversions/internalinterfaces"
v1alpha1 "k8s.io/sample-controller/pkg/client/listers/samplecontroller/v1alpha1"
)
// FooInformer provides access to a shared informer and lister for
@@ -70,7 +70,7 @@ func NewFilteredFooInformer(client versioned.Interface, namespace string, resync
return client.SamplecontrollerV1alpha1().Foos(namespace).Watch(options)
},
},
&samplecontrollerv1alpha1.Foo{},
&samplecontroller_v1alpha1.Foo{},
resyncPeriod,
indexers,
)
@@ -81,7 +81,7 @@ func (f *fooInformer) defaultInformer(client versioned.Interface, resyncPeriod t
}
func (f *fooInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&samplecontrollerv1alpha1.Foo{}, f.defaultInformer)
return f.factory.InformerFor(&samplecontroller_v1alpha1.Foo{}, f.defaultInformer)
}
func (f *fooInformer) Lister() v1alpha1.FooLister {

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -19,7 +19,7 @@ limitations under the License.
package v1alpha1
import (
internalinterfaces "k8s.io/sample-controller/pkg/generated/informers/externalversions/internalinterfaces"
internalinterfaces "k8s.io/sample-controller/pkg/client/informers/externalversions/internalinterfaces"
)
// Interface provides access to all the informers in this group version.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +1,5 @@
/*
Copyright The Kubernetes Authors.
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,20 +0,0 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated clientset.
package versioned

View File

@@ -1,20 +0,0 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated fake clientset.
package fake

View File

@@ -1,20 +0,0 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package contains the scheme of the automatically generated clientset.
package scheme

View File

@@ -1,20 +0,0 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// Package fake has the automatically generated clients.
package fake

View File

@@ -1,16 +0,0 @@
language: go
go:
- 1.8
- 1.7
install:
- if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
- go get github.com/jessevdk/go-flags
script:
- go get
- go test -cover ./...
notifications:
email: false

View File

@@ -1,25 +0,0 @@
Copyright (c) 2014, Evan Phoenix
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the Evan Phoenix nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,297 +0,0 @@
# JSON-Patch
`jsonpatch` is a library which provides functionality for both applying
[RFC6902 JSON patches](http://tools.ietf.org/html/rfc6902) against documents and
calculating & applying [RFC7396 JSON merge patches](https://tools.ietf.org/html/rfc7396).
[![GoDoc](https://godoc.org/github.com/evanphx/json-patch?status.svg)](http://godoc.org/github.com/evanphx/json-patch)
[![Build Status](https://travis-ci.org/evanphx/json-patch.svg?branch=master)](https://travis-ci.org/evanphx/json-patch)
[![Report Card](https://goreportcard.com/badge/github.com/evanphx/json-patch)](https://goreportcard.com/report/github.com/evanphx/json-patch)
# Get It!
**Latest and greatest**:
```bash
go get -u github.com/evanphx/json-patch
```
**Stable Versions**:
* Version 4: `go get -u gopkg.in/evanphx/json-patch.v4`
(previous versions below `v3` are unavailable)
# Use It!
* [Create and apply a merge patch](#create-and-apply-a-merge-patch)
* [Create and apply a JSON Patch](#create-and-apply-a-json-patch)
* [Comparing JSON documents](#comparing-json-documents)
* [Combine merge patches](#combine-merge-patches)
# Configuration
* There is a global configuration variable `jsonpatch.SupportNegativeIndices`.
This defaults to `true` and enables the non-standard practice of allowing
negative indices to mean indices starting at the end of an array. This
functionality can be disabled by setting `jsonpatch.SupportNegativeIndices =
false`.
* There is a global configuration variable `jsonpatch.AccumulatedCopySizeLimit`,
which limits the total size increase in bytes caused by "copy" operations in a
patch. It defaults to 0, which means there is no limit. A short sketch of tuning both settings follows this list.
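Both settings are plain package-level variables, so they are set once before any
patches are applied. A minimal sketch (the document and patch below are invented
for illustration):

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Disable the non-standard negative-index handling and cap the total
	// growth allowed from "copy" operations at 1 MiB.
	jsonpatch.SupportNegativeIndices = false
	jsonpatch.AccumulatedCopySizeLimit = 1 << 20

	doc := []byte(`{"items": ["a", "b", "c"]}`)
	patchJSON := []byte(`[{"op": "copy", "from": "/items/0", "path": "/items/-"}]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		panic(err)
	}

	modified, err := patch.Apply(doc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("modified document: %s\n", modified)
}
```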
## Create and apply a merge patch
Given both an original JSON document and a modified JSON document, you can create
a [Merge Patch](https://tools.ietf.org/html/rfc7396) document.
It can describe the changes needed to convert from the original to the
modified JSON document.
Once you have a merge patch, you can apply it to other JSON documents using the
`jsonpatch.MergePatch(document, patch)` function.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
// Let's create a merge patch from these two documents...
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
target := []byte(`{"name": "Jane", "age": 24}`)
patch, err := jsonpatch.CreateMergePatch(original, target)
if err != nil {
panic(err)
}
// Now let's apply the patch against a different JSON document...
alternative := []byte(`{"name": "Tina", "age": 28, "height": 3.75}`)
modifiedAlternative, err := jsonpatch.MergePatch(alternative, patch)
fmt.Printf("patch document: %s\n", patch)
fmt.Printf("updated alternative doc: %s\n", modifiedAlternative)
}
```
When run, you get the following output:
```bash
$ go run main.go
patch document: {"height":null,"name":"Jane"}
updated alternative doc: {"age":28,"name":"Jane"}
```
## Create and apply a JSON Patch
You can create patch objects using `DecodePatch([]byte)`, which can then
be applied against JSON documents.
The following is an example of creating a patch from two operations, and
applying it against a JSON document.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
patchJSON := []byte(`[
{"op": "replace", "path": "/name", "value": "Jane"},
{"op": "remove", "path": "/height"}
]`)
patch, err := jsonpatch.DecodePatch(patchJSON)
if err != nil {
panic(err)
}
modified, err := patch.Apply(original)
if err != nil {
panic(err)
}
fmt.Printf("Original document: %s\n", original)
fmt.Printf("Modified document: %s\n", modified)
}
```
When run, you get the following output:
```bash
$ go run main.go
Original document: {"name": "John", "age": 24, "height": 3.21}
Modified document: {"age":24,"name":"Jane"}
```
## Comparing JSON documents
Due to potential whitespace and ordering differences, one cannot simply compare
JSON strings or byte-arrays directly.
As such, you can instead use `jsonpatch.Equal(document1, document2)` to
determine if two JSON documents are _structurally_ equal. This ignores
whitespace differences, and key-value ordering.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
similar := []byte(`
{
"age": 24,
"height": 3.21,
"name": "John"
}
`)
different := []byte(`{"name": "Jane", "age": 20, "height": 3.37}`)
if jsonpatch.Equal(original, similar) {
fmt.Println(`"original" is structurally equal to "similar"`)
}
if !jsonpatch.Equal(original, different) {
fmt.Println(`"original" is _not_ structurally equal to "similar"`)
}
}
```
When run, you get the following output:
```bash
$ go run main.go
"original" is structurally equal to "similar"
"original" is _not_ structurally equal to "similar"
```
## Combine merge patches
Given two JSON merge patch documents, it is possible to combine them into a
single merge patch which can describe both set of changes.
Applying the resulting combined merge patch to a document is structurally
equivalent to merging each merge patch into the document in succession.
```go
package main
import (
"fmt"
jsonpatch "github.com/evanphx/json-patch"
)
func main() {
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
nameAndHeight := []byte(`{"height":null,"name":"Jane"}`)
ageAndEyes := []byte(`{"age":4.23,"eyes":"blue"}`)
// Let's combine these merge patch documents...
combinedPatch, err := jsonpatch.MergeMergePatches(nameAndHeight, ageAndEyes)
if err != nil {
panic(err)
}
// Apply each patch individually against the original document
withoutCombinedPatch, err := jsonpatch.MergePatch(original, nameAndHeight)
if err != nil {
panic(err)
}
withoutCombinedPatch, err = jsonpatch.MergePatch(withoutCombinedPatch, ageAndEyes)
if err != nil {
panic(err)
}
// Apply the combined patch against the original document
withCombinedPatch, err := jsonpatch.MergePatch(original, combinedPatch)
if err != nil {
panic(err)
}
// Do both result in the same thing? They should!
if jsonpatch.Equal(withCombinedPatch, withoutCombinedPatch) {
fmt.Println("Both JSON documents are structurally the same!")
}
fmt.Printf("combined merge patch: %s", combinedPatch)
}
```
When run, you get the following output:
```bash
$ go run main.go
Both JSON documents are structurally the same!
combined merge patch: {"age":4.23,"eyes":"blue","height":null,"name":"Jane"}
```
# CLI for comparing JSON documents
You can install the command-line program `json-patch`.
This program takes multiple JSON patch documents as arguments and reads a JSON
document from `stdin`. It applies the patch(es) against the document and outputs
the modified doc.
**patch.1.json**
```json
[
{"op": "replace", "path": "/name", "value": "Jane"},
{"op": "remove", "path": "/height"}
]
```
**patch.2.json**
```json
[
{"op": "add", "path": "/address", "value": "123 Main St"},
{"op": "replace", "path": "/age", "value": "21"}
]
```
**document.json**
```json
{
"name": "John",
"age": 24,
"height": 3.21
}
```
You can then run:
```bash
$ go install github.com/evanphx/json-patch/cmd/json-patch
$ cat document.json | json-patch -p patch.1.json -p patch.2.json
{"address":"123 Main St","age":"21","name":"Jane"}
```
# Help It!
Contributions are welcomed! Leave [an issue](https://github.com/evanphx/json-patch/issues)
or [create a PR](https://github.com/evanphx/json-patch/compare).
Before creating a pull request, we'd ask that you make sure tests are passing
and that you have added new tests when applicable.
Contributors can run tests using:
```bash
go test -cover ./...
```
Builds for pull requests are tested automatically
using [TravisCI](https://travis-ci.org/evanphx/json-patch).

View File

@@ -1,38 +0,0 @@
package jsonpatch
import "fmt"
// AccumulatedCopySizeError is an error type returned when the accumulated size
// increase caused by copy operations in a patch operation has exceeded the
// limit.
type AccumulatedCopySizeError struct {
limit int64
accumulated int64
}
// NewAccumulatedCopySizeError returns an AccumulatedCopySizeError.
func NewAccumulatedCopySizeError(l, a int64) *AccumulatedCopySizeError {
return &AccumulatedCopySizeError{limit: l, accumulated: a}
}
// Error implements the error interface.
func (a *AccumulatedCopySizeError) Error() string {
return fmt.Sprintf("Unable to complete the copy, the accumulated size increase of copy is %d, exceeding the limit %d", a.accumulated, a.limit)
}
// ArraySizeError is an error type returned when the array size has exceeded
// the limit.
type ArraySizeError struct {
limit int
size int
}
// NewArraySizeError returns an ArraySizeError.
func NewArraySizeError(l, s int) *ArraySizeError {
return &ArraySizeError{limit: l, size: s}
}
// Error implements the error interface.
func (a *ArraySizeError) Error() string {
return fmt.Sprintf("Unable to create array of size %d, limit is %d", a.size, a.limit)
}

View File

@@ -1,383 +0,0 @@
package jsonpatch
import (
"bytes"
"encoding/json"
"fmt"
"reflect"
)
func merge(cur, patch *lazyNode, mergeMerge bool) *lazyNode {
curDoc, err := cur.intoDoc()
if err != nil {
pruneNulls(patch)
return patch
}
patchDoc, err := patch.intoDoc()
if err != nil {
return patch
}
mergeDocs(curDoc, patchDoc, mergeMerge)
return cur
}
func mergeDocs(doc, patch *partialDoc, mergeMerge bool) {
for k, v := range *patch {
if v == nil {
if mergeMerge {
(*doc)[k] = nil
} else {
delete(*doc, k)
}
} else {
cur, ok := (*doc)[k]
if !ok || cur == nil {
pruneNulls(v)
(*doc)[k] = v
} else {
(*doc)[k] = merge(cur, v, mergeMerge)
}
}
}
}
func pruneNulls(n *lazyNode) {
sub, err := n.intoDoc()
if err == nil {
pruneDocNulls(sub)
} else {
ary, err := n.intoAry()
if err == nil {
pruneAryNulls(ary)
}
}
}
func pruneDocNulls(doc *partialDoc) *partialDoc {
for k, v := range *doc {
if v == nil {
delete(*doc, k)
} else {
pruneNulls(v)
}
}
return doc
}
func pruneAryNulls(ary *partialArray) *partialArray {
newAry := []*lazyNode{}
for _, v := range *ary {
if v != nil {
pruneNulls(v)
newAry = append(newAry, v)
}
}
*ary = newAry
return ary
}
var errBadJSONDoc = fmt.Errorf("Invalid JSON Document")
var errBadJSONPatch = fmt.Errorf("Invalid JSON Patch")
var errBadMergeTypes = fmt.Errorf("Mismatched JSON Documents")
// MergeMergePatches merges two merge patches together, such that
// applying this resulting merged merge patch to a document yields the same
// as merging each merge patch to the document in succession.
func MergeMergePatches(patch1Data, patch2Data []byte) ([]byte, error) {
return doMergePatch(patch1Data, patch2Data, true)
}
// MergePatch merges the patchData into the docData.
func MergePatch(docData, patchData []byte) ([]byte, error) {
return doMergePatch(docData, patchData, false)
}
func doMergePatch(docData, patchData []byte, mergeMerge bool) ([]byte, error) {
doc := &partialDoc{}
docErr := json.Unmarshal(docData, doc)
patch := &partialDoc{}
patchErr := json.Unmarshal(patchData, patch)
if _, ok := docErr.(*json.SyntaxError); ok {
return nil, errBadJSONDoc
}
if _, ok := patchErr.(*json.SyntaxError); ok {
return nil, errBadJSONPatch
}
if docErr == nil && *doc == nil {
return nil, errBadJSONDoc
}
if patchErr == nil && *patch == nil {
return nil, errBadJSONPatch
}
if docErr != nil || patchErr != nil {
// Not an error, just not a doc, so we turn straight into the patch
if patchErr == nil {
if mergeMerge {
doc = patch
} else {
doc = pruneDocNulls(patch)
}
} else {
patchAry := &partialArray{}
patchErr = json.Unmarshal(patchData, patchAry)
if patchErr != nil {
return nil, errBadJSONPatch
}
pruneAryNulls(patchAry)
out, patchErr := json.Marshal(patchAry)
if patchErr != nil {
return nil, errBadJSONPatch
}
return out, nil
}
} else {
mergeDocs(doc, patch, mergeMerge)
}
return json.Marshal(doc)
}
// resemblesJSONArray indicates whether the byte-slice "appears" to be
// a JSON array or not.
// False-positives are possible, as this function does not check the internal
// structure of the array. It only checks that the outer syntax is present and
// correct.
func resemblesJSONArray(input []byte) bool {
input = bytes.TrimSpace(input)
hasPrefix := bytes.HasPrefix(input, []byte("["))
hasSuffix := bytes.HasSuffix(input, []byte("]"))
return hasPrefix && hasSuffix
}
// CreateMergePatch will return a merge patch document capable of converting
// the original document(s) to the modified document(s).
// The parameters can be bytes of either two JSON Documents, or two arrays of
// JSON documents.
// The merge patch returned follows the specification defined at http://tools.ietf.org/html/draft-ietf-appsawg-json-merge-patch-07
func CreateMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalResemblesArray := resemblesJSONArray(originalJSON)
modifiedResemblesArray := resemblesJSONArray(modifiedJSON)
// Do both byte-slices seem like JSON arrays?
if originalResemblesArray && modifiedResemblesArray {
return createArrayMergePatch(originalJSON, modifiedJSON)
}
// Are both byte-slices not arrays? Then they are likely JSON objects...
if !originalResemblesArray && !modifiedResemblesArray {
return createObjectMergePatch(originalJSON, modifiedJSON)
}
// None of the above? Then return an error because of mismatched types.
return nil, errBadMergeTypes
}
// createObjectMergePatch will return a merge-patch document capable of
// converting the original document to the modified document.
func createObjectMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalDoc := map[string]interface{}{}
modifiedDoc := map[string]interface{}{}
err := json.Unmarshal(originalJSON, &originalDoc)
if err != nil {
return nil, errBadJSONDoc
}
err = json.Unmarshal(modifiedJSON, &modifiedDoc)
if err != nil {
return nil, errBadJSONDoc
}
dest, err := getDiff(originalDoc, modifiedDoc)
if err != nil {
return nil, err
}
return json.Marshal(dest)
}
// createArrayMergePatch will return an array of merge-patch documents capable
// of converting the original document to the modified document for each
// pair of JSON documents provided in the arrays.
// Arrays of mismatched sizes will result in an error.
func createArrayMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
originalDocs := []json.RawMessage{}
modifiedDocs := []json.RawMessage{}
err := json.Unmarshal(originalJSON, &originalDocs)
if err != nil {
return nil, errBadJSONDoc
}
err = json.Unmarshal(modifiedJSON, &modifiedDocs)
if err != nil {
return nil, errBadJSONDoc
}
total := len(originalDocs)
if len(modifiedDocs) != total {
return nil, errBadJSONDoc
}
result := []json.RawMessage{}
for i := 0; i < len(originalDocs); i++ {
original := originalDocs[i]
modified := modifiedDocs[i]
patch, err := createObjectMergePatch(original, modified)
if err != nil {
return nil, err
}
result = append(result, json.RawMessage(patch))
}
return json.Marshal(result)
}
// Returns true if the array matches (must be json types).
// As is idiomatic for go, an empty array is not the same as a nil array.
func matchesArray(a, b []interface{}) bool {
if len(a) != len(b) {
return false
}
if (a == nil && b != nil) || (a != nil && b == nil) {
return false
}
for i := range a {
if !matchesValue(a[i], b[i]) {
return false
}
}
return true
}
// Returns true if the values match (must be json types)
// The types of the values must match, otherwise it will always return false
// If two map[string]interface{} are given, all elements must match.
func matchesValue(av, bv interface{}) bool {
if reflect.TypeOf(av) != reflect.TypeOf(bv) {
return false
}
switch at := av.(type) {
case string:
bt := bv.(string)
if bt == at {
return true
}
case float64:
bt := bv.(float64)
if bt == at {
return true
}
case bool:
bt := bv.(bool)
if bt == at {
return true
}
case nil:
// Both nil, fine.
return true
case map[string]interface{}:
bt := bv.(map[string]interface{})
for key := range at {
if !matchesValue(at[key], bt[key]) {
return false
}
}
for key := range bt {
if !matchesValue(at[key], bt[key]) {
return false
}
}
return true
case []interface{}:
bt := bv.([]interface{})
return matchesArray(at, bt)
}
return false
}
// getDiff returns the (recursive) difference between a and b as a map[string]interface{}.
func getDiff(a, b map[string]interface{}) (map[string]interface{}, error) {
into := map[string]interface{}{}
for key, bv := range b {
av, ok := a[key]
// value was added
if !ok {
into[key] = bv
continue
}
// If types have changed, replace completely
if reflect.TypeOf(av) != reflect.TypeOf(bv) {
into[key] = bv
continue
}
// Types are the same, compare values
switch at := av.(type) {
case map[string]interface{}:
bt := bv.(map[string]interface{})
dst := make(map[string]interface{}, len(bt))
dst, err := getDiff(at, bt)
if err != nil {
return nil, err
}
if len(dst) > 0 {
into[key] = dst
}
case string, float64, bool:
if !matchesValue(av, bv) {
into[key] = bv
}
case []interface{}:
bt := bv.([]interface{})
if !matchesArray(at, bt) {
into[key] = bv
}
case nil:
switch bv.(type) {
case nil:
// Both nil, fine.
default:
into[key] = bv
}
default:
panic(fmt.Sprintf("Unknown type:%T in key %s", av, key))
}
}
// Now add all deleted values as nil
for key := range a {
_, found := b[key]
if !found {
into[key] = nil
}
}
return into, nil
}

View File

@@ -1,696 +0,0 @@
package jsonpatch
import (
"bytes"
"encoding/json"
"fmt"
"strconv"
"strings"
)
const (
eRaw = iota
eDoc
eAry
)
var (
// SupportNegativeIndices decides whether to support non-standard practice of
// allowing negative indices to mean indices starting at the end of an array.
// Default to true.
SupportNegativeIndices bool = true
// AccumulatedCopySizeLimit limits the total size increase in bytes caused by
// "copy" operations in a patch.
AccumulatedCopySizeLimit int64 = 0
)
type lazyNode struct {
raw *json.RawMessage
doc partialDoc
ary partialArray
which int
}
type operation map[string]*json.RawMessage
// Patch is an ordered collection of operations.
type Patch []operation
type partialDoc map[string]*lazyNode
type partialArray []*lazyNode
type container interface {
get(key string) (*lazyNode, error)
set(key string, val *lazyNode) error
add(key string, val *lazyNode) error
remove(key string) error
}
func newLazyNode(raw *json.RawMessage) *lazyNode {
return &lazyNode{raw: raw, doc: nil, ary: nil, which: eRaw}
}
func (n *lazyNode) MarshalJSON() ([]byte, error) {
switch n.which {
case eRaw:
return json.Marshal(n.raw)
case eDoc:
return json.Marshal(n.doc)
case eAry:
return json.Marshal(n.ary)
default:
return nil, fmt.Errorf("Unknown type")
}
}
func (n *lazyNode) UnmarshalJSON(data []byte) error {
dest := make(json.RawMessage, len(data))
copy(dest, data)
n.raw = &dest
n.which = eRaw
return nil
}
func deepCopy(src *lazyNode) (*lazyNode, int, error) {
if src == nil {
return nil, 0, nil
}
a, err := src.MarshalJSON()
if err != nil {
return nil, 0, err
}
sz := len(a)
ra := make(json.RawMessage, sz)
copy(ra, a)
return newLazyNode(&ra), sz, nil
}
func (n *lazyNode) intoDoc() (*partialDoc, error) {
if n.which == eDoc {
return &n.doc, nil
}
if n.raw == nil {
return nil, fmt.Errorf("Unable to unmarshal nil pointer as partial document")
}
err := json.Unmarshal(*n.raw, &n.doc)
if err != nil {
return nil, err
}
n.which = eDoc
return &n.doc, nil
}
func (n *lazyNode) intoAry() (*partialArray, error) {
if n.which == eAry {
return &n.ary, nil
}
if n.raw == nil {
return nil, fmt.Errorf("Unable to unmarshal nil pointer as partial array")
}
err := json.Unmarshal(*n.raw, &n.ary)
if err != nil {
return nil, err
}
n.which = eAry
return &n.ary, nil
}
func (n *lazyNode) compact() []byte {
buf := &bytes.Buffer{}
if n.raw == nil {
return nil
}
err := json.Compact(buf, *n.raw)
if err != nil {
return *n.raw
}
return buf.Bytes()
}
func (n *lazyNode) tryDoc() bool {
if n.raw == nil {
return false
}
err := json.Unmarshal(*n.raw, &n.doc)
if err != nil {
return false
}
n.which = eDoc
return true
}
func (n *lazyNode) tryAry() bool {
if n.raw == nil {
return false
}
err := json.Unmarshal(*n.raw, &n.ary)
if err != nil {
return false
}
n.which = eAry
return true
}
func (n *lazyNode) equal(o *lazyNode) bool {
if n.which == eRaw {
if !n.tryDoc() && !n.tryAry() {
if o.which != eRaw {
return false
}
return bytes.Equal(n.compact(), o.compact())
}
}
if n.which == eDoc {
if o.which == eRaw {
if !o.tryDoc() {
return false
}
}
if o.which != eDoc {
return false
}
for k, v := range n.doc {
ov, ok := o.doc[k]
if !ok {
return false
}
if v == nil && ov == nil {
continue
}
if !v.equal(ov) {
return false
}
}
return true
}
if o.which != eAry && !o.tryAry() {
return false
}
if len(n.ary) != len(o.ary) {
return false
}
for idx, val := range n.ary {
if !val.equal(o.ary[idx]) {
return false
}
}
return true
}
func (o operation) kind() string {
if obj, ok := o["op"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) path() string {
if obj, ok := o["path"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) from() string {
if obj, ok := o["from"]; ok && obj != nil {
var op string
err := json.Unmarshal(*obj, &op)
if err != nil {
return "unknown"
}
return op
}
return "unknown"
}
func (o operation) value() *lazyNode {
if obj, ok := o["value"]; ok {
return newLazyNode(obj)
}
return nil
}
func isArray(buf []byte) bool {
Loop:
for _, c := range buf {
switch c {
case ' ':
case '\n':
case '\t':
continue
case '[':
return true
default:
break Loop
}
}
return false
}
func findObject(pd *container, path string) (container, string) {
doc := *pd
split := strings.Split(path, "/")
if len(split) < 2 {
return nil, ""
}
parts := split[1 : len(split)-1]
key := split[len(split)-1]
var err error
for _, part := range parts {
next, ok := doc.get(decodePatchKey(part))
if next == nil || ok != nil {
return nil, ""
}
if isArray(*next.raw) {
doc, err = next.intoAry()
if err != nil {
return nil, ""
}
} else {
doc, err = next.intoDoc()
if err != nil {
return nil, ""
}
}
}
return doc, decodePatchKey(key)
}
func (d *partialDoc) set(key string, val *lazyNode) error {
(*d)[key] = val
return nil
}
func (d *partialDoc) add(key string, val *lazyNode) error {
(*d)[key] = val
return nil
}
func (d *partialDoc) get(key string) (*lazyNode, error) {
return (*d)[key], nil
}
func (d *partialDoc) remove(key string) error {
_, ok := (*d)[key]
if !ok {
return fmt.Errorf("Unable to remove nonexistent key: %s", key)
}
delete(*d, key)
return nil
}
// set should only be used to implement the "replace" operation, so "key" must
// be an already existing index in "d".
func (d *partialArray) set(key string, val *lazyNode) error {
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
(*d)[idx] = val
return nil
}
func (d *partialArray) add(key string, val *lazyNode) error {
if key == "-" {
*d = append(*d, val)
return nil
}
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
sz := len(*d) + 1
ary := make([]*lazyNode, sz)
cur := *d
if idx >= len(ary) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if SupportNegativeIndices {
if idx < -len(ary) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if idx < 0 {
idx += len(ary)
}
}
copy(ary[0:idx], cur[0:idx])
ary[idx] = val
copy(ary[idx+1:], cur[idx:])
*d = ary
return nil
}
func (d *partialArray) get(key string) (*lazyNode, error) {
idx, err := strconv.Atoi(key)
if err != nil {
return nil, err
}
if idx >= len(*d) {
return nil, fmt.Errorf("Unable to access invalid index: %d", idx)
}
return (*d)[idx], nil
}
func (d *partialArray) remove(key string) error {
idx, err := strconv.Atoi(key)
if err != nil {
return err
}
cur := *d
if idx >= len(cur) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if SupportNegativeIndices {
if idx < -len(cur) {
return fmt.Errorf("Unable to access invalid index: %d", idx)
}
if idx < 0 {
idx += len(cur)
}
}
ary := make([]*lazyNode, len(cur)-1)
copy(ary[0:idx], cur[0:idx])
copy(ary[idx:], cur[idx+1:])
*d = ary
return nil
}
func (p Patch) add(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch add operation does not apply: doc is missing path: \"%s\"", path)
}
return con.add(key, op.value())
}
func (p Patch) remove(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch remove operation does not apply: doc is missing path: \"%s\"", path)
}
return con.remove(key)
}
func (p Patch) replace(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch replace operation does not apply: doc is missing path: %s", path)
}
_, ok := con.get(key)
if ok != nil {
return fmt.Errorf("jsonpatch replace operation does not apply: doc is missing key: %s", path)
}
return con.set(key, op.value())
}
func (p Patch) move(doc *container, op operation) error {
from := op.from()
con, key := findObject(doc, from)
if con == nil {
return fmt.Errorf("jsonpatch move operation does not apply: doc is missing from path: %s", from)
}
val, err := con.get(key)
if err != nil {
return err
}
err = con.remove(key)
if err != nil {
return err
}
path := op.path()
con, key = findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch move operation does not apply: doc is missing destination path: %s", path)
}
return con.add(key, val)
}
func (p Patch) test(doc *container, op operation) error {
path := op.path()
con, key := findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch test operation does not apply: is missing path: %s", path)
}
val, err := con.get(key)
if err != nil {
return err
}
if val == nil {
if op.value().raw == nil {
return nil
}
return fmt.Errorf("Testing value %s failed", path)
} else if op.value() == nil {
return fmt.Errorf("Testing value %s failed", path)
}
if val.equal(op.value()) {
return nil
}
return fmt.Errorf("Testing value %s failed", path)
}
func (p Patch) copy(doc *container, op operation, accumulatedCopySize *int64) error {
from := op.from()
con, key := findObject(doc, from)
if con == nil {
return fmt.Errorf("jsonpatch copy operation does not apply: doc is missing from path: %s", from)
}
val, err := con.get(key)
if err != nil {
return err
}
path := op.path()
con, key = findObject(doc, path)
if con == nil {
return fmt.Errorf("jsonpatch copy operation does not apply: doc is missing destination path: %s", path)
}
valCopy, sz, err := deepCopy(val)
if err != nil {
return err
}
(*accumulatedCopySize) += int64(sz)
if AccumulatedCopySizeLimit > 0 && *accumulatedCopySize > AccumulatedCopySizeLimit {
return NewAccumulatedCopySizeError(AccumulatedCopySizeLimit, *accumulatedCopySize)
}
return con.add(key, valCopy)
}
// Equal indicates if 2 JSON documents have the same structural equality.
func Equal(a, b []byte) bool {
ra := make(json.RawMessage, len(a))
copy(ra, a)
la := newLazyNode(&ra)
rb := make(json.RawMessage, len(b))
copy(rb, b)
lb := newLazyNode(&rb)
return la.equal(lb)
}
// DecodePatch decodes the passed JSON document as an RFC 6902 patch.
func DecodePatch(buf []byte) (Patch, error) {
var p Patch
err := json.Unmarshal(buf, &p)
if err != nil {
return nil, err
}
return p, nil
}
// Apply mutates a JSON document according to the patch, and returns the new
// document.
func (p Patch) Apply(doc []byte) ([]byte, error) {
return p.ApplyIndent(doc, "")
}
// ApplyIndent mutates a JSON document according to the patch, and returns the new
// document indented.
func (p Patch) ApplyIndent(doc []byte, indent string) ([]byte, error) {
var pd container
if doc[0] == '[' {
pd = &partialArray{}
} else {
pd = &partialDoc{}
}
err := json.Unmarshal(doc, pd)
if err != nil {
return nil, err
}
err = nil
var accumulatedCopySize int64
for _, op := range p {
switch op.kind() {
case "add":
err = p.add(&pd, op)
case "remove":
err = p.remove(&pd, op)
case "replace":
err = p.replace(&pd, op)
case "move":
err = p.move(&pd, op)
case "test":
err = p.test(&pd, op)
case "copy":
err = p.copy(&pd, op, &accumulatedCopySize)
default:
err = fmt.Errorf("Unexpected kind: %s", op.kind())
}
if err != nil {
return nil, err
}
}
if indent != "" {
return json.MarshalIndent(pd, "", indent)
}
return json.Marshal(pd)
}
// From http://tools.ietf.org/html/rfc6901#section-4 :
//
// Evaluation of each reference token begins by decoding any escaped
// character sequence. This is performed by first transforming any
// occurrence of the sequence '~1' to '/', and then transforming any
// occurrence of the sequence '~0' to '~'.
var (
rfc6901Decoder = strings.NewReplacer("~1", "/", "~0", "~")
)
func decodePatchKey(k string) string {
return rfc6901Decoder.Replace(k)
}

vendor/github.com/ghodss/yaml/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,7 @@
language: go
go:
- 1.3
- 1.4
script:
- go test
- go build

View File

@@ -4,13 +4,13 @@
## Introduction
A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs.
A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs.
In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/).
## Compatibility
This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility).
This package uses [go-yaml v2](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility).
## Caveats
@@ -44,8 +44,6 @@ import "github.com/ghodss/yaml"
Usage is very similar to the JSON library:
```go
package main
import (
"fmt"
@@ -53,8 +51,8 @@ import (
)
type Person struct {
Name string `json:"name"` // Affects YAML field names too.
Age int `json:"age"`
Name string `json:"name"` // Affects YAML field names too.
Age int `json:"name"`
}
func main() {
@@ -67,13 +65,13 @@ func main() {
}
fmt.Println(string(y))
/* Output:
age: 30
name: John
age: 30
*/
// Unmarshal the YAML back into a Person struct.
var p2 Person
err = yaml.Unmarshal(y, &p2)
err := yaml.Unmarshal(y, &p2)
if err != nil {
fmt.Printf("err: %v\n", err)
return
@@ -88,14 +86,11 @@ func main() {
`yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available:
```go
package main
import (
"fmt"
"github.com/ghodss/yaml"
)
func main() {
j := []byte(`{"name": "John", "age": 30}`)
y, err := yaml.JSONToYAML(j)

View File

@@ -1,7 +1,6 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package yaml
import (
@@ -46,11 +45,7 @@ func indirect(v reflect.Value, decodingNull bool) (json.Unmarshaler, encoding.Te
break
}
if v.IsNil() {
if v.CanSet() {
v.Set(reflect.New(v.Type().Elem()))
} else {
v = reflect.New(v.Type().Elem())
}
v.Set(reflect.New(v.Type().Elem()))
}
if v.Type().NumMethod() > 0 {
if u, ok := v.Interface().(json.Unmarshaler); ok {

View File

@@ -4,58 +4,37 @@ import (
"bytes"
"encoding/json"
"fmt"
"io"
"reflect"
"strconv"
"gopkg.in/yaml.v2"
)
// Marshal marshals the object into JSON then converts JSON to YAML and returns the
// Marshals the object into JSON then converts JSON to YAML and returns the
// YAML.
func Marshal(o interface{}) ([]byte, error) {
j, err := json.Marshal(o)
if err != nil {
return nil, fmt.Errorf("error marshaling into JSON: %v", err)
return nil, fmt.Errorf("error marshaling into JSON: ", err)
}
y, err := JSONToYAML(j)
if err != nil {
return nil, fmt.Errorf("error converting JSON to YAML: %v", err)
return nil, fmt.Errorf("error converting JSON to YAML: ", err)
}
return y, nil
}
// JSONOpt is a decoding option for decoding from JSON format.
type JSONOpt func(*json.Decoder) *json.Decoder
// Unmarshal converts YAML to JSON then uses JSON to unmarshal into an object,
// optionally configuring the behavior of the JSON unmarshal.
func Unmarshal(y []byte, o interface{}, opts ...JSONOpt) error {
return yamlUnmarshal(y, o, false, opts...)
}
// UnmarshalStrict strictly converts YAML to JSON then uses JSON to unmarshal
// into an object, optionally configuring the behavior of the JSON unmarshal.
func UnmarshalStrict(y []byte, o interface{}, opts ...JSONOpt) error {
return yamlUnmarshal(y, o, true, append(opts, DisallowUnknownFields)...)
}
// yamlUnmarshal unmarshals the given YAML byte stream into the given interface,
// optionally performing the unmarshalling strictly
func yamlUnmarshal(y []byte, o interface{}, strict bool, opts ...JSONOpt) error {
// Converts YAML to JSON then uses JSON to unmarshal into an object.
func Unmarshal(y []byte, o interface{}) error {
vo := reflect.ValueOf(o)
unmarshalFn := yaml.Unmarshal
if strict {
unmarshalFn = yaml.UnmarshalStrict
}
j, err := yamlToJSON(y, &vo, unmarshalFn)
j, err := yamlToJSON(y, &vo)
if err != nil {
return fmt.Errorf("error converting YAML to JSON: %v", err)
}
err = jsonUnmarshal(bytes.NewReader(j), o, opts...)
err = json.Unmarshal(j, o)
if err != nil {
return fmt.Errorf("error unmarshaling JSON: %v", err)
}
@@ -63,28 +42,13 @@ func yamlUnmarshal(y []byte, o interface{}, strict bool, opts ...JSONOpt) error
return nil
}
// jsonUnmarshal unmarshals the JSON byte stream from the given reader into the
// object, optionally applying decoder options prior to decoding. We are not
// using json.Unmarshal directly as we want the chance to pass in non-default
// options.
func jsonUnmarshal(r io.Reader, o interface{}, opts ...JSONOpt) error {
d := json.NewDecoder(r)
for _, opt := range opts {
d = opt(d)
}
if err := d.Decode(&o); err != nil {
return fmt.Errorf("while decoding JSON: %v", err)
}
return nil
}
// JSONToYAML Converts JSON to YAML.
// Convert JSON to YAML.
func JSONToYAML(j []byte) ([]byte, error) {
// Convert the JSON to an object.
var jsonObj interface{}
// We are using yaml.Unmarshal here (instead of json.Unmarshal) because the
// Go JSON library doesn't try to pick the right number type (int, float,
// etc.) when unmarshalling to interface{}, it just picks float64
// etc.) when unmarshling to interface{}, it just picks float64
// universally. go-yaml does go through the effort of picking the right
// number type, so we can preserve number type throughout this process.
err := yaml.Unmarshal(j, &jsonObj)
@@ -96,8 +60,8 @@ func JSONToYAML(j []byte) ([]byte, error) {
return yaml.Marshal(jsonObj)
}
// YAMLToJSON converts YAML to JSON. Since JSON is a subset of YAML,
// passing JSON through this method should be a no-op.
// Convert YAML to JSON. Since JSON is a subset of YAML, passing JSON through
// this method should be a no-op.
//
// Things YAML can do that are not supported by JSON:
// * In YAML you can have binary and null keys in your maps. These are invalid
@@ -106,22 +70,14 @@ func JSONToYAML(j []byte) ([]byte, error) {
// use binary data with this library, encode the data as base64 as usual but do
// not use the !!binary tag in your YAML. This will ensure the original base64
// encoded data makes it all the way through to the JSON.
//
// For strict decoding of YAML, use YAMLToJSONStrict.
func YAMLToJSON(y []byte) ([]byte, error) {
return yamlToJSON(y, nil, yaml.Unmarshal)
return yamlToJSON(y, nil)
}
// YAMLToJSONStrict is like YAMLToJSON but enables strict YAML decoding,
// returning an error on any duplicate field names.
func YAMLToJSONStrict(y []byte) ([]byte, error) {
return yamlToJSON(y, nil, yaml.UnmarshalStrict)
}
func yamlToJSON(y []byte, jsonTarget *reflect.Value, yamlUnmarshal func([]byte, interface{}) error) ([]byte, error) {
func yamlToJSON(y []byte, jsonTarget *reflect.Value) ([]byte, error) {
// Convert the YAML to an object.
var yamlObj interface{}
err := yamlUnmarshal(y, &yamlObj)
err := yaml.Unmarshal(y, &yamlObj)
if err != nil {
return nil, err
}
@@ -316,4 +272,6 @@ func convertToJSONableObject(yamlObj interface{}, jsonTarget *reflect.Value) (in
}
return yamlObj, nil
}
return nil, nil
}

View File

@@ -10,6 +10,5 @@
# Please keep the list sorted.
Sendgrid, Inc
Vastech SA (PTY) LTD
Walter Schulze <awalterschulze@gmail.com>

View File

@@ -1,5 +1,4 @@
Anton Povarov <anton.povarov@gmail.com>
Brian Goff <cpuguy83@gmail.com>
Clayton Coleman <ccoleman@redhat.com>
Denis Smirnov <denis.smirnov.91@gmail.com>
DongYun Kang <ceram1000@gmail.com>
@@ -11,12 +10,9 @@ John Shahid <jvshahid@gmail.com>
John Tuley <john@tuley.org>
Laurent <laurent@adyoulike.com>
Patrick Lee <patrick@dropbox.com>
Roger Johansson <rogeralsing@gmail.com>
Sam Nguyen <sam.nguyen@sendgrid.com>
Sergio Arbeo <serabe@gmail.com>
Stephen J Day <stephen.day@docker.com>
Tamir Duberstein <tamird@gmail.com>
Todd Eisenberger <teisenberger@dropbox.com>
Tormod Erevik Lea <tormodlea@gmail.com>
Vyacheslav Kim <kane@sendgrid.com>
Walter Schulze <awalterschulze@gmail.com>

View File

@@ -174,11 +174,11 @@ func sizeFixed32(x uint64) int {
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint((x << 1) ^ uint64((int64(x) >> 63)))
return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func sizeZigzag64(x uint64) int {
return sizeVarint((x << 1) ^ uint64((int64(x) >> 63)))
return sizeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
// EncodeZigzag32 writes a zigzag-encoded 32-bit integer

View File

@@ -73,6 +73,7 @@ for a protocol buffer variable v:
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Getters are only generated for message and oneof fields.
- Enum types do not get an Enum method.
The simplest way to describe this is to see an example.

View File

@@ -193,7 +193,6 @@ type Properties struct {
Default string // default value
HasDefault bool // whether an explicit default was provided
CustomType string
CastType string
StdTime bool
StdDuration bool
@@ -342,8 +341,6 @@ func (p *Properties) Parse(s string) {
p.OrigName = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "customtype="):
p.CustomType = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "casttype="):
p.CastType = strings.Split(f, "=")[1]
case f == "stdtime":
p.StdTime = true
case f == "stdduration":

View File

@@ -522,17 +522,6 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
}
return nil
}
} else if len(props.CastType) > 0 {
if _, ok := v.Interface().(interface {
String() string
}); ok {
switch v.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
_, err := fmt.Fprintf(w, "%d", v.Interface())
return err
}
}
} else if props.StdTime {
t, ok := v.Interface().(time.Time)
if !ok {
@@ -542,9 +531,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
if err != nil {
return err
}
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), &propsCopy)
props.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), props)
props.StdTime = true
return err
} else if props.StdDuration {
d, ok := v.Interface().(time.Duration)
@@ -552,9 +541,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
return fmt.Errorf("stdtime is not time.Duration, but %T", v.Interface())
}
dproto := durationProto(d)
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), &propsCopy)
props.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), props)
props.StdDuration = true
return err
}
}

View File

@@ -983,7 +983,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(x)
fv.SetUint(uint64(x))
return nil
}
case reflect.Uint64:

vendor/github.com/golang/glog/README generated vendored Normal file
View File

@@ -0,0 +1,44 @@
glog
====
Leveled execution logs for Go.
This is an efficient pure Go implementation of leveled logs in the
manner of the open source C++ package
http://code.google.com/p/google-glog
By binding methods to booleans it is possible to use the log package
without paying the expense of evaluating the arguments to the log.
Through the -vmodule flag, the package also provides fine-grained
control over logging at the file level.
The comment from glog.go introduces the ideas:
Package glog implements logging analogous to the Google-internal
C++ INFO/ERROR/V setup. It provides functions Info, Warning,
Error, Fatal, plus formatting variants such as Infof. It
also provides V-style logging controlled by the -v and
-vmodule=file=2 flags.
Basic examples:
glog.Info("Prepare to repel boarders")
glog.Fatalf("Initialization failed: %s", err)
See the documentation for the V function for an explanation
of these examples:
if glog.V(2) {
glog.Info("Starting transaction...")
}
glog.V(2).Infoln("Processed", nItems, "elements")
The repository contains an open source version of the log package
used inside Google. The master copy of the source lives inside
Google, not here. The code in this repo is for export only and is not itself
under development. Feature requests will be ignored.
Send bug reports to golang-nuts@googlegroups.com.
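A minimal program using the package looks like the following (flag.Parse is
called first so that -logtostderr, -v and the other flags registered by the
package take effect; this sketch is illustrative and not part of the upstream
README):

    package main

    import (
        "flag"

        "github.com/golang/glog"
    )

    func main() {
        flag.Parse()
        defer glog.Flush()

        glog.Info("Prepare to repel boarders")
        if glog.V(2) {
            glog.Info("Starting transaction...")
        }
    }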

View File

@@ -14,7 +14,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package klog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup.
// Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup.
// It provides functions Info, Warning, Error, Fatal, plus formatting variants such as
// Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags.
//
@@ -68,7 +68,7 @@
// -vmodule=gopher*=3
// sets the V level to 3 in all Go files whose names begin "gopher".
//
package klog
package glog
import (
"bufio"
@@ -396,6 +396,13 @@ type flushSyncWriter interface {
}
func init() {
flag.BoolVar(&logging.toStderr, "logtostderr", false, "log to standard error instead of files")
flag.BoolVar(&logging.alsoToStderr, "alsologtostderr", false, "log to standard error as well as files")
flag.Var(&logging.verbosity, "v", "log level for V logs")
flag.Var(&logging.stderrThreshold, "stderrthreshold", "logs at or above this threshold go to stderr")
flag.Var(&logging.vmodule, "vmodule", "comma-separated list of pattern=N settings for file-filtered logging")
flag.Var(&logging.traceLocation, "log_backtrace_at", "when logging hits line file:N, emit a stack trace")
// Default stderrThreshold is ERROR.
logging.stderrThreshold = errorLog
@@ -403,22 +410,6 @@ func init() {
go logging.flushDaemon()
}
// InitFlags is for explicitly initializing the flags
func InitFlags(flagset *flag.FlagSet) {
if flagset == nil {
flagset = flag.CommandLine
}
flagset.StringVar(&logging.logDir, "log_dir", "", "If non-empty, write log files in this directory")
flagset.StringVar(&logging.logFile, "log_file", "", "If non-empty, use this log file")
flagset.BoolVar(&logging.toStderr, "logtostderr", true, "log to standard error instead of files")
flagset.BoolVar(&logging.alsoToStderr, "alsologtostderr", false, "log to standard error as well as files")
flagset.Var(&logging.verbosity, "v", "number for the log level verbosity")
flagset.BoolVar(&logging.skipHeaders, "skip_headers", false, "If true, avoid header prefixes in the log messages")
flagset.Var(&logging.stderrThreshold, "stderrthreshold", "logs at or above this threshold go to stderr")
flagset.Var(&logging.vmodule, "vmodule", "comma-separated list of pattern=N settings for file-filtered logging")
flagset.Var(&logging.traceLocation, "log_backtrace_at", "when logging hits line file:N, emit a stack trace")
}
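
For context on the InitFlags entry point shown above, a minimal sketch of how a klog-based program would call it (assuming the k8s.io/klog import path; the flag values are illustrative):

package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Register klog's flags (-v, -logtostderr, -log_file, ...) on an
	// explicit FlagSet instead of flag.CommandLine.
	fs := flag.NewFlagSet("example", flag.ExitOnError)
	klog.InitFlags(fs)
	// With ExitOnError, Parse exits on bad input, so the error can be ignored.
	_ = fs.Parse([]string{"-v=2", "-logtostderr=true"})
	defer klog.Flush()

	klog.Info("klog flags initialized explicitly")
}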
// Flush flushes all pending log I/O.
func Flush() {
logging.lockAndFlushAll()
@@ -462,17 +453,6 @@ type loggingT struct {
// safely using atomic.LoadInt32.
vmodule moduleSpec // The state of the -vmodule flag.
verbosity Level // V logging level, the value of the -v flag.

// If non-empty, overrides the choice of directory in which to write logs.
// See createLogDirs for the full list of possible destinations.
logDir string
// If non-empty, specifies the path of the file to write logs. mutually exclusive
// with the log-dir option.
logFile string
// If true, do not add the prefix headers, useful when used with SetOutput
skipHeaders bool
}
// buffer holds a byte Buffer for reuse. The zero value is ready for use.
@@ -576,9 +556,6 @@ func (l *loggingT) formatHeader(s severity, file string, line int) *buffer {
s = infoLog // for safety.
}
buf := l.getBuffer()
if l.skipHeaders {
return buf
}
// Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand.
// It's worth about 3X. Fprintf is hard.
@@ -690,45 +667,6 @@ func (l *loggingT) printWithFileLine(s severity, file string, line int, alsoToSt
l.output(s, buf, file, line, alsoToStderr)
}
// redirectBuffer is used to set an alternate destination for the logs
type redirectBuffer struct {
w io.Writer
}
func (rb *redirectBuffer) Sync() error {
return nil
}
func (rb *redirectBuffer) Flush() error {
return nil
}
func (rb *redirectBuffer) Write(bytes []byte) (n int, err error) {
return rb.w.Write(bytes)
}
// SetOutput sets the output destination for all severities
func SetOutput(w io.Writer) {
for s := fatalLog; s >= infoLog; s-- {
rb := &redirectBuffer{
w: w,
}
logging.file[s] = rb
}
}
// SetOutputBySeverity sets the output destination for specific severity
func SetOutputBySeverity(name string, w io.Writer) {
sev, ok := severityByName(name)
if !ok {
panic(fmt.Sprintf("SetOutputBySeverity(%q): unrecognized severity name", name))
}
rb := &redirectBuffer{
w: w,
}
logging.file[sev] = rb
}
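
The SetOutput and SetOutputBySeverity helpers above redirect log output to arbitrary writers. A minimal sketch of their intended use, assuming the k8s.io/klog import path; note that these destinations are only consulted when -logtostderr is disabled:

package main

import (
	"bytes"
	"os"

	"k8s.io/klog"
)

func main() {
	// Send all severities to stderr instead of the default log files.
	klog.SetOutput(os.Stderr)

	// Or capture only the ERROR stream into a buffer.
	var errBuf bytes.Buffer
	klog.SetOutputBySeverity("ERROR", &errBuf)

	klog.Error("this line also lands in errBuf")
	klog.Flush()
}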
// output writes the data to the log files and releases the buffer.
func (l *loggingT) output(s severity, buf *buffer, file string, line int, alsoToStderr bool) {
l.mu.Lock()
@@ -872,7 +810,7 @@ func (sb *syncBuffer) Sync() error {
func (sb *syncBuffer) Write(p []byte) (n int, err error) {
if sb.nbytes+uint64(len(p)) >= MaxSize {
if err := sb.rotateFile(time.Now(), false); err != nil {
if err := sb.rotateFile(time.Now()); err != nil {
sb.logger.exit(err)
}
}
@@ -885,15 +823,13 @@ func (sb *syncBuffer) Write(p []byte) (n int, err error) {
}
// rotateFile closes the syncBuffer's file and starts a new one.
// The startup argument indicates whether this is the initial startup of klog.
// If startup is true, existing files are opened for appending instead of truncated.
func (sb *syncBuffer) rotateFile(now time.Time, startup bool) error {
func (sb *syncBuffer) rotateFile(now time.Time) error {
if sb.file != nil {
sb.Flush()
sb.file.Close()
}
var err error
sb.file, _, err = create(severityName[sb.sev], now, startup)
sb.file, _, err = create(severityName[sb.sev], now)
sb.nbytes = 0
if err != nil {
return err
@@ -928,7 +864,7 @@ func (l *loggingT) createFiles(sev severity) error {
logger: l,
sev: s,
}
if err := sb.rotateFile(now, true); err != nil {
if err := sb.rotateFile(now); err != nil {
return err
}
l.file[s] = sb
@@ -936,11 +872,11 @@ func (l *loggingT) createFiles(sev severity) error {
return nil
}
const flushInterval = 5 * time.Second
const flushInterval = 30 * time.Second
// flushDaemon periodically flushes the log file buffers.
func (l *loggingT) flushDaemon() {
for range time.NewTicker(flushInterval).C {
for _ = range time.NewTicker(flushInterval).C {
l.lockAndFlushAll()
}
}

View File

@@ -16,10 +16,11 @@
// File I/O for logs.
package klog
package glog
import (
"errors"
"flag"
"fmt"
"os"
"os/user"
@@ -35,9 +36,13 @@ var MaxSize uint64 = 1024 * 1024 * 1800
// logDirs lists the candidate directories for new log files.
var logDirs []string
// If non-empty, overrides the choice of directory in which to write logs.
// See createLogDirs for the full list of possible destinations.
var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")
func createLogDirs() {
if logging.logDir != "" {
logDirs = append(logDirs, logging.logDir)
if *logDir != "" {
logDirs = append(logDirs, *logDir)
}
logDirs = append(logDirs, os.TempDir())
}
@@ -97,16 +102,7 @@ var onceLogDirs sync.Once
// contains tag ("INFO", "FATAL", etc.) and t. If the file is created
// successfully, create also attempts to update the symlink for that tag, ignoring
// errors.
// The startup argument indicates whether this is the initial startup of klog.
// If startup is true, existing files are opened for appending instead of truncated.
func create(tag string, t time.Time, startup bool) (f *os.File, filename string, err error) {
if logging.logFile != "" {
f, err := openOrCreate(logging.logFile, startup)
if err == nil {
return f, logging.logFile, nil
}
return nil, "", fmt.Errorf("log: unable to create log: %v", err)
}
func create(tag string, t time.Time) (f *os.File, filename string, err error) {
onceLogDirs.Do(createLogDirs)
if len(logDirs) == 0 {
return nil, "", errors.New("log: no log dirs")
@@ -115,7 +111,7 @@ func create(tag string, t time.Time, startup bool) (f *os.File, filename string,
var lastErr error
for _, dir := range logDirs {
fname := filepath.Join(dir, name)
f, err := openOrCreate(fname, startup)
f, err := os.Create(fname)
if err == nil {
symlink := filepath.Join(dir, link)
os.Remove(symlink) // ignore err
@@ -126,14 +122,3 @@ func create(tag string, t time.Time, startup bool) (f *os.File, filename string,
}
return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
}
// The startup argument indicates whether this is the initial startup of klog.
// If startup is true, existing files are opened for appending instead of truncated.
func openOrCreate(name string, startup bool) (*os.File, error) {
if startup {
f, err := os.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
return f, err
}
f, err := os.Create(name)
return f, err
}

vendor/github.com/golang/protobuf/proto/Makefile generated vendored Normal file
View File

@@ -0,0 +1,43 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
install:
go install
test: install generate-test-pbs
go test
generate-test-pbs:
make install
make -C testdata
protoc --go_out=Mtestdata/test.proto=github.com/golang/protobuf/proto/testdata,Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any:. proto3_proto/proto3.proto
make

View File

@@ -35,39 +35,22 @@
package proto
import (
"fmt"
"log"
"reflect"
"strings"
)
// Clone returns a deep copy of a protocol buffer.
func Clone(src Message) Message {
in := reflect.ValueOf(src)
func Clone(pb Message) Message {
in := reflect.ValueOf(pb)
if in.IsNil() {
return src
return pb
}
out := reflect.New(in.Type().Elem())
dst := out.Interface().(Message)
Merge(dst, src)
return dst
}
// Merger is the interface representing objects that can merge messages of the same type.
type Merger interface {
// Merge merges src into this message.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
//
// Merge may panic if called with a different argument type than the receiver.
Merge(src Message)
}
// generatedMerger is the custom merge method that generated protos will have.
// We must add this method since a generated Merge method will conflict with
// many existing protos that have a Merge data field already defined.
type generatedMerger interface {
XXX_Merge(src Message)
// out is empty so a merge is a deep copy.
mergeStruct(out.Elem(), in.Elem())
return out.Interface().(Message)
}
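
As a usage sketch for Clone, assuming the github.com/golang/protobuf/proto import and a hypothetical protoc-generated message type pb.Example with a Name field:

// cloneExample shows the Clone idiom; pb.Example is a hypothetical
// generated type and is not part of this diff.
func cloneExample(orig *pb.Example) *pb.Example {
	dup := proto.Clone(orig).(*pb.Example) // deep copy, type-asserted back
	dup.Name = proto.String("changed")     // does not affect orig
	return dup
}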
// Merge merges src into dst.
@@ -75,24 +58,17 @@ type generatedMerger interface {
// Elements of repeated fields will be appended.
// Merge panics if src and dst are not the same type, or if dst is nil.
func Merge(dst, src Message) {
if m, ok := dst.(Merger); ok {
m.Merge(src)
return
}
in := reflect.ValueOf(src)
out := reflect.ValueOf(dst)
if out.IsNil() {
panic("proto: nil destination")
}
if in.Type() != out.Type() {
panic(fmt.Sprintf("proto.Merge(%T, %T) type mismatch", dst, src))
// Explicit test prior to mergeStruct so that mistyped nils will fail
panic("proto: type mismatch")
}
if in.IsNil() {
return // Merge from nil src is a noop
}
if m, ok := dst.(generatedMerger); ok {
m.XXX_Merge(src)
// Merging nil into non-nil is a quiet no-op
return
}
mergeStruct(out.Elem(), in.Elem())
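
And the matching sketch for Merge, with the same hypothetical pb.Example type; scalar fields set in src overwrite dst, repeated fields are appended:

// mergeExample merges src into dst using the semantics documented above.
// Merge panics if dst is nil or if src and dst are different concrete types.
func mergeExample(dst, src *pb.Example) {
	proto.Merge(dst, src)
}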
@@ -108,7 +84,7 @@ func mergeStruct(out, in reflect.Value) {
mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
}
if emIn, err := extendable(in.Addr().Interface()); err == nil {
if emIn, ok := extendable(in.Addr().Interface()); ok {
emOut, _ := extendable(out.Addr().Interface())
mIn, muIn := emIn.extensionsRead()
if mIn != nil {

View File

@@ -39,6 +39,8 @@ import (
"errors"
"fmt"
"io"
"os"
"reflect"
)
// errOverflow is returned when an integer is too large to be represented.
@@ -48,6 +50,10 @@ var errOverflow = errors.New("proto: integer overflow")
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
// The fundamental decoders that interpret bytes on the wire.
// Those that take integer types all return uint64 and are
// therefore of type valueDecoder.
// DecodeVarint reads a varint-encoded integer from the slice.
// It returns the integer and the number of bytes consumed, or
// zero if there is not enough.
@@ -261,6 +267,9 @@ func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
return
}
// These are not ValueDecoders: they produce an array of bytes or a string.
// bytes, embedded messages
// DecodeRawBytes reads a count-delimited byte buffer from the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
@@ -302,27 +311,79 @@ func (p *Buffer) DecodeStringBytes() (s string, err error) {
return string(buf), nil
}
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
// Unmarshal implementations should not clear the receiver.
// Any unmarshaled data should be merged into the receiver.
// Callers of Unmarshal that do not want to retain existing data
// should Reset the receiver before calling Unmarshal.
type Unmarshaler interface {
Unmarshal([]byte) error
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
// If the protocol buffer has extensions, and the field matches, add it as an extension.
// Otherwise, if the XXX_unrecognized field exists, append the skipped data there.
func (o *Buffer) skipAndSave(t reflect.Type, tag, wire int, base structPointer, unrecField field) error {
oi := o.index
err := o.skip(t, tag, wire)
if err != nil {
return err
}
if !unrecField.IsValid() {
return nil
}
ptr := structPointer_Bytes(base, unrecField)
// Add the skipped field to struct field
obuf := o.buf
o.buf = *ptr
o.EncodeVarint(uint64(tag<<3 | wire))
*ptr = append(o.buf, obuf[oi:o.index]...)
o.buf = obuf
return nil
}
// newUnmarshaler is the interface representing objects that can
// unmarshal themselves. The semantics are identical to Unmarshaler.
//
// This exists to support protoc-gen-go generated messages.
// The proto package will stop type-asserting to this interface in the future.
//
// DO NOT DEPEND ON THIS.
type newUnmarshaler interface {
XXX_Unmarshal([]byte) error
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
func (o *Buffer) skip(t reflect.Type, tag, wire int) error {
var u uint64
var err error
switch wire {
case WireVarint:
_, err = o.DecodeVarint()
case WireFixed64:
_, err = o.DecodeFixed64()
case WireBytes:
_, err = o.DecodeRawBytes(false)
case WireFixed32:
_, err = o.DecodeFixed32()
case WireStartGroup:
for {
u, err = o.DecodeVarint()
if err != nil {
break
}
fwire := int(u & 0x7)
if fwire == WireEndGroup {
break
}
ftag := int(u >> 3)
err = o.skip(t, ftag, fwire)
if err != nil {
break
}
}
default:
err = fmt.Errorf("proto: can't skip unknown wire type %d for %s", wire, t)
}
return err
}
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The method should reset the receiver before
// decoding starts. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
type Unmarshaler interface {
Unmarshal([]byte) error
}
// Unmarshal parses the protocol buffer representation in buf and places the
@@ -334,13 +395,7 @@ type newUnmarshaler interface {
// to preserve and append to existing data.
func Unmarshal(buf []byte, pb Message) error {
pb.Reset()
if u, ok := pb.(newUnmarshaler); ok {
return u.XXX_Unmarshal(buf)
}
if u, ok := pb.(Unmarshaler); ok {
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
return UnmarshalMerge(buf, pb)
}
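
From the caller's side, the only difference between Unmarshal and UnmarshalMerge is whether existing data in the target is kept; a sketch with the hypothetical pb.Example type (proto is github.com/golang/protobuf/proto):

func decodeTwice(first, second []byte) (*pb.Example, error) {
	msg := &pb.Example{}

	// Unmarshal resets msg before decoding.
	if err := proto.Unmarshal(first, msg); err != nil {
		return nil, err
	}
	// UnmarshalMerge keeps what is already in msg and merges the new data in.
	if err := proto.UnmarshalMerge(second, msg); err != nil {
		return nil, err
	}
	return msg, nil
}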
// UnmarshalMerge parses the protocol buffer representation in buf and
@@ -350,16 +405,8 @@ func Unmarshal(buf []byte, pb Message) error {
// UnmarshalMerge merges into existing data in pb.
// Most code should use Unmarshal instead.
func UnmarshalMerge(buf []byte, pb Message) error {
if u, ok := pb.(newUnmarshaler); ok {
return u.XXX_Unmarshal(buf)
}
// If the object can unmarshal itself, let it.
if u, ok := pb.(Unmarshaler); ok {
// NOTE: The history of proto has unfortunately been inconsistent about
// whether Unmarshaler should or should not implicitly clear itself.
// Some implementations do, most do not.
// Thus, calling this here may or may not do what people want.
//
// See https://github.com/golang/protobuf/issues/424
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
@@ -375,17 +422,12 @@ func (p *Buffer) DecodeMessage(pb Message) error {
}
// DecodeGroup reads a tag-delimited group from the Buffer.
// StartGroup tag is already consumed. This function consumes
// EndGroup tag.
func (p *Buffer) DecodeGroup(pb Message) error {
b := p.buf[p.index:]
x, y := findEndGroup(b)
if x < 0 {
return io.ErrUnexpectedEOF
typ, base, err := getbase(pb)
if err != nil {
return err
}
err := Unmarshal(b[:x], pb)
p.index += y
return err
return p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), true, base)
}
// Unmarshal parses the protocol buffer representation in the
@@ -396,33 +438,533 @@ func (p *Buffer) DecodeGroup(pb Message) error {
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
func (p *Buffer) Unmarshal(pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(newUnmarshaler); ok {
err := u.XXX_Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
if u, ok := pb.(Unmarshaler); ok {
// NOTE: The history of proto has unfortunately been inconsistent about
// whether Unmarshaler should or should not implicitly clear itself.
// Some implementations do, most do not.
// Thus, calling this here may or may not do what people want.
//
// See https://github.com/golang/protobuf/issues/424
err := u.Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
// Slow workaround for messages that aren't Unmarshalers.
// This includes some hand-coded .pb.go files and
// bootstrap protos.
// TODO: fix all of those and then add Unmarshal to
// the Message interface. Then:
// The cast above and code below can be deleted.
// The old unmarshaler can be deleted.
// Clients can call Unmarshal directly (can already do that, actually).
var info InternalMessageInfo
err := info.Unmarshal(pb, p.buf[p.index:])
p.index = len(p.buf)
typ, base, err := getbase(pb)
if err != nil {
return err
}
err = p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), false, base)
if collectStats {
stats.Decode++
}
return err
}
// unmarshalType does the work of unmarshaling a structure.
func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group bool, base structPointer) error {
var state errorState
required, reqFields := prop.reqCount, uint64(0)
var err error
for err == nil && o.index < len(o.buf) {
oi := o.index
var u uint64
u, err = o.DecodeVarint()
if err != nil {
break
}
wire := int(u & 0x7)
if wire == WireEndGroup {
if is_group {
if required > 0 {
// Not enough information to determine the exact field.
// (See below.)
return &RequiredNotSetError{"{Unknown}"}
}
return nil // input is satisfied
}
return fmt.Errorf("proto: %s: wiretype end group for non-group", st)
}
tag := int(u >> 3)
if tag <= 0 {
return fmt.Errorf("proto: %s: illegal tag %d (wire type %d)", st, tag, wire)
}
fieldnum, ok := prop.decoderTags.get(tag)
if !ok {
// Maybe it's an extension?
if prop.extendable {
if e, _ := extendable(structPointer_Interface(base, st)); isExtensionField(e, int32(tag)) {
if err = o.skip(st, tag, wire); err == nil {
extmap := e.extensionsWrite()
ext := extmap[int32(tag)] // may be missing
ext.enc = append(ext.enc, o.buf[oi:o.index]...)
extmap[int32(tag)] = ext
}
continue
}
}
// Maybe it's a oneof?
if prop.oneofUnmarshaler != nil {
m := structPointer_Interface(base, st).(Message)
// First return value indicates whether tag is a oneof field.
ok, err = prop.oneofUnmarshaler(m, tag, wire, o)
if err == ErrInternalBadWireType {
// Map the error to something more descriptive.
// Do the formatting here to save generated code space.
err = fmt.Errorf("bad wiretype for oneof field in %T", m)
}
if ok {
continue
}
}
err = o.skipAndSave(st, tag, wire, base, prop.unrecField)
continue
}
p := prop.Prop[fieldnum]
if p.dec == nil {
fmt.Fprintf(os.Stderr, "proto: no protobuf decoder for %s.%s\n", st, st.Field(fieldnum).Name)
continue
}
dec := p.dec
if wire != WireStartGroup && wire != p.WireType {
if wire == WireBytes && p.packedDec != nil {
// a packable field
dec = p.packedDec
} else {
err = fmt.Errorf("proto: bad wiretype for field %s.%s: got wiretype %d, want %d", st, st.Field(fieldnum).Name, wire, p.WireType)
continue
}
}
decErr := dec(o, p, base)
if decErr != nil && !state.shouldContinue(decErr, p) {
err = decErr
}
if err == nil && p.Required {
// Successfully decoded a required field.
if tag <= 64 {
// use bitmap for fields 1-64 to catch field reuse.
var mask uint64 = 1 << uint64(tag-1)
if reqFields&mask == 0 {
// new required field
reqFields |= mask
required--
}
} else {
// This is imprecise. It can be fooled by a required field
// with a tag > 64 that is encoded twice; that's very rare.
// A fully correct implementation would require allocating
// a data structure, which we would like to avoid.
required--
}
}
}
if err == nil {
if is_group {
return io.ErrUnexpectedEOF
}
if state.err != nil {
return state.err
}
if required > 0 {
// Not enough information to determine the exact field. If we use extra
// CPU, we could determine the field only if the missing required field
// has a tag <= 64 and we check reqFields.
return &RequiredNotSetError{"{Unknown}"}
}
}
return err
}
// Individual type decoders
// For each,
// u is the decoded value,
// v is a pointer to the field (pointer) in the struct
// Sizes of the pools to allocate inside the Buffer.
// The goal is modest amortization and allocation
// on at least 16-byte boundaries.
const (
boolPoolSize = 16
uint32PoolSize = 8
uint64PoolSize = 4
)
// Decode a bool.
func (o *Buffer) dec_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
if len(o.bools) == 0 {
o.bools = make([]bool, boolPoolSize)
}
o.bools[0] = u != 0
*structPointer_Bool(base, p.field) = &o.bools[0]
o.bools = o.bools[1:]
return nil
}
func (o *Buffer) dec_proto3_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
*structPointer_BoolVal(base, p.field) = u != 0
return nil
}
// Decode an int32.
func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32_Set(structPointer_Word32(base, p.field), o, uint32(u))
return nil
}
func (o *Buffer) dec_proto3_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32Val_Set(structPointer_Word32Val(base, p.field), uint32(u))
return nil
}
// Decode an int64.
func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64_Set(structPointer_Word64(base, p.field), o, u)
return nil
}
func (o *Buffer) dec_proto3_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64Val_Set(structPointer_Word64Val(base, p.field), o, u)
return nil
}
// Decode a string.
func (o *Buffer) dec_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_String(base, p.field) = &s
return nil
}
func (o *Buffer) dec_proto3_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_StringVal(base, p.field) = s
return nil
}
// Decode a slice of bytes ([]byte).
func (o *Buffer) dec_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
*structPointer_Bytes(base, p.field) = b
return nil
}
// Decode a slice of bools ([]bool).
func (o *Buffer) dec_slice_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
v := structPointer_BoolSlice(base, p.field)
*v = append(*v, u != 0)
return nil
}
// Decode a slice of bools ([]bool) in packed format.
func (o *Buffer) dec_slice_packed_bool(p *Properties, base structPointer) error {
v := structPointer_BoolSlice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded bools
fin := o.index + nb
if fin < o.index {
return errOverflow
}
y := *v
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
y = append(y, u != 0)
}
*v = y
return nil
}
// Decode a slice of int32s ([]int32).
func (o *Buffer) dec_slice_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word32Slice(base, p.field).Append(uint32(u))
return nil
}
// Decode a slice of int32s ([]int32) in packed format.
func (o *Buffer) dec_slice_packed_int32(p *Properties, base structPointer) error {
v := structPointer_Word32Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int32s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(uint32(u))
}
return nil
}
// Decode a slice of int64s ([]int64).
func (o *Buffer) dec_slice_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word64Slice(base, p.field).Append(u)
return nil
}
// Decode a slice of int64s ([]int64) in packed format.
func (o *Buffer) dec_slice_packed_int64(p *Properties, base structPointer) error {
v := structPointer_Word64Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int64s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(u)
}
return nil
}
// Decode a slice of strings ([]string).
func (o *Buffer) dec_slice_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
v := structPointer_StringSlice(base, p.field)
*v = append(*v, s)
return nil
}
// Decode a slice of slice of bytes ([][]byte).
func (o *Buffer) dec_slice_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
v := structPointer_BytesSlice(base, p.field)
*v = append(*v, b)
return nil
}
// Decode a map field.
func (o *Buffer) dec_new_map(p *Properties, base structPointer) error {
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
oi := o.index // index at the end of this map entry
o.index -= len(raw) // move buffer back to start of map entry
mptr := structPointer_NewAt(base, p.field, p.mtype) // *map[K]V
if mptr.Elem().IsNil() {
mptr.Elem().Set(reflect.MakeMap(mptr.Type().Elem()))
}
v := mptr.Elem() // map[K]V
// Prepare addressable doubly-indirect placeholders for the key and value types.
// See enc_new_map for why.
keyptr := reflect.New(reflect.PtrTo(p.mtype.Key())).Elem() // addressable *K
keybase := toStructPointer(keyptr.Addr()) // **K
var valbase structPointer
var valptr reflect.Value
switch p.mtype.Elem().Kind() {
case reflect.Slice:
// []byte
var dummy []byte
valptr = reflect.ValueOf(&dummy) // *[]byte
valbase = toStructPointer(valptr) // *[]byte
case reflect.Ptr:
// message; valptr is **Msg; need to allocate the intermediate pointer
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valptr.Set(reflect.New(valptr.Type().Elem()))
valbase = toStructPointer(valptr)
default:
// everything else
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valbase = toStructPointer(valptr.Addr()) // **V
}
// Decode.
// This parses a restricted wire format, namely the encoding of a message
// with two fields. See enc_new_map for the format.
for o.index < oi {
// tagcode for key and value properties are always a single byte
// because they have tags 1 and 2.
tagcode := o.buf[o.index]
o.index++
switch tagcode {
case p.mkeyprop.tagcode[0]:
if err := p.mkeyprop.dec(o, p.mkeyprop, keybase); err != nil {
return err
}
case p.mvalprop.tagcode[0]:
if err := p.mvalprop.dec(o, p.mvalprop, valbase); err != nil {
return err
}
default:
// TODO: Should we silently skip this instead?
return fmt.Errorf("proto: bad map data tag %d", raw[0])
}
}
keyelem, valelem := keyptr.Elem(), valptr.Elem()
if !keyelem.IsValid() {
keyelem = reflect.Zero(p.mtype.Key())
}
if !valelem.IsValid() {
valelem = reflect.Zero(p.mtype.Elem())
}
v.SetMapIndex(keyelem, valelem)
return nil
}
// Decode a group.
func (o *Buffer) dec_struct_group(p *Properties, base structPointer) error {
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
return o.unmarshalType(p.stype, p.sprop, true, bas)
}
// Decode an embedded message.
func (o *Buffer) dec_struct_message(p *Properties, base structPointer) (err error) {
raw, e := o.DecodeRawBytes(false)
if e != nil {
return e
}
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := structPointer_Interface(bas, p.stype)
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, false, bas)
o.buf = obuf
o.index = oi
return err
}
// Decode a slice of embedded messages.
func (o *Buffer) dec_slice_struct_message(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, false, base)
}
// Decode a slice of embedded groups.
func (o *Buffer) dec_slice_struct_group(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, true, base)
}
// Decode a slice of structs ([]*struct).
func (o *Buffer) dec_slice_struct(p *Properties, is_group bool, base structPointer) error {
v := reflect.New(p.stype)
bas := toStructPointer(v)
structPointer_StructPointerSlice(base, p.field).Append(bas)
if is_group {
err := o.unmarshalType(p.stype, p.sprop, is_group, bas)
return err
}
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := v.Interface()
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, is_group, bas)
o.buf = obuf
o.index = oi
return err
}

View File

@@ -1,350 +0,0 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2017 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"fmt"
"reflect"
"strings"
"sync"
"sync/atomic"
)
type generatedDiscarder interface {
XXX_DiscardUnknown()
}
// DiscardUnknown recursively discards all unknown fields from this message
// and all embedded messages.
//
// When unmarshaling a message with unrecognized fields, the tags and values
// of such fields are preserved in the Message. This allows a later call to
// marshal to be able to produce a message that continues to have those
// unrecognized fields. To avoid this, DiscardUnknown is used to
// explicitly clear the unknown fields after unmarshaling.
//
// For proto2 messages, the unknown fields of message extensions are only
// discarded from messages that have been accessed via GetExtension.
func DiscardUnknown(m Message) {
if m, ok := m.(generatedDiscarder); ok {
m.XXX_DiscardUnknown()
return
}
// TODO: Dynamically populate a InternalMessageInfo for legacy messages,
// but the master branch has no implementation for InternalMessageInfo,
// so it would be more work to replicate that approach.
discardLegacy(m)
}
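
A sketch of the intended call pattern for DiscardUnknown, again with a hypothetical generated message type pb.Example and the github.com/golang/protobuf/proto import:

func decodeAndStrip(data []byte) (*pb.Example, error) {
	msg := &pb.Example{}
	if err := proto.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	// Drop any unrecognized fields so a later re-marshal will not carry them forward.
	proto.DiscardUnknown(msg)
	return msg, nil
}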
// DiscardUnknown recursively discards all unknown fields.
func (a *InternalMessageInfo) DiscardUnknown(m Message) {
di := atomicLoadDiscardInfo(&a.discard)
if di == nil {
di = getDiscardInfo(reflect.TypeOf(m).Elem())
atomicStoreDiscardInfo(&a.discard, di)
}
di.discard(toPointer(&m))
}
type discardInfo struct {
typ reflect.Type
initialized int32 // 0: only typ is valid, 1: everything is valid
lock sync.Mutex
fields []discardFieldInfo
unrecognized field
}
type discardFieldInfo struct {
field field // Offset of field, guaranteed to be valid
discard func(src pointer)
}
var (
discardInfoMap = map[reflect.Type]*discardInfo{}
discardInfoLock sync.Mutex
)
func getDiscardInfo(t reflect.Type) *discardInfo {
discardInfoLock.Lock()
defer discardInfoLock.Unlock()
di := discardInfoMap[t]
if di == nil {
di = &discardInfo{typ: t}
discardInfoMap[t] = di
}
return di
}
func (di *discardInfo) discard(src pointer) {
if src.isNil() {
return // Nothing to do.
}
if atomic.LoadInt32(&di.initialized) == 0 {
di.computeDiscardInfo()
}
for _, fi := range di.fields {
sfp := src.offset(fi.field)
fi.discard(sfp)
}
// For proto2 messages, only discard unknown fields in message extensions
// that have been accessed via GetExtension.
if em, err := extendable(src.asPointerTo(di.typ).Interface()); err == nil {
// Ignore lock since DiscardUnknown is not concurrency safe.
emm, _ := em.extensionsRead()
for _, mx := range emm {
if m, ok := mx.value.(Message); ok {
DiscardUnknown(m)
}
}
}
if di.unrecognized.IsValid() {
*src.offset(di.unrecognized).toBytes() = nil
}
}
func (di *discardInfo) computeDiscardInfo() {
di.lock.Lock()
defer di.lock.Unlock()
if di.initialized != 0 {
return
}
t := di.typ
n := t.NumField()
for i := 0; i < n; i++ {
f := t.Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
dfi := discardFieldInfo{field: toField(&f)}
tf := f.Type
// Unwrap tf to get its most basic type.
var isPointer, isSlice bool
if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
isSlice = true
tf = tf.Elem()
}
if tf.Kind() == reflect.Ptr {
isPointer = true
tf = tf.Elem()
}
if isPointer && isSlice && tf.Kind() != reflect.Struct {
panic(fmt.Sprintf("%v.%s cannot be a slice of pointers to primitive types", t, f.Name))
}
switch tf.Kind() {
case reflect.Struct:
switch {
case !isPointer:
panic(fmt.Sprintf("%v.%s cannot be a direct struct value", t, f.Name))
case isSlice: // E.g., []*pb.T
di := getDiscardInfo(tf)
dfi.discard = func(src pointer) {
sps := src.getPointerSlice()
for _, sp := range sps {
if !sp.isNil() {
di.discard(sp)
}
}
}
default: // E.g., *pb.T
di := getDiscardInfo(tf)
dfi.discard = func(src pointer) {
sp := src.getPointer()
if !sp.isNil() {
di.discard(sp)
}
}
}
case reflect.Map:
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%v.%s cannot be a pointer to a map or a slice of map values", t, f.Name))
default: // E.g., map[K]V
if tf.Elem().Kind() == reflect.Ptr { // Proto struct (e.g., *T)
dfi.discard = func(src pointer) {
sm := src.asPointerTo(tf).Elem()
if sm.Len() == 0 {
return
}
for _, key := range sm.MapKeys() {
val := sm.MapIndex(key)
DiscardUnknown(val.Interface().(Message))
}
}
} else {
dfi.discard = func(pointer) {} // Noop
}
}
case reflect.Interface:
// Must be oneof field.
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%v.%s cannot be a pointer to a interface or a slice of interface values", t, f.Name))
default: // E.g., interface{}
// TODO: Make this faster?
dfi.discard = func(src pointer) {
su := src.asPointerTo(tf).Elem()
if !su.IsNil() {
sv := su.Elem().Elem().Field(0)
if sv.Kind() == reflect.Ptr && sv.IsNil() {
return
}
switch sv.Type().Kind() {
case reflect.Ptr: // Proto struct (e.g., *T)
DiscardUnknown(sv.Interface().(Message))
}
}
}
}
default:
continue
}
di.fields = append(di.fields, dfi)
}
di.unrecognized = invalidField
if f, ok := t.FieldByName("XXX_unrecognized"); ok {
if f.Type != reflect.TypeOf([]byte{}) {
panic("expected XXX_unrecognized to be of type []byte")
}
di.unrecognized = toField(&f)
}
atomic.StoreInt32(&di.initialized, 1)
}
func discardLegacy(m Message) {
v := reflect.ValueOf(m)
if v.Kind() != reflect.Ptr || v.IsNil() {
return
}
v = v.Elem()
if v.Kind() != reflect.Struct {
return
}
t := v.Type()
for i := 0; i < v.NumField(); i++ {
f := t.Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
vf := v.Field(i)
tf := f.Type
// Unwrap tf to get its most basic type.
var isPointer, isSlice bool
if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
isSlice = true
tf = tf.Elem()
}
if tf.Kind() == reflect.Ptr {
isPointer = true
tf = tf.Elem()
}
if isPointer && isSlice && tf.Kind() != reflect.Struct {
panic(fmt.Sprintf("%T.%s cannot be a slice of pointers to primitive types", m, f.Name))
}
switch tf.Kind() {
case reflect.Struct:
switch {
case !isPointer:
panic(fmt.Sprintf("%T.%s cannot be a direct struct value", m, f.Name))
case isSlice: // E.g., []*pb.T
for j := 0; j < vf.Len(); j++ {
discardLegacy(vf.Index(j).Interface().(Message))
}
default: // E.g., *pb.T
discardLegacy(vf.Interface().(Message))
}
case reflect.Map:
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%T.%s cannot be a pointer to a map or a slice of map values", m, f.Name))
default: // E.g., map[K]V
tv := vf.Type().Elem()
if tv.Kind() == reflect.Ptr && tv.Implements(protoMessageType) { // Proto struct (e.g., *T)
for _, key := range vf.MapKeys() {
val := vf.MapIndex(key)
discardLegacy(val.Interface().(Message))
}
}
}
case reflect.Interface:
// Must be oneof field.
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%T.%s cannot be a pointer to a interface or a slice of interface values", m, f.Name))
default: // E.g., test_proto.isCommunique_Union interface
if !vf.IsNil() && f.Tag.Get("protobuf_oneof") != "" {
vf = vf.Elem() // E.g., *test_proto.Communique_Msg
if !vf.IsNil() {
vf = vf.Elem() // E.g., test_proto.Communique_Msg
vf = vf.Field(0) // E.g., Proto struct (e.g., *T) or primitive value
if vf.Kind() == reflect.Ptr {
discardLegacy(vf.Interface().(Message))
}
}
}
}
}
}
if vf := v.FieldByName("XXX_unrecognized"); vf.IsValid() {
if vf.Type() != reflect.TypeOf([]byte{}) {
panic("expected XXX_unrecognized to be of type []byte")
}
vf.Set(reflect.ValueOf([]byte(nil)))
}
// For proto2 messages, only discard unknown fields in message extensions
// that have been accessed via GetExtension.
if em, err := extendable(m); err == nil {
// Ignore lock since discardLegacy is not concurrency safe.
emm, _ := em.extensionsRead()
for _, mx := range emm {
if m, ok := mx.value.(Message); ok {
discardLegacy(m)
}
}
}
}

File diff suppressed because it is too large.

View File

@@ -109,6 +109,15 @@ func equalStruct(v1, v2 reflect.Value) bool {
// set/unset mismatch
return false
}
b1, ok := f1.Interface().(raw)
if ok {
b2 := f2.Interface().(raw)
// RawMessage
if !bytes.Equal(b1.Bytes(), b2.Bytes()) {
return false
}
continue
}
f1, f2 = f1.Elem(), f2.Elem()
}
if !equalAny(f1, f2, sprop.Prop[i]) {
@@ -137,7 +146,11 @@ func equalStruct(v1, v2 reflect.Value) bool {
u1 := uf.Bytes()
u2 := v2.FieldByName("XXX_unrecognized").Bytes()
return bytes.Equal(u1, u2)
if !bytes.Equal(u1, u2) {
return false
}
return true
}
// v1 and v2 are known to have the same type.
@@ -248,15 +261,6 @@ func equalExtMap(base reflect.Type, em1, em2 map[int32]Extension) bool {
m1, m2 := e1.value, e2.value
if m1 == nil && m2 == nil {
// Both have only encoded form.
if bytes.Equal(e1.enc, e2.enc) {
continue
}
// The bytes are different, but the extensions might still be
// equal. We need to decode them to compare.
}
if m1 != nil && m2 != nil {
// Both are unencoded.
if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) {
@@ -272,12 +276,8 @@ func equalExtMap(base reflect.Type, em1, em2 map[int32]Extension) bool {
desc = m[extNum]
}
if desc == nil {
// If both have only encoded form and the bytes are the same,
// it is handled above. We get here when the bytes are different.
// We don't know how to decode it, so just compare them as byte
// slices.
log.Printf("proto: don't know how to compare extension %d of %v", extNum, base)
return false
continue
}
var err error
if m1 == nil {

View File

@@ -38,7 +38,6 @@ package proto
import (
"errors"
"fmt"
"io"
"reflect"
"strconv"
"sync"
@@ -92,29 +91,14 @@ func (n notLocker) Unlock() {}
// extendable returns the extendableProto interface for the given generated proto message.
// If the proto message has the old extension format, it returns a wrapper that implements
// the extendableProto interface.
func extendable(p interface{}) (extendableProto, error) {
switch p := p.(type) {
case extendableProto:
if isNilPtr(p) {
return nil, fmt.Errorf("proto: nil %T is not extendable", p)
}
return p, nil
case extendableProtoV1:
if isNilPtr(p) {
return nil, fmt.Errorf("proto: nil %T is not extendable", p)
}
return extensionAdapter{p}, nil
func extendable(p interface{}) (extendableProto, bool) {
if ep, ok := p.(extendableProto); ok {
return ep, ok
}
// Don't allocate a specific error containing %T:
// this is the hot path for Clone and MarshalText.
return nil, errNotExtendable
}
var errNotExtendable = errors.New("proto: not an extendable proto.Message")
func isNilPtr(x interface{}) bool {
v := reflect.ValueOf(x)
return v.Kind() == reflect.Ptr && v.IsNil()
if ep, ok := p.(extendableProtoV1); ok {
return extensionAdapter{ep}, ok
}
return nil, false
}
// XXX_InternalExtensions is an internal representation of proto extensions.
@@ -159,6 +143,9 @@ func (e *XXX_InternalExtensions) extensionsRead() (map[int32]Extension, sync.Loc
return e.p.extensionMap, &e.p.mu
}
var extendableProtoType = reflect.TypeOf((*extendableProto)(nil)).Elem()
var extendableProtoV1Type = reflect.TypeOf((*extendableProtoV1)(nil)).Elem()
// ExtensionDesc represents an extension specification.
// Used in generated code from the protocol compiler.
type ExtensionDesc struct {
@@ -192,8 +179,8 @@ type Extension struct {
// SetRawExtension is for testing only.
func SetRawExtension(base Message, id int32, b []byte) {
epb, err := extendable(base)
if err != nil {
epb, ok := extendable(base)
if !ok {
return
}
extmap := epb.extensionsWrite()
@@ -218,7 +205,7 @@ func checkExtensionTypes(pb extendableProto, extension *ExtensionDesc) error {
pbi = ea.extendableProtoV1
}
if a, b := reflect.TypeOf(pbi), reflect.TypeOf(extension.ExtendedType); a != b {
return fmt.Errorf("proto: bad extended type; %v does not extend %v", b, a)
return errors.New("proto: bad extended type; " + b.String() + " does not extend " + a.String())
}
// Check the range.
if !isExtensionField(pb, extension.Field) {
@@ -263,11 +250,85 @@ func extensionProperties(ed *ExtensionDesc) *Properties {
return prop
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensions(e *XXX_InternalExtensions) error {
m, mu := e.extensionsRead()
if m == nil {
return nil // fast path
}
mu.Lock()
defer mu.Unlock()
return encodeExtensionsMap(m)
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensionsMap(m map[int32]Extension) error {
for k, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
m[k] = e
}
return nil
}
func extensionsSize(e *XXX_InternalExtensions) (n int) {
m, mu := e.extensionsRead()
if m == nil {
return 0
}
mu.Lock()
defer mu.Unlock()
return extensionsMapSize(m)
}
func extensionsMapSize(m map[int32]Extension) (n int) {
for _, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
n += len(e.enc)
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
n += props.size(props, toStructPointer(x))
}
return
}
// HasExtension returns whether the given extension is present in pb.
func HasExtension(pb Message, extension *ExtensionDesc) bool {
// TODO: Check types, field numbers, etc.?
epb, err := extendable(pb)
if err != nil {
epb, ok := extendable(pb)
if !ok {
return false
}
extmap, mu := epb.extensionsRead()
@@ -275,15 +336,15 @@ func HasExtension(pb Message, extension *ExtensionDesc) bool {
return false
}
mu.Lock()
_, ok := extmap[extension.Field]
_, ok = extmap[extension.Field]
mu.Unlock()
return ok
}
// ClearExtension removes the given extension from pb.
func ClearExtension(pb Message, extension *ExtensionDesc) {
epb, err := extendable(pb)
if err != nil {
epb, ok := extendable(pb)
if !ok {
return
}
// TODO: Check types, field numbers, etc.?
@@ -291,26 +352,16 @@ func ClearExtension(pb Message, extension *ExtensionDesc) {
delete(extmap, extension.Field)
}
// GetExtension retrieves a proto2 extended field from pb.
//
// If the descriptor is type complete (i.e., ExtensionDesc.ExtensionType is non-nil),
// then GetExtension parses the encoded field and returns a Go value of the specified type.
// If the field is not present, then the default value is returned (if one is specified),
// otherwise ErrMissingExtension is reported.
//
// If the descriptor is not type complete (i.e., ExtensionDesc.ExtensionType is nil),
// then GetExtension returns the raw encoded bytes of the field extension.
// GetExtension parses and returns the given extension of pb.
// If the extension is not present and has no default value it returns ErrMissingExtension.
func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) {
epb, err := extendable(pb)
if err != nil {
return nil, err
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
if extension.ExtendedType != nil {
// can only check type if this is a complete descriptor
if err := checkExtensionTypes(epb, extension); err != nil {
return nil, err
}
if err := checkExtensionTypes(epb, extension); err != nil {
return nil, err
}
emap, mu := epb.extensionsRead()
@@ -337,11 +388,6 @@ func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) {
return e.value, nil
}
if extension.ExtensionType == nil {
// incomplete descriptor
return e.enc, nil
}
v, err := decodeExtension(e.enc, extension)
if err != nil {
return nil, err
@@ -359,11 +405,6 @@ func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) {
// defaultExtensionValue returns the default value for extension.
// If no default for an extension is defined ErrMissingExtension is returned.
func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) {
if extension.ExtensionType == nil {
// incomplete descriptor, so no default
return nil, ErrMissingExtension
}
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
@@ -398,28 +439,31 @@ func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) {
// decodeExtension decodes an extension encoded in b.
func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) {
o := NewBuffer(b)
t := reflect.TypeOf(extension.ExtensionType)
unmarshal := typeUnmarshaler(t, extension.Tag)
props := extensionProperties(extension)
// t is a pointer to a struct, pointer to basic type or a slice.
// Allocate space to store the pointer/slice.
// Allocate a "field" to store the pointer/slice itself; the
// pointer/slice will be stored here. We pass
// the address of this field to props.dec.
// This passes a zero field and a *t and lets props.dec
// interpret it as a *struct{ x t }.
value := reflect.New(t).Elem()
var err error
for {
x, n := decodeVarint(b)
if n == 0 {
return nil, io.ErrUnexpectedEOF
}
b = b[n:]
wire := int(x) & 7
b, err = unmarshal(b, valToPointer(value.Addr()), wire)
if err != nil {
// Discard wire type and field number varint. It isn't needed.
if _, err := o.DecodeVarint(); err != nil {
return nil, err
}
if len(b) == 0 {
if err := props.dec(o, props, toStructPointer(value.Addr())); err != nil {
return nil, err
}
if o.index >= len(o.buf) {
break
}
}
@@ -429,9 +473,9 @@ func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) {
// GetExtensions returns a slice of the extensions present in pb that are also listed in es.
// The returned slice has the same length as es; missing extensions will appear as nil elements.
func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, err error) {
epb, err := extendable(pb)
if err != nil {
return nil, err
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
extensions = make([]interface{}, len(es))
for i, e := range es {
@@ -450,9 +494,9 @@ func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, e
// For non-registered extensions, ExtensionDescs returns an incomplete descriptor containing
// just the Field field, which defines the extension's field number.
func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) {
epb, err := extendable(pb)
if err != nil {
return nil, err
epb, ok := extendable(pb)
if !ok {
return nil, fmt.Errorf("proto: %T is not an extendable proto.Message", pb)
}
registeredExtensions := RegisteredExtensions(pb)
@@ -479,9 +523,9 @@ func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) {
// SetExtension sets the specified extension of pb to the specified value.
func SetExtension(pb Message, extension *ExtensionDesc, value interface{}) error {
epb, err := extendable(pb)
if err != nil {
return err
epb, ok := extendable(pb)
if !ok {
return errors.New("proto: not an extendable proto")
}
if err := checkExtensionTypes(epb, extension); err != nil {
return err
@@ -506,8 +550,8 @@ func SetExtension(pb Message, extension *ExtensionDesc, value interface{}) error
// ClearAllExtensions clears all extensions from pb.
func ClearAllExtensions(pb Message) {
epb, err := extendable(pb)
if err != nil {
epb, ok := extendable(pb)
if !ok {
return
}
m := epb.extensionsWrite()
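
Taken together, the extension accessors in this file are used roughly as follows; pb.Example and pb.E_MyExtension stand in for protoc-generated identifiers and are not part of this diff (the value passed to SetExtension must match the descriptor's ExtensionType):

func extensionRoundTrip(msg *pb.Example) (interface{}, error) {
	// Store an extension value on the message.
	if err := proto.SetExtension(msg, pb.E_MyExtension, proto.Int32(7)); err != nil {
		return nil, err
	}

	if !proto.HasExtension(msg, pb.E_MyExtension) {
		return nil, nil
	}

	// GetExtension decodes the field (if needed) and returns the Go value.
	val, err := proto.GetExtension(msg, pb.E_MyExtension)
	if err != nil {
		return nil, err
	}

	proto.ClearExtension(msg, pb.E_MyExtension)
	return val, nil
}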

View File

@@ -265,7 +265,6 @@ package proto
import (
"encoding/json"
"errors"
"fmt"
"log"
"reflect"
@@ -274,8 +273,6 @@ import (
"sync"
)
var errInvalidUTF8 = errors.New("proto: invalid UTF-8 string")
// Message is implemented by generated protocol buffer messages.
type Message interface {
Reset()
@@ -312,7 +309,16 @@ type Buffer struct {
buf []byte // encode/decode byte stream
index int // read point
deterministic bool
// pools of basic types to amortize allocation.
bools []bool
uint32s []uint32
uint64s []uint64
// extra pools, only used with pointer_reflect.go
int32s []int32
int64s []int64
float32s []float32
float64s []float64
}
// NewBuffer allocates a new Buffer and initializes its internal data to
@@ -337,30 +343,6 @@ func (p *Buffer) SetBuf(s []byte) {
// Bytes returns the contents of the Buffer.
func (p *Buffer) Bytes() []byte { return p.buf }
// SetDeterministic sets whether to use deterministic serialization.
//
// Deterministic serialization guarantees that for a given binary, equal
// messages will always be serialized to the same bytes. This implies:
//
// - Repeated serialization of a message will return the same bytes.
// - Different processes of the same binary (which may be executing on
// different machines) will serialize equal messages to the same bytes.
//
// Note that the deterministic serialization is NOT canonical across
// languages. It is not guaranteed to remain stable over time. It is unstable
// across different builds with schema changes due to unknown fields.
// Users who need canonical serialization (e.g., persistent storage in a
// canonical form, fingerprinting, etc.) should define their own
// canonicalization specification and implement their own serializer rather
// than relying on this API.
//
// If deterministic serialization is requested, map entries will be sorted
// by keys in lexicographical order. This is an implementation detail and
// subject to change.
func (p *Buffer) SetDeterministic(deterministic bool) {
p.deterministic = deterministic
}
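
A sketch of opting into deterministic output through the Buffer API documented above (pb.Example is again a hypothetical generated type):

func marshalDeterministic(msg *pb.Example) ([]byte, error) {
	var buf proto.Buffer
	// Equal messages marshal to identical bytes; map entries are sorted by key.
	buf.SetDeterministic(true)
	if err := buf.Marshal(msg); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}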
/*
* Helper routines for simplifying the creation of optional fields of basic type.
*/
@@ -849,12 +831,22 @@ func fieldDefault(ft reflect.Type, prop *Properties) (sf *scalarField, nestedMes
return sf, false, nil
}
// mapKeys returns a sort.Interface to be used for sorting the map keys.
// Map fields may have key types of non-float scalars, strings and enums.
func mapKeys(vs []reflect.Value) sort.Interface {
s := mapKeySorter{vs: vs}
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.
// Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps.
func mapKeys(vs []reflect.Value) sort.Interface {
s := mapKeySorter{
vs: vs,
// default Less function: textual comparison
less: func(a, b reflect.Value) bool {
return fmt.Sprint(a.Interface()) < fmt.Sprint(b.Interface())
},
}
// Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps;
// numeric keys are sorted numerically.
if len(vs) == 0 {
return s
}
@@ -863,12 +855,6 @@ func mapKeys(vs []reflect.Value) sort.Interface {
s.less = func(a, b reflect.Value) bool { return a.Int() < b.Int() }
case reflect.Uint32, reflect.Uint64:
s.less = func(a, b reflect.Value) bool { return a.Uint() < b.Uint() }
case reflect.Bool:
s.less = func(a, b reflect.Value) bool { return !a.Bool() && b.Bool() } // false < true
case reflect.String:
s.less = func(a, b reflect.Value) bool { return a.String() < b.String() }
default:
panic(fmt.Sprintf("unsupported map key type: %v", vs[0].Kind()))
}
return s
@@ -909,13 +895,3 @@ const ProtoPackageIsVersion2 = true
// ProtoPackageIsVersion1 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the proto package.
const ProtoPackageIsVersion1 = true
// InternalMessageInfo is a type used internally by generated .pb.go files.
// This type is not intended to be used by non-generated code.
// This type is not subject to any compatibility guarantee.
type InternalMessageInfo struct {
marshal *marshalInfo
unmarshal *unmarshalInfo
merge *mergeInfo
discard *discardInfo
}

View File

@@ -42,7 +42,6 @@ import (
"fmt"
"reflect"
"sort"
"sync"
)
// errNoMessageTypeID occurs when a protocol buffer does not have a message type ID.
@@ -95,7 +94,10 @@ func (ms *messageSet) find(pb Message) *_MessageSet_Item {
}
func (ms *messageSet) Has(pb Message) bool {
return ms.find(pb) != nil
if ms.find(pb) != nil {
return true
}
return false
}
func (ms *messageSet) Unmarshal(pb Message) error {
@@ -148,42 +150,46 @@ func skipVarint(buf []byte) []byte {
// MarshalMessageSet encodes the extension map represented by m in the message set wire format.
// It is called by generated Marshal methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSet(exts interface{}) ([]byte, error) {
return marshalMessageSet(exts, false)
}
// marshalMessageSet implements the above function, with an option to turn deterministic encoding on or off during Marshal.
func marshalMessageSet(exts interface{}, deterministic bool) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
var u marshalInfo
siz := u.sizeMessageSet(exts)
b := make([]byte, 0, siz)
return u.appendMessageSet(b, exts, deterministic)
case map[int32]Extension:
// This is an old-style extension map.
// Wrap it in a new-style XXX_InternalExtensions.
ie := XXX_InternalExtensions{
p: &struct {
mu sync.Mutex
extensionMap map[int32]Extension
}{
extensionMap: exts,
},
if err := encodeExtensions(exts); err != nil {
return nil, err
}
var u marshalInfo
siz := u.sizeMessageSet(&ie)
b := make([]byte, 0, siz)
return u.appendMessageSet(b, &ie, deterministic)
m, _ = exts.extensionsRead()
case map[int32]Extension:
if err := encodeExtensionsMap(exts); err != nil {
return nil, err
}
m = exts
default:
return nil, errors.New("proto: not an extension map")
}
// Sort extension IDs to provide a deterministic encoding.
// See also enc_map in encode.go.
ids := make([]int, 0, len(m))
for id := range m {
ids = append(ids, int(id))
}
sort.Ints(ids)
ms := &messageSet{Item: make([]*_MessageSet_Item, 0, len(m))}
for _, id := range ids {
e := m[int32(id)]
// Remove the wire type and field number varint, as well as the length varint.
msg := skipVarint(skipVarint(e.enc))
ms.Item = append(ms.Item, &_MessageSet_Item{
TypeId: Int32(int32(id)),
Message: msg,
})
}
return Marshal(ms)
}
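Each message-set item above is built by stripping two leading varints (the field key and the length) from the pre-encoded extension. A hedged sketch of that skipVarint-style step, with illustrative byte values:

package main

import "fmt"

// dropVarint removes one varint from the front of buf: it consumes bytes
// while the continuation (high) bit is set, plus the final byte.
func dropVarint(buf []byte) []byte {
	i := 0
	for buf[i]&0x80 != 0 {
		i++
	}
	return buf[i+1:]
}

func main() {
	// key 0x82 0x01 (field 16, wire type 2), length 0x03, then 3 payload bytes
	enc := []byte{0x82, 0x01, 0x03, 'a', 'b', 'c'}
	msg := dropVarint(dropVarint(enc)) // strip the key varint, then the length varint
	fmt.Printf("% x\n", msg)           // 61 62 63
}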
// UnmarshalMessageSet decodes the extension map encoded in buf in the message set wire format.
// It is called by Unmarshal methods on protocol buffer messages with the message_set_wire_format option.
// It is called by generated Unmarshal methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSet(buf []byte, exts interface{}) error {
var m map[int32]Extension
switch exts := exts.(type) {
@@ -229,15 +235,7 @@ func MarshalMessageSetJSON(exts interface{}) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
var mu sync.Locker
m, mu = exts.extensionsRead()
if m != nil {
// Keep the extensions map locked until we're done marshaling to prevent
// races between marshaling and unmarshaling the lazily-{en,de}coded
// values.
mu.Lock()
defer mu.Unlock()
}
m, _ = exts.extensionsRead()
case map[int32]Extension:
m = exts
default:
@@ -255,16 +253,15 @@ func MarshalMessageSetJSON(exts interface{}) ([]byte, error) {
for i, id := range ids {
ext := m[id]
if i > 0 {
b.WriteByte(',')
}
msd, ok := messageSetMap[id]
if !ok {
// Unknown type; we can't render it, so skip it.
continue
}
if i > 0 && b.Len() > 1 {
b.WriteByte(',')
}
fmt.Fprintf(&b, `"[%s]":`, msd.name)
x := ext.value


@@ -29,7 +29,7 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build purego appengine js
// +build appengine js
// This file contains an implementation of proto field accesses using package reflect.
// It is slower than the code in pointer_unsafe.go but it avoids package unsafe and can
@@ -38,13 +38,32 @@
package proto
import (
"math"
"reflect"
"sync"
)
const unsafeAllowed = false
// A structPointer is a pointer to a struct.
type structPointer struct {
v reflect.Value
}
// A field identifies a field in a struct, accessible from a pointer.
// toStructPointer returns a structPointer equivalent to the given reflect value.
// The reflect value must itself be a pointer to a struct.
func toStructPointer(v reflect.Value) structPointer {
return structPointer{v}
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p.v.IsNil()
}
// Interface returns the struct pointer as an interface value.
func structPointer_Interface(p structPointer, _ reflect.Type) interface{} {
return p.v.Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by the sequence of field indices
// passed to reflect's FieldByIndex.
type field []int
@@ -57,301 +76,409 @@ func toField(f *reflect.StructField) field {
// invalidField is an invalid field identifier.
var invalidField = field(nil)
// zeroField is a noop when calling pointer.offset.
var zeroField = field([]int{})
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool { return f != nil }
// The pointer type is for the table-driven decoder.
// The implementation here uses a reflect.Value of pointer type to
// create a generic pointer. In pointer_unsafe.go we use unsafe
// instead of reflect to implement the same (but faster) interface.
type pointer struct {
// field returns the given field in the struct as a reflect value.
func structPointer_field(p structPointer, f field) reflect.Value {
// Special case: an extension map entry with a value of type T
// passes a *T to the struct-handling code with a zero field,
// expecting that it will be treated as equivalent to *struct{ X T },
// which has the same memory layout. We have to handle that case
// specially, because reflect will panic if we call FieldByIndex on a
// non-struct.
if f == nil {
return p.v.Elem()
}
return p.v.Elem().FieldByIndex(f)
}
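A small illustration of the FieldByIndex plus Addr().Interface() pattern the reflect-based accessors above rely on; the Outer and Inner types are invented for the example.

package main

import (
	"fmt"
	"reflect"
)

type Inner struct{ Name string }

type Outer struct {
	ID    int32
	Inner Inner
}

func main() {
	o := &Outer{ID: 7, Inner: Inner{Name: "x"}}
	v := reflect.ValueOf(o) // pointer to struct, like a structPointer

	// A field is identified by its index path, like the `field` type above.
	nameField := []int{1, 0} // Outer.Inner.Name
	p := v.Elem().FieldByIndex(nameField).Addr().Interface().(*string)
	*p = "updated"
	fmt.Println(o.Inner.Name) // updated
}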
// ifield returns the given field in the struct as an interface value.
func structPointer_ifield(p structPointer, f field) interface{} {
return structPointer_field(p, f).Addr().Interface()
}
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return structPointer_ifield(p, f).(*[]byte)
}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return structPointer_ifield(p, f).(*[][]byte)
}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return structPointer_ifield(p, f).(**bool)
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return structPointer_ifield(p, f).(*bool)
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return structPointer_ifield(p, f).(*[]bool)
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return structPointer_ifield(p, f).(**string)
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return structPointer_ifield(p, f).(*string)
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return structPointer_ifield(p, f).(*[]string)
}
// Extensions returns the address of an extension map field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return structPointer_ifield(p, f).(*XXX_InternalExtensions)
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return structPointer_ifield(p, f).(*map[int32]Extension)
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return structPointer_field(p, f).Addr()
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
structPointer_field(p, f).Set(q.v)
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return structPointer{structPointer_field(p, f)}
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) structPointerSlice {
return structPointerSlice{structPointer_field(p, f)}
}
// A structPointerSlice represents the address of a slice of pointers to structs
// (themselves messages or groups). That is, v.Type() is *[]*struct{...}.
type structPointerSlice struct {
v reflect.Value
}
// toPointer converts an interface of pointer type to a pointer
// that points to the same target.
func toPointer(i *Message) pointer {
return pointer{v: reflect.ValueOf(*i)}
func (p structPointerSlice) Len() int { return p.v.Len() }
func (p structPointerSlice) Index(i int) structPointer { return structPointer{p.v.Index(i)} }
func (p structPointerSlice) Append(q structPointer) {
p.v.Set(reflect.Append(p.v, q.v))
}
// toAddrPointer converts an interface to a pointer that points to
// the interface data.
func toAddrPointer(i *interface{}, isptr bool) pointer {
v := reflect.ValueOf(*i)
u := reflect.New(v.Type())
u.Elem().Set(v)
return pointer{v: u}
var (
int32Type = reflect.TypeOf(int32(0))
uint32Type = reflect.TypeOf(uint32(0))
float32Type = reflect.TypeOf(float32(0))
int64Type = reflect.TypeOf(int64(0))
uint64Type = reflect.TypeOf(uint64(0))
float64Type = reflect.TypeOf(float64(0))
)
// A word32 represents a field of type *int32, *uint32, *float32, or *enum.
// That is, v.Type() is *int32, *uint32, *float32, or *enum and v is assignable.
type word32 struct {
v reflect.Value
}
// valToPointer converts v to a pointer. v must be of pointer type.
func valToPointer(v reflect.Value) pointer {
return pointer{v: v}
}
// offset converts from a pointer to a structure to a pointer to
// one of its fields.
func (p pointer) offset(f field) pointer {
return pointer{v: p.v.Elem().FieldByIndex(f).Addr()}
}
func (p pointer) isNil() bool {
// IsNil reports whether p is nil.
func word32_IsNil(p word32) bool {
return p.v.IsNil()
}
// grow updates the slice s in place to make it one element longer.
// s must be addressable.
// Returns the (addressable) new element.
func grow(s reflect.Value) reflect.Value {
n, m := s.Len(), s.Cap()
// Set sets p to point at a newly allocated word with bits set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
t := p.v.Type().Elem()
switch t {
case int32Type:
if len(o.int32s) == 0 {
o.int32s = make([]int32, uint32PoolSize)
}
o.int32s[0] = int32(x)
p.v.Set(reflect.ValueOf(&o.int32s[0]))
o.int32s = o.int32s[1:]
return
case uint32Type:
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
p.v.Set(reflect.ValueOf(&o.uint32s[0]))
o.uint32s = o.uint32s[1:]
return
case float32Type:
if len(o.float32s) == 0 {
o.float32s = make([]float32, uint32PoolSize)
}
o.float32s[0] = math.Float32frombits(x)
p.v.Set(reflect.ValueOf(&o.float32s[0]))
o.float32s = o.float32s[1:]
return
}
// must be enum
p.v.Set(reflect.New(t))
p.v.Elem().SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32_Get(p word32) uint32 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32 returns a reference to a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32{structPointer_field(p, f)}
}
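Why float32 fields can be handled as raw 32-bit words, as word32_Get and word32_Set do above: the bit pattern round-trips through math.Float32bits and math.Float32frombits. A tiny check:

package main

import (
	"fmt"
	"math"
)

func main() {
	f := float32(3.25)
	bits := math.Float32bits(f)          // store the field as a uint32 word
	back := math.Float32frombits(bits)   // recover the float from the word
	fmt.Printf("%#08x %v\n", bits, back) // 0x40500000 3.25
}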
// A word32Val represents a field of type int32, uint32, float32, or enum.
// That is, v.Type() is int32, uint32, float32, or enum and v is assignable.
type word32Val struct {
v reflect.Value
}
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
switch p.v.Type() {
case int32Type:
p.v.SetInt(int64(x))
return
case uint32Type:
p.v.SetUint(uint64(x))
return
case float32Type:
p.v.SetFloat(float64(math.Float32frombits(x)))
return
}
// must be enum
p.v.SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32Val_Get(p word32Val) uint32 {
elem := p.v
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Val returns a reference to a int32, uint32, float32, or enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val{structPointer_field(p, f)}
}
// A word32Slice is a slice of 32-bit values.
// That is, v.Type() is []int32, []uint32, []float32, or []enum.
type word32Slice struct {
v reflect.Value
}
func (p word32Slice) Append(x uint32) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
s.SetLen(n + 1)
p.v.SetLen(n + 1)
} else {
s.Set(reflect.Append(s, reflect.Zero(s.Type().Elem())))
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
return s.Index(n)
}
func (p pointer) toInt64() *int64 {
return p.v.Interface().(*int64)
}
func (p pointer) toInt64Ptr() **int64 {
return p.v.Interface().(**int64)
}
func (p pointer) toInt64Slice() *[]int64 {
return p.v.Interface().(*[]int64)
}
var int32ptr = reflect.TypeOf((*int32)(nil))
func (p pointer) toInt32() *int32 {
return p.v.Convert(int32ptr).Interface().(*int32)
}
// The toInt32Ptr/Slice methods don't work because of enums.
// Instead, we must use set/get methods for the int32ptr/slice case.
/*
func (p pointer) toInt32Ptr() **int32 {
return p.v.Interface().(**int32)
}
func (p pointer) toInt32Slice() *[]int32 {
return p.v.Interface().(*[]int32)
}
*/
func (p pointer) getInt32Ptr() *int32 {
if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) {
// raw int32 type
return p.v.Elem().Interface().(*int32)
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int32:
elem.SetInt(int64(int32(x)))
case reflect.Uint32:
elem.SetUint(uint64(x))
case reflect.Float32:
elem.SetFloat(float64(math.Float32frombits(x)))
}
// an enum
return p.v.Elem().Convert(int32PtrType).Interface().(*int32)
}
func (p pointer) setInt32Ptr(v int32) {
// Allocate value in a *int32. Possibly convert that to a *enum.
// Then assign it to a **int32 or **enum.
// Note: we can convert *int32 to *enum, but we can't convert
// **int32 to **enum!
p.v.Elem().Set(reflect.ValueOf(&v).Convert(p.v.Type().Elem()))
}
// getInt32Slice copies []int32 from p as a new slice.
// This behavior differs from the implementation in pointer_unsafe.go.
func (p pointer) getInt32Slice() []int32 {
if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) {
// raw int32 type
return p.v.Elem().Interface().([]int32)
}
// an enum
// Allocate a []int32, then assign []enum's values into it.
// Note: we can't convert []enum to []int32.
slice := p.v.Elem()
s := make([]int32, slice.Len())
for i := 0; i < slice.Len(); i++ {
s[i] = int32(slice.Index(i).Int())
}
return s
func (p word32Slice) Len() int {
return p.v.Len()
}
// setInt32Slice copies []int32 into p as a new slice.
// This behavior differs from the implementation in pointer_unsafe.go.
func (p pointer) setInt32Slice(v []int32) {
if p.v.Type().Elem().Elem() == reflect.TypeOf(int32(0)) {
// raw int32 type
p.v.Elem().Set(reflect.ValueOf(v))
func (p word32Slice) Index(i int) uint32 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Slice returns a reference to a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) word32Slice {
return word32Slice{structPointer_field(p, f)}
}
// word64 is like word32 but for 64-bit values.
type word64 struct {
v reflect.Value
}
func word64_Set(p word64, o *Buffer, x uint64) {
t := p.v.Type().Elem()
switch t {
case int64Type:
if len(o.int64s) == 0 {
o.int64s = make([]int64, uint64PoolSize)
}
o.int64s[0] = int64(x)
p.v.Set(reflect.ValueOf(&o.int64s[0]))
o.int64s = o.int64s[1:]
return
case uint64Type:
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
p.v.Set(reflect.ValueOf(&o.uint64s[0]))
o.uint64s = o.uint64s[1:]
return
case float64Type:
if len(o.float64s) == 0 {
o.float64s = make([]float64, uint64PoolSize)
}
o.float64s[0] = math.Float64frombits(x)
p.v.Set(reflect.ValueOf(&o.float64s[0]))
o.float64s = o.float64s[1:]
return
}
// an enum
// Allocate a []enum, then assign []int32's values into it.
// Note: we can't convert []enum to []int32.
slice := reflect.MakeSlice(p.v.Type().Elem(), len(v), cap(v))
for i, x := range v {
slice.Index(i).SetInt(int64(x))
}
p.v.Elem().Set(slice)
}
func (p pointer) appendInt32Slice(v int32) {
grow(p.v.Elem()).SetInt(int64(v))
panic("unreachable")
}
func (p pointer) toUint64() *uint64 {
return p.v.Interface().(*uint64)
}
func (p pointer) toUint64Ptr() **uint64 {
return p.v.Interface().(**uint64)
}
func (p pointer) toUint64Slice() *[]uint64 {
return p.v.Interface().(*[]uint64)
}
func (p pointer) toUint32() *uint32 {
return p.v.Interface().(*uint32)
}
func (p pointer) toUint32Ptr() **uint32 {
return p.v.Interface().(**uint32)
}
func (p pointer) toUint32Slice() *[]uint32 {
return p.v.Interface().(*[]uint32)
}
func (p pointer) toBool() *bool {
return p.v.Interface().(*bool)
}
func (p pointer) toBoolPtr() **bool {
return p.v.Interface().(**bool)
}
func (p pointer) toBoolSlice() *[]bool {
return p.v.Interface().(*[]bool)
}
func (p pointer) toFloat64() *float64 {
return p.v.Interface().(*float64)
}
func (p pointer) toFloat64Ptr() **float64 {
return p.v.Interface().(**float64)
}
func (p pointer) toFloat64Slice() *[]float64 {
return p.v.Interface().(*[]float64)
}
func (p pointer) toFloat32() *float32 {
return p.v.Interface().(*float32)
}
func (p pointer) toFloat32Ptr() **float32 {
return p.v.Interface().(**float32)
}
func (p pointer) toFloat32Slice() *[]float32 {
return p.v.Interface().(*[]float32)
}
func (p pointer) toString() *string {
return p.v.Interface().(*string)
}
func (p pointer) toStringPtr() **string {
return p.v.Interface().(**string)
}
func (p pointer) toStringSlice() *[]string {
return p.v.Interface().(*[]string)
}
func (p pointer) toBytes() *[]byte {
return p.v.Interface().(*[]byte)
}
func (p pointer) toBytesSlice() *[][]byte {
return p.v.Interface().(*[][]byte)
}
func (p pointer) toExtensions() *XXX_InternalExtensions {
return p.v.Interface().(*XXX_InternalExtensions)
}
func (p pointer) toOldExtensions() *map[int32]Extension {
return p.v.Interface().(*map[int32]Extension)
}
func (p pointer) getPointer() pointer {
return pointer{v: p.v.Elem()}
}
func (p pointer) setPointer(q pointer) {
p.v.Elem().Set(q.v)
}
func (p pointer) appendPointer(q pointer) {
grow(p.v.Elem()).Set(q.v)
func word64_IsNil(p word64) bool {
return p.v.IsNil()
}
// getPointerSlice copies []*T from p as a new []pointer.
// This behavior differs from the implementation in pointer_unsafe.go.
func (p pointer) getPointerSlice() []pointer {
if p.v.IsNil() {
return nil
func word64_Get(p word64) uint64 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
n := p.v.Elem().Len()
s := make([]pointer, n)
for i := 0; i < n; i++ {
s[i] = pointer{v: p.v.Elem().Index(i)}
}
return s
panic("unreachable")
}
// setPointerSlice copies []pointer into p as a new []*T.
// This behavior differs from the implementation in pointer_unsafe.go.
func (p pointer) setPointerSlice(v []pointer) {
if v == nil {
p.v.Elem().Set(reflect.New(p.v.Elem().Type()).Elem())
func structPointer_Word64(p structPointer, f field) word64 {
return word64{structPointer_field(p, f)}
}
// word64Val is like word32Val but for 64-bit values.
type word64Val struct {
v reflect.Value
}
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
switch p.v.Type() {
case int64Type:
p.v.SetInt(int64(x))
return
case uint64Type:
p.v.SetUint(x)
return
case float64Type:
p.v.SetFloat(math.Float64frombits(x))
return
}
s := reflect.MakeSlice(p.v.Elem().Type(), 0, len(v))
for _, p := range v {
s = reflect.Append(s, p.v)
panic("unreachable")
}
func word64Val_Get(p word64Val) uint64 {
elem := p.v
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
p.v.Elem().Set(s)
panic("unreachable")
}
// getInterfacePointer returns a pointer that points to the
// interface data of the interface pointed by p.
func (p pointer) getInterfacePointer() pointer {
if p.v.Elem().IsNil() {
return pointer{v: p.v.Elem()}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val{structPointer_field(p, f)}
}
type word64Slice struct {
v reflect.Value
}
func (p word64Slice) Append(x uint64) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
p.v.SetLen(n + 1)
} else {
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int64:
elem.SetInt(int64(int64(x)))
case reflect.Uint64:
elem.SetUint(uint64(x))
case reflect.Float64:
elem.SetFloat(float64(math.Float64frombits(x)))
}
return pointer{v: p.v.Elem().Elem().Elem().Field(0).Addr()} // *interface -> interface -> *struct -> struct
}
func (p pointer) asPointerTo(t reflect.Type) reflect.Value {
// TODO: check that p.v.Type().Elem() == t?
return p.v
func (p word64Slice) Len() int {
return p.v.Len()
}
func atomicLoadUnmarshalInfo(p **unmarshalInfo) *unmarshalInfo {
atomicLock.Lock()
defer atomicLock.Unlock()
return *p
}
func atomicStoreUnmarshalInfo(p **unmarshalInfo, v *unmarshalInfo) {
atomicLock.Lock()
defer atomicLock.Unlock()
*p = v
}
func atomicLoadMarshalInfo(p **marshalInfo) *marshalInfo {
atomicLock.Lock()
defer atomicLock.Unlock()
return *p
}
func atomicStoreMarshalInfo(p **marshalInfo, v *marshalInfo) {
atomicLock.Lock()
defer atomicLock.Unlock()
*p = v
}
func atomicLoadMergeInfo(p **mergeInfo) *mergeInfo {
atomicLock.Lock()
defer atomicLock.Unlock()
return *p
}
func atomicStoreMergeInfo(p **mergeInfo, v *mergeInfo) {
atomicLock.Lock()
defer atomicLock.Unlock()
*p = v
}
func atomicLoadDiscardInfo(p **discardInfo) *discardInfo {
atomicLock.Lock()
defer atomicLock.Unlock()
return *p
}
func atomicStoreDiscardInfo(p **discardInfo, v *discardInfo) {
atomicLock.Lock()
defer atomicLock.Unlock()
*p = v
func (p word64Slice) Index(i int) uint64 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return uint64(elem.Uint())
case reflect.Float64:
return math.Float64bits(float64(elem.Float()))
}
panic("unreachable")
}
var atomicLock sync.Mutex
func structPointer_Word64Slice(p structPointer, f field) word64Slice {
return word64Slice{structPointer_field(p, f)}
}


@@ -29,7 +29,7 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build !purego,!appengine,!js
// +build !appengine,!js
// This file contains the implementation of the proto field accesses using package unsafe.
@@ -37,13 +37,38 @@ package proto
import (
"reflect"
"sync/atomic"
"unsafe"
)
const unsafeAllowed = true
// NOTE: These type_Foo functions would more idiomatically be methods,
// but Go does not allow methods on pointer types, and we must preserve
// some pointer type for the garbage collector. We use these
// funcs with clunky names as our poor approximation to methods.
//
// An alternative would be
// type structPointer struct { p unsafe.Pointer }
// but that does not registerize as well.
// A field identifies a field in a struct, accessible from a pointer.
// A structPointer is a pointer to a struct.
type structPointer unsafe.Pointer
// toStructPointer returns a structPointer equivalent to the given reflect value.
func toStructPointer(v reflect.Value) structPointer {
return structPointer(unsafe.Pointer(v.Pointer()))
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p == nil
}
// Interface returns the struct pointer, assumed to have element type t,
// as an interface value.
func structPointer_Interface(p structPointer, t reflect.Type) interface{} {
return reflect.NewAt(t, unsafe.Pointer(p)).Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by its byte offset from the start of the struct.
type field uintptr
@@ -55,254 +80,191 @@ func toField(f *reflect.StructField) field {
// invalidField is an invalid field identifier.
const invalidField = ^field(0)
// zeroField is a noop when calling pointer.offset.
const zeroField = field(0)
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool {
return f != invalidField
return f != ^field(0)
}
// The pointer type below is for the new table-driven encoder/decoder.
// The implementation here uses unsafe.Pointer to create a generic pointer.
// In pointer_reflect.go we use reflect instead of unsafe to implement
// the same (but slower) interface.
type pointer struct {
p unsafe.Pointer
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return (*[]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// size of pointer
var ptrSize = unsafe.Sizeof(uintptr(0))
// toPointer converts an interface of pointer type to a pointer
// that points to the same target.
func toPointer(i *Message) pointer {
// Super-tricky - read pointer out of data word of interface value.
// Saves ~25ns over the equivalent:
// return valToPointer(reflect.ValueOf(*i))
return pointer{p: (*[2]unsafe.Pointer)(unsafe.Pointer(i))[1]}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return (*[][]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// toAddrPointer converts an interface to a pointer that points to
// the interface data.
func toAddrPointer(i *interface{}, isptr bool) pointer {
// Super-tricky - read or get the address of data word of interface value.
if isptr {
// The interface is of pointer type, thus it is a direct interface.
// The data word is the pointer data itself. We take its address.
return pointer{p: unsafe.Pointer(uintptr(unsafe.Pointer(i)) + ptrSize)}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return (**bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return (*[]bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return (**string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return (*[]string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return (*XXX_InternalExtensions)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return (*map[int32]Extension)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return reflect.NewAt(typ, unsafe.Pointer(uintptr(p)+uintptr(f)))
}
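A minimal sketch of the base-plus-offset field access used above (structPointer_NewAt and friends), with an invented Msg type. The offset comes from unsafe.Offsetof, and the arithmetic is done in a single expression so the pointer stays valid for the garbage collector:

package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

type Msg struct {
	ID   int32
	Name string
}

func main() {
	m := &Msg{ID: 1, Name: "a"}

	// Byte offset of the field, playing the role of the `field` type above.
	off := unsafe.Offsetof(m.Name)

	// base + offset, wrapped with reflect.NewAt as structPointer_NewAt does.
	fp := reflect.NewAt(reflect.TypeOf(""), unsafe.Pointer(uintptr(unsafe.Pointer(m))+off))
	*fp.Interface().(*string) = "updated"
	fmt.Println(m.Name) // updated
}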
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
*(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f))) = q
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return *(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) *structPointerSlice {
return (*structPointerSlice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// A structPointerSlice represents a slice of pointers to structs (themselves submessages or groups).
type structPointerSlice []structPointer
func (v *structPointerSlice) Len() int { return len(*v) }
func (v *structPointerSlice) Index(i int) structPointer { return (*v)[i] }
func (v *structPointerSlice) Append(p structPointer) { *v = append(*v, p) }
// A word32 is the address of a "pointer to 32-bit value" field.
type word32 **uint32
// IsNil reports whether *p is nil.
func word32_IsNil(p word32) bool {
return *p == nil
}
// Set sets *v to point at a newly allocated word set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
// The interface is not of pointer type. The data word is the pointer
// to the data.
return pointer{p: (*[2]unsafe.Pointer)(unsafe.Pointer(i))[1]}
o.uint32s[0] = x
*p = &o.uint32s[0]
o.uint32s = o.uint32s[1:]
}
// valToPointer converts v to a pointer. v must be of pointer type.
func valToPointer(v reflect.Value) pointer {
return pointer{p: unsafe.Pointer(v.Pointer())}
// Get gets the value pointed at by *v.
func word32_Get(p word32) uint32 {
return **p
}
// offset converts from a pointer to a structure to a pointer to
// one of its fields.
func (p pointer) offset(f field) pointer {
// For safety, we should panic if !f.IsValid, however calling panic causes
// this to no longer be inlineable, which is a serious performance cost.
/*
if !f.IsValid() {
panic("invalid field")
}
*/
return pointer{p: unsafe.Pointer(uintptr(p.p) + uintptr(f))}
// Word32 returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32((**uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
func (p pointer) isNil() bool {
return p.p == nil
// A word32Val is the address of a 32-bit value field.
type word32Val *uint32
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
*p = x
}
func (p pointer) toInt64() *int64 {
return (*int64)(p.p)
}
func (p pointer) toInt64Ptr() **int64 {
return (**int64)(p.p)
}
func (p pointer) toInt64Slice() *[]int64 {
return (*[]int64)(p.p)
}
func (p pointer) toInt32() *int32 {
return (*int32)(p.p)
// Get gets the value pointed at by p.
func word32Val_Get(p word32Val) uint32 {
return *p
}
// See pointer_reflect.go for why toInt32Ptr/Slice doesn't exist.
/*
func (p pointer) toInt32Ptr() **int32 {
return (**int32)(p.p)
// Word32Val returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Slice is a slice of 32-bit values.
type word32Slice []uint32
func (v *word32Slice) Append(x uint32) { *v = append(*v, x) }
func (v *word32Slice) Len() int { return len(*v) }
func (v *word32Slice) Index(i int) uint32 { return (*v)[i] }
// Word32Slice returns the address of a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) *word32Slice {
return (*word32Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// word64 is like word32 but for 64-bit values.
type word64 **uint64
func word64_Set(p word64, o *Buffer, x uint64) {
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
func (p pointer) toInt32Slice() *[]int32 {
return (*[]int32)(p.p)
}
*/
func (p pointer) getInt32Ptr() *int32 {
return *(**int32)(p.p)
}
func (p pointer) setInt32Ptr(v int32) {
*(**int32)(p.p) = &v
o.uint64s[0] = x
*p = &o.uint64s[0]
o.uint64s = o.uint64s[1:]
}
// getInt32Slice loads a []int32 from p.
// The value returned is aliased with the original slice.
// This behavior differs from the implementation in pointer_reflect.go.
func (p pointer) getInt32Slice() []int32 {
return *(*[]int32)(p.p)
func word64_IsNil(p word64) bool {
return *p == nil
}
// setInt32Slice stores a []int32 to p.
// The value set is aliased with the input slice.
// This behavior differs from the implementation in pointer_reflect.go.
func (p pointer) setInt32Slice(v []int32) {
*(*[]int32)(p.p) = v
func word64_Get(p word64) uint64 {
return **p
}
// TODO: Can we get rid of appendInt32Slice and use setInt32Slice instead?
func (p pointer) appendInt32Slice(v int32) {
s := (*[]int32)(p.p)
*s = append(*s, v)
func structPointer_Word64(p structPointer, f field) word64 {
return word64((**uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
func (p pointer) toUint64() *uint64 {
return (*uint64)(p.p)
}
func (p pointer) toUint64Ptr() **uint64 {
return (**uint64)(p.p)
}
func (p pointer) toUint64Slice() *[]uint64 {
return (*[]uint64)(p.p)
}
func (p pointer) toUint32() *uint32 {
return (*uint32)(p.p)
}
func (p pointer) toUint32Ptr() **uint32 {
return (**uint32)(p.p)
}
func (p pointer) toUint32Slice() *[]uint32 {
return (*[]uint32)(p.p)
}
func (p pointer) toBool() *bool {
return (*bool)(p.p)
}
func (p pointer) toBoolPtr() **bool {
return (**bool)(p.p)
}
func (p pointer) toBoolSlice() *[]bool {
return (*[]bool)(p.p)
}
func (p pointer) toFloat64() *float64 {
return (*float64)(p.p)
}
func (p pointer) toFloat64Ptr() **float64 {
return (**float64)(p.p)
}
func (p pointer) toFloat64Slice() *[]float64 {
return (*[]float64)(p.p)
}
func (p pointer) toFloat32() *float32 {
return (*float32)(p.p)
}
func (p pointer) toFloat32Ptr() **float32 {
return (**float32)(p.p)
}
func (p pointer) toFloat32Slice() *[]float32 {
return (*[]float32)(p.p)
}
func (p pointer) toString() *string {
return (*string)(p.p)
}
func (p pointer) toStringPtr() **string {
return (**string)(p.p)
}
func (p pointer) toStringSlice() *[]string {
return (*[]string)(p.p)
}
func (p pointer) toBytes() *[]byte {
return (*[]byte)(p.p)
}
func (p pointer) toBytesSlice() *[][]byte {
return (*[][]byte)(p.p)
}
func (p pointer) toExtensions() *XXX_InternalExtensions {
return (*XXX_InternalExtensions)(p.p)
}
func (p pointer) toOldExtensions() *map[int32]Extension {
return (*map[int32]Extension)(p.p)
// word64Val is like word32Val but for 64-bit values.
type word64Val *uint64
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
*p = x
}
// getPointerSlice loads []*T from p as a []pointer.
// The value returned is aliased with the original slice.
// This behavior differs from the implementation in pointer_reflect.go.
func (p pointer) getPointerSlice() []pointer {
// Super-tricky - p should point to a []*T where T is a
// message type. We load it as []pointer.
return *(*[]pointer)(p.p)
func word64Val_Get(p word64Val) uint64 {
return *p
}
// setPointerSlice stores []pointer into p as a []*T.
// The value set is aliased with the input slice.
// This behavior differs from the implementation in pointer_reflect.go.
func (p pointer) setPointerSlice(v []pointer) {
// Super-tricky - p should point to a []*T where T is a
// message type. We store it as []pointer.
*(*[]pointer)(p.p) = v
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// getPointer loads the pointer at p and returns it.
func (p pointer) getPointer() pointer {
return pointer{p: *(*unsafe.Pointer)(p.p)}
}
// word64Slice is like word32Slice but for 64-bit values.
type word64Slice []uint64
// setPointer stores the pointer q at p.
func (p pointer) setPointer(q pointer) {
*(*unsafe.Pointer)(p.p) = q.p
}
func (v *word64Slice) Append(x uint64) { *v = append(*v, x) }
func (v *word64Slice) Len() int { return len(*v) }
func (v *word64Slice) Index(i int) uint64 { return (*v)[i] }
// append q to the slice pointed to by p.
func (p pointer) appendPointer(q pointer) {
s := (*[]unsafe.Pointer)(p.p)
*s = append(*s, q.p)
}
// getInterfacePointer returns a pointer that points to the
// interface data of the interface pointed by p.
func (p pointer) getInterfacePointer() pointer {
// Super-tricky - read pointer out of data word of interface value.
return pointer{p: (*(*[2]unsafe.Pointer)(p.p))[1]}
}
// asPointerTo returns a reflect.Value that is a pointer to an
// object of type t stored at p.
func (p pointer) asPointerTo(t reflect.Type) reflect.Value {
return reflect.NewAt(t, p.p)
}
func atomicLoadUnmarshalInfo(p **unmarshalInfo) *unmarshalInfo {
return (*unmarshalInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p))))
}
func atomicStoreUnmarshalInfo(p **unmarshalInfo, v *unmarshalInfo) {
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v))
}
func atomicLoadMarshalInfo(p **marshalInfo) *marshalInfo {
return (*marshalInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p))))
}
func atomicStoreMarshalInfo(p **marshalInfo, v *marshalInfo) {
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v))
}
func atomicLoadMergeInfo(p **mergeInfo) *mergeInfo {
return (*mergeInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p))))
}
func atomicStoreMergeInfo(p **mergeInfo, v *mergeInfo) {
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v))
}
func atomicLoadDiscardInfo(p **discardInfo) *discardInfo {
return (*discardInfo)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(p))))
}
func atomicStoreDiscardInfo(p **discardInfo, v *discardInfo) {
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(p)), unsafe.Pointer(v))
func structPointer_Word64Slice(p structPointer, f field) *word64Slice {
return (*word64Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
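The atomicLoad*/atomicStore* helpers above cache lazily computed tables behind a plain pointer without a lock. A small sketch of the same pattern with an invented info type:

package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

type info struct{ n int }

var cached *info

// storeInfo publishes v atomically by treating &cached as an unsafe.Pointer.
func storeInfo(v *info) {
	atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&cached)), unsafe.Pointer(v))
}

// loadInfo reads the cached pointer atomically, or nil if nothing was stored.
func loadInfo() *info {
	return (*info)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&cached))))
}

func main() {
	storeInfo(&info{n: 42})
	fmt.Println(loadInfo().n) // 42
}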


@@ -58,6 +58,42 @@ const (
WireFixed32 = 5
)
const startSize = 10 // initial slice/string sizes
// Encoders are defined in encode.go
// An encoder outputs the full representation of a field, including its
// tag and encoder type.
type encoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueEncoder encodes a single integer in a particular encoding.
type valueEncoder func(o *Buffer, x uint64) error
// Sizers are defined in encode.go
// A sizer returns the encoded size of a field, including its tag and encoder
// type.
type sizer func(prop *Properties, base structPointer) int
// A valueSizer returns the encoded size of a single integer in a particular
// encoding.
type valueSizer func(x uint64) int
// Decoders are defined in decode.go
// A decoder creates a value from its wire representation.
// Unrecognized subelements are saved in unrec.
type decoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueDecoder decodes a single integer in a particular encoding.
type valueDecoder func(o *Buffer) (x uint64, err error)
// A oneofMarshaler does the marshaling for all oneof fields in a message.
type oneofMarshaler func(Message, *Buffer) error
// A oneofUnmarshaler does the unmarshaling for a oneof field in a message.
type oneofUnmarshaler func(Message, int, int, *Buffer) (bool, error)
// A oneofSizer does the sizing for all oneof fields in a message.
type oneofSizer func(Message) int
// tagMap is an optimization over map[int]int for typical protocol buffer
// use-cases. Encoded protocol buffers are often in tag order with small tag
// numbers.
@@ -104,6 +140,13 @@ type StructProperties struct {
decoderTags tagMap // map from proto tag to struct field number
decoderOrigNames map[string]int // map from original name to struct field number
order []int // list of struct field numbers in tag order
unrecField field // field id of the XXX_unrecognized []byte field
extendable bool // is this an extendable proto
oneofMarshaler oneofMarshaler
oneofUnmarshaler oneofUnmarshaler
oneofSizer oneofSizer
stype reflect.Type
// OneofTypes contains information about the oneof fields in this message.
// It is keyed by the original name of a field.
@@ -144,19 +187,36 @@ type Properties struct {
Default string // default value
HasDefault bool // whether an explicit default was provided
def_uint64 uint64
stype reflect.Type // set for struct types only
sprop *StructProperties // set for struct types only
enc encoder
valEnc valueEncoder // set for bool and numeric types only
field field
tagcode []byte // encoding of EncodeVarint((Tag<<3)|WireType)
tagbuf [8]byte
stype reflect.Type // set for struct types only
sprop *StructProperties // set for struct types only
isMarshaler bool
isUnmarshaler bool
mtype reflect.Type // set for map types only
mkeyprop *Properties // set for map types only
mvalprop *Properties // set for map types only
size sizer
valSize valueSizer // set for bool and numeric types only
dec decoder
valDec valueDecoder // set for bool and numeric types only
// If this is a packable field, this will be the decoder for the packed version of the field.
packedDec decoder
}
// String formats the properties in the protobuf struct field tag style.
func (p *Properties) String() string {
s := p.Wire
s += ","
s = ","
s += strconv.Itoa(p.Tag)
if p.Required {
s += ",req"
@@ -202,14 +262,29 @@ func (p *Properties) Parse(s string) {
switch p.Wire {
case "varint":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeVarint
p.valDec = (*Buffer).DecodeVarint
p.valSize = sizeVarint
case "fixed32":
p.WireType = WireFixed32
p.valEnc = (*Buffer).EncodeFixed32
p.valDec = (*Buffer).DecodeFixed32
p.valSize = sizeFixed32
case "fixed64":
p.WireType = WireFixed64
p.valEnc = (*Buffer).EncodeFixed64
p.valDec = (*Buffer).DecodeFixed64
p.valSize = sizeFixed64
case "zigzag32":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag32
p.valDec = (*Buffer).DecodeZigzag32
p.valSize = sizeZigzag32
case "zigzag64":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag64
p.valDec = (*Buffer).DecodeZigzag64
p.valSize = sizeZigzag64
case "bytes", "group":
p.WireType = WireBytes
// no numeric converter for non-numeric types
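The zigzag32/zigzag64 encodings selected above map signed integers onto unsigned ones so that values of small magnitude, negative or positive, encode to short varints. A worked sketch of the 64-bit mapping:

package main

import "fmt"

// zigzag64 encodes n as (n << 1) ^ (n >> 63), so -1 -> 1, 1 -> 2, -2 -> 3, ...
func zigzag64(n int64) uint64 {
	return uint64(n<<1) ^ uint64(n>>63)
}

func main() {
	for _, n := range []int64{0, -1, 1, -2, 2147483647, -2147483648} {
		fmt.Printf("%d -> %d\n", n, zigzag64(n))
	}
	// 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3,
	// 2147483647 -> 4294967294, -2147483648 -> 4294967295
}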
@@ -224,7 +299,6 @@ func (p *Properties) Parse(s string) {
return
}
outer:
for i := 2; i < len(fields); i++ {
f := fields[i]
switch {
@@ -252,28 +326,229 @@ outer:
if i+1 < len(fields) {
// Commas aren't escaped, and def is always last.
p.Default += "," + strings.Join(fields[i+1:], ",")
break outer
break
}
}
}
}
func logNoSliceEnc(t1, t2 reflect.Type) {
fmt.Fprintf(os.Stderr, "proto: no slice oenc for %T = []%T\n", t1, t2)
}
var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem()
// setFieldProps initializes the field properties for submessages and maps.
func (p *Properties) setFieldProps(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
// Initialize the fields for encoding and decoding.
func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
p.enc = nil
p.dec = nil
p.size = nil
switch t1 := typ; t1.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)
// proto3 scalar types
case reflect.Bool:
p.enc = (*Buffer).enc_proto3_bool
p.dec = (*Buffer).dec_proto3_bool
p.size = size_proto3_bool
case reflect.Int32:
p.enc = (*Buffer).enc_proto3_int32
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_proto3_uint32
p.dec = (*Buffer).dec_proto3_int32 // can reuse
p.size = size_proto3_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_proto3_int64
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.Float32:
p.enc = (*Buffer).enc_proto3_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_proto3_int64 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.String:
p.enc = (*Buffer).enc_proto3_string
p.dec = (*Buffer).dec_proto3_string
p.size = size_proto3_string
case reflect.Ptr:
if t1.Elem().Kind() == reflect.Struct {
switch t2 := t1.Elem(); t2.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no encoder function for %v -> %v\n", t1, t2)
break
case reflect.Bool:
p.enc = (*Buffer).enc_bool
p.dec = (*Buffer).dec_bool
p.size = size_bool
case reflect.Int32:
p.enc = (*Buffer).enc_int32
p.dec = (*Buffer).dec_int32
p.size = size_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_uint32
p.dec = (*Buffer).dec_int32 // can reuse
p.size = size_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_int64
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.Float32:
p.enc = (*Buffer).enc_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_int32
p.size = size_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_int64 // can just treat them as bits
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.String:
p.enc = (*Buffer).enc_string
p.dec = (*Buffer).dec_string
p.size = size_string
case reflect.Struct:
p.stype = t1.Elem()
p.isMarshaler = isMarshaler(t1)
p.isUnmarshaler = isUnmarshaler(t1)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_struct_message
p.dec = (*Buffer).dec_struct_message
p.size = size_struct_message
} else {
p.enc = (*Buffer).enc_struct_group
p.dec = (*Buffer).dec_struct_group
p.size = size_struct_group
}
}
case reflect.Slice:
if t2 := t1.Elem(); t2.Kind() == reflect.Ptr && t2.Elem().Kind() == reflect.Struct {
p.stype = t2.Elem()
switch t2 := t1.Elem(); t2.Kind() {
default:
logNoSliceEnc(t1, t2)
break
case reflect.Bool:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_bool
p.size = size_slice_packed_bool
} else {
p.enc = (*Buffer).enc_slice_bool
p.size = size_slice_bool
}
p.dec = (*Buffer).dec_slice_bool
p.packedDec = (*Buffer).dec_slice_packed_bool
case reflect.Int32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Uint32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Int64, reflect.Uint64:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
case reflect.Uint8:
p.dec = (*Buffer).dec_slice_byte
if p.proto3 {
p.enc = (*Buffer).enc_proto3_slice_byte
p.size = size_proto3_slice_byte
} else {
p.enc = (*Buffer).enc_slice_byte
p.size = size_slice_byte
}
case reflect.Float32, reflect.Float64:
switch t2.Bits() {
case 32:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case 64:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
default:
logNoSliceEnc(t1, t2)
break
}
case reflect.String:
p.enc = (*Buffer).enc_slice_string
p.dec = (*Buffer).dec_slice_string
p.size = size_slice_string
case reflect.Ptr:
switch t3 := t2.Elem(); t3.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no ptr oenc for %T -> %T -> %T\n", t1, t2, t3)
break
case reflect.Struct:
p.stype = t2.Elem()
p.isMarshaler = isMarshaler(t2)
p.isUnmarshaler = isUnmarshaler(t2)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_slice_struct_message
p.dec = (*Buffer).dec_slice_struct_message
p.size = size_slice_struct_message
} else {
p.enc = (*Buffer).enc_slice_struct_group
p.dec = (*Buffer).dec_slice_struct_group
p.size = size_slice_struct_group
}
}
case reflect.Slice:
switch t2.Elem().Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no slice elem oenc for %T -> %T -> %T\n", t1, t2, t2.Elem())
break
case reflect.Uint8:
p.enc = (*Buffer).enc_slice_slice_byte
p.dec = (*Buffer).dec_slice_slice_byte
p.size = size_slice_slice_byte
}
}
case reflect.Map:
p.enc = (*Buffer).enc_new_map
p.dec = (*Buffer).dec_new_map
p.size = size_new_map
p.mtype = t1
p.mkeyprop = &Properties{}
p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp)
@@ -287,6 +562,20 @@ func (p *Properties) setFieldProps(typ reflect.Type, f *reflect.StructField, loc
p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
}
// precalculate tag code
wire := p.WireType
if p.Packed {
wire = WireBytes
}
x := uint32(p.Tag)<<3 | uint32(wire)
i := 0
for i = 0; x > 127; i++ {
p.tagbuf[i] = 0x80 | uint8(x&0x7F)
x >>= 7
}
p.tagbuf[i] = uint8(x)
p.tagcode = p.tagbuf[0 : i+1]
if p.stype != nil {
if lockGetProp {
p.sprop = GetProperties(p.stype)
@@ -297,9 +586,32 @@ func (p *Properties) setFieldProps(typ reflect.Type, f *reflect.StructField, loc
}
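The tag-code precalculation above encodes uint32(Tag)<<3 | wireType as a varint into tagbuf. A worked sketch with made-up tag and wire values:

package main

import "fmt"

// tagCode builds the field key (tag<<3 | wire) and writes it as a varint,
// mirroring the tagbuf loop above.
func tagCode(tag, wire uint32) []byte {
	x := tag<<3 | wire
	var buf []byte
	for x > 127 {
		buf = append(buf, 0x80|uint8(x&0x7F))
		x >>= 7
	}
	return append(buf, uint8(x))
}

func main() {
	fmt.Printf("% x\n", tagCode(3, 0))  // 18    (field 3, varint)
	fmt.Printf("% x\n", tagCode(16, 2)) // 82 01 (field 16, length-delimited)
}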
var (
marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem()
unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
)
// isMarshaler reports whether type t implements Marshaler.
func isMarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isMarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isMarshaler")
}
return t.Implements(marshalerType)
}
// isUnmarshaler reports whether type t implements Unmarshaler.
func isUnmarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isUnmarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isUnmarshaler")
}
return t.Implements(unmarshalerType)
}
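A minimal illustration of the pointer-receiver Implements check performed by isMarshaler and isUnmarshaler above; Doer and thing are invented for the example.

package main

import (
	"fmt"
	"reflect"
)

type Doer interface{ Do() }

type thing struct{}

func (*thing) Do() {} // pointer receiver, so only *thing implements Doer

var doerType = reflect.TypeOf((*Doer)(nil)).Elem()

func main() {
	fmt.Println(reflect.TypeOf(&thing{}).Implements(doerType)) // true
	fmt.Println(reflect.TypeOf(thing{}).Implements(doerType))  // false
}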
// Init populates the properties from a protocol buffer struct tag.
func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) {
p.init(typ, name, tag, f, true)
@@ -309,11 +621,14 @@ func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructF
// "bytes,49,opt,def=hello!"
p.Name = name
p.OrigName = name
if f != nil {
p.field = toField(f)
}
if tag == "" {
return
}
p.Parse(tag)
p.setFieldProps(typ, f, lockGetProp)
p.setEncAndDec(typ, f, lockGetProp)
}
var (
@@ -363,6 +678,9 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
propertiesMap[t] = prop
// build properties
prop.extendable = reflect.PtrTo(t).Implements(extendableProtoType) ||
reflect.PtrTo(t).Implements(extendableProtoV1Type)
prop.unrecField = invalidField
prop.Prop = make([]*Properties, t.NumField())
prop.order = make([]int, t.NumField())
@@ -372,6 +690,17 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
name := f.Name
p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)
if f.Name == "XXX_InternalExtensions" { // special case
p.enc = (*Buffer).enc_exts
p.dec = nil // not needed
p.size = size_exts
} else if f.Name == "XXX_extensions" { // special case
p.enc = (*Buffer).enc_map
p.dec = nil // not needed
p.size = size_map
} else if f.Name == "XXX_unrecognized" { // special case
prop.unrecField = toField(&f)
}
oneof := f.Tag.Get("protobuf_oneof") // special case
if oneof != "" {
// Oneof fields don't use the traditional protobuf tag.
@@ -386,6 +715,9 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
}
print("\n")
}
if p.enc == nil && !strings.HasPrefix(f.Name, "XXX_") && oneof == "" {
fmt.Fprintln(os.Stderr, "proto: no encoder for", f.Name, f.Type.String(), "[GetProperties]")
}
}
// Re-order prop.order.
@@ -396,7 +728,8 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
}
if om, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok {
var oots []interface{}
_, _, _, oots = om.XXX_OneofFuncs()
prop.oneofMarshaler, prop.oneofUnmarshaler, prop.oneofSizer, oots = om.XXX_OneofFuncs()
prop.stype = t
// Interpret oneof metadata.
prop.OneofTypes = make(map[string]*OneofProperties)
@@ -446,6 +779,30 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
return prop
}
// Return the Properties object for the x[0]'th field of the structure.
func propByIndex(t reflect.Type, x []int) *Properties {
if len(x) != 1 {
fmt.Fprintf(os.Stderr, "proto: field index dimension %d (not 1) for type %s\n", len(x), t)
return nil
}
prop := GetProperties(t)
return prop.Prop[x[0]]
}
// Get the address and type of a pointer to a struct from an interface.
func getbase(pb Message) (t reflect.Type, b structPointer, err error) {
if pb == nil {
err = ErrNil
return
}
// get the reflect type of the pointer to the struct.
t = reflect.TypeOf(pb)
// get the address of the struct.
value := reflect.ValueOf(pb)
b = toStructPointer(value)
return
}
// A global registry of enum types.
// The generated code will register the generated maps by calling RegisterEnum.
@@ -469,42 +826,20 @@ func EnumValueMap(enumType string) map[string]int32 {
// A registry of all linked message types.
// The string is a fully-qualified proto name ("pkg.Message").
var (
protoTypedNils = make(map[string]Message) // a map from proto names to typed nil pointers
protoMapTypes = make(map[string]reflect.Type) // a map from proto names to map types
revProtoTypes = make(map[reflect.Type]string)
protoTypes = make(map[string]reflect.Type)
revProtoTypes = make(map[reflect.Type]string)
)
// RegisterType is called from generated code and maps from the fully qualified
// proto name to the type (pointer to struct) of the protocol buffer.
func RegisterType(x Message, name string) {
if _, ok := protoTypedNils[name]; ok {
if _, ok := protoTypes[name]; ok {
// TODO: Some day, make this a panic.
log.Printf("proto: duplicate proto type registered: %s", name)
return
}
t := reflect.TypeOf(x)
if v := reflect.ValueOf(x); v.Kind() == reflect.Ptr && v.Pointer() == 0 {
// Generated code always calls RegisterType with nil x.
// This check is just for extra safety.
protoTypedNils[name] = x
} else {
protoTypedNils[name] = reflect.Zero(t).Interface().(Message)
}
revProtoTypes[t] = name
}
// RegisterMapType is called from generated code and maps from the fully qualified
// proto name to the native map type of the proto map definition.
func RegisterMapType(x interface{}, name string) {
if reflect.TypeOf(x).Kind() != reflect.Map {
panic(fmt.Sprintf("RegisterMapType(%T, %q); want map", x, name))
}
if _, ok := protoMapTypes[name]; ok {
log.Printf("proto: duplicate proto type registered: %s", name)
return
}
t := reflect.TypeOf(x)
protoMapTypes[name] = t
protoTypes[name] = t
revProtoTypes[t] = name
}
@@ -520,14 +855,7 @@ func MessageName(x Message) string {
}
// MessageType returns the message type (pointer to struct) for a named message.
// The type is not guaranteed to implement proto.Message if the name refers to a
// map entry.
func MessageType(name string) reflect.Type {
if t, ok := protoTypedNils[name]; ok {
return reflect.TypeOf(t)
}
return protoMapTypes[name]
}
func MessageType(name string) reflect.Type { return protoTypes[name] }
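A hedged usage sketch of the registry above: generated code registers each message under its fully qualified proto name, and MessageType looks the Go type back up. The well-known Duration type is used here only as an example of a linked-in generated package:

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	_ "github.com/golang/protobuf/ptypes/duration" // registers "google.protobuf.Duration"
)

func main() {
	// Returns the Go type registered for the name, or nil if no generated
	// package registered it.
	t := proto.MessageType("google.protobuf.Duration")
	fmt.Println(t) // e.g. *duration.Duration
}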
// A registry of all linked proto files.
var (

File diff suppressed because it is too large


@@ -1,654 +0,0 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"fmt"
"reflect"
"strings"
"sync"
"sync/atomic"
)
// Merge merges the src message into dst.
// This assumes that dst and src are of the same type and are non-nil.
func (a *InternalMessageInfo) Merge(dst, src Message) {
mi := atomicLoadMergeInfo(&a.merge)
if mi == nil {
mi = getMergeInfo(reflect.TypeOf(dst).Elem())
atomicStoreMergeInfo(&a.merge, mi)
}
mi.merge(toPointer(&dst), toPointer(&src))
}
type mergeInfo struct {
typ reflect.Type
initialized int32 // 0: only typ is valid, 1: everything is valid
lock sync.Mutex
fields []mergeFieldInfo
unrecognized field // Offset of XXX_unrecognized
}
type mergeFieldInfo struct {
field field // Offset of field, guaranteed to be valid
// isPointer reports whether the value in the field is a pointer.
// This is true for the following situations:
// * Pointer to struct
// * Pointer to basic type (proto2 only)
// * Slice (first value in slice header is a pointer)
// * String (first value in string header is a pointer)
isPointer bool
// basicWidth reports the width of the field assuming that it is directly
// embedded in the struct (as is the case for basic types in proto3).
// The possible values are:
// 0: invalid
// 1: bool
// 4: int32, uint32, float32
// 8: int64, uint64, float64
basicWidth int
// Where dst and src are pointers to the types being merged.
merge func(dst, src pointer)
}
var (
mergeInfoMap = map[reflect.Type]*mergeInfo{}
mergeInfoLock sync.Mutex
)
func getMergeInfo(t reflect.Type) *mergeInfo {
mergeInfoLock.Lock()
defer mergeInfoLock.Unlock()
mi := mergeInfoMap[t]
if mi == nil {
mi = &mergeInfo{typ: t}
mergeInfoMap[t] = mi
}
return mi
}
// merge merges src into dst assuming they are both of type *mi.typ.
func (mi *mergeInfo) merge(dst, src pointer) {
if dst.isNil() {
panic("proto: nil destination")
}
if src.isNil() {
return // Nothing to do.
}
if atomic.LoadInt32(&mi.initialized) == 0 {
mi.computeMergeInfo()
}
for _, fi := range mi.fields {
sfp := src.offset(fi.field)
// As an optimization, we can avoid the merge function call cost
// if we know for sure that the source will have no effect
// by checking if it is the zero value.
if unsafeAllowed {
if fi.isPointer && sfp.getPointer().isNil() { // Could be slice or string
continue
}
if fi.basicWidth > 0 {
switch {
case fi.basicWidth == 1 && !*sfp.toBool():
continue
case fi.basicWidth == 4 && *sfp.toUint32() == 0:
continue
case fi.basicWidth == 8 && *sfp.toUint64() == 0:
continue
}
}
}
dfp := dst.offset(fi.field)
fi.merge(dfp, sfp)
}
// TODO: Make this faster?
out := dst.asPointerTo(mi.typ).Elem()
in := src.asPointerTo(mi.typ).Elem()
if emIn, err := extendable(in.Addr().Interface()); err == nil {
emOut, _ := extendable(out.Addr().Interface())
mIn, muIn := emIn.extensionsRead()
if mIn != nil {
mOut := emOut.extensionsWrite()
muIn.Lock()
mergeExtension(mOut, mIn)
muIn.Unlock()
}
}
if mi.unrecognized.IsValid() {
if b := *src.offset(mi.unrecognized).toBytes(); len(b) > 0 {
*dst.offset(mi.unrecognized).toBytes() = append([]byte(nil), b...)
}
}
}
func (mi *mergeInfo) computeMergeInfo() {
mi.lock.Lock()
defer mi.lock.Unlock()
if mi.initialized != 0 {
return
}
t := mi.typ
n := t.NumField()
props := GetProperties(t)
for i := 0; i < n; i++ {
f := t.Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
mfi := mergeFieldInfo{field: toField(&f)}
tf := f.Type
// As an optimization, we can avoid the merge function call cost
// if we know for sure that the source will have no effect
// by checking if it is the zero value.
if unsafeAllowed {
switch tf.Kind() {
case reflect.Ptr, reflect.Slice, reflect.String:
// As a special case, we assume slices and strings are pointers
// since we know that the first field in the SliceHeader or
// StringHeader is a data pointer.
mfi.isPointer = true
case reflect.Bool:
mfi.basicWidth = 1
case reflect.Int32, reflect.Uint32, reflect.Float32:
mfi.basicWidth = 4
case reflect.Int64, reflect.Uint64, reflect.Float64:
mfi.basicWidth = 8
}
}
// Unwrap tf to get at its most basic type.
var isPointer, isSlice bool
if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
isSlice = true
tf = tf.Elem()
}
if tf.Kind() == reflect.Ptr {
isPointer = true
tf = tf.Elem()
}
if isPointer && isSlice && tf.Kind() != reflect.Struct {
panic("both pointer and slice for basic type in " + tf.Name())
}
switch tf.Kind() {
case reflect.Int32:
switch {
case isSlice: // E.g., []int32
mfi.merge = func(dst, src pointer) {
// NOTE: toInt32Slice is not defined (see pointer_reflect.go).
/*
sfsp := src.toInt32Slice()
if *sfsp != nil {
dfsp := dst.toInt32Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []int64{}
}
}
*/
sfs := src.getInt32Slice()
if sfs != nil {
dfs := dst.getInt32Slice()
dfs = append(dfs, sfs...)
if dfs == nil {
dfs = []int32{}
}
dst.setInt32Slice(dfs)
}
}
case isPointer: // E.g., *int32
mfi.merge = func(dst, src pointer) {
// NOTE: toInt32Ptr is not defined (see pointer_reflect.go).
/*
sfpp := src.toInt32Ptr()
if *sfpp != nil {
dfpp := dst.toInt32Ptr()
if *dfpp == nil {
*dfpp = Int32(**sfpp)
} else {
**dfpp = **sfpp
}
}
*/
sfp := src.getInt32Ptr()
if sfp != nil {
dfp := dst.getInt32Ptr()
if dfp == nil {
dst.setInt32Ptr(*sfp)
} else {
*dfp = *sfp
}
}
}
default: // E.g., int32
mfi.merge = func(dst, src pointer) {
if v := *src.toInt32(); v != 0 {
*dst.toInt32() = v
}
}
}
case reflect.Int64:
switch {
case isSlice: // E.g., []int64
mfi.merge = func(dst, src pointer) {
sfsp := src.toInt64Slice()
if *sfsp != nil {
dfsp := dst.toInt64Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []int64{}
}
}
}
case isPointer: // E.g., *int64
mfi.merge = func(dst, src pointer) {
sfpp := src.toInt64Ptr()
if *sfpp != nil {
dfpp := dst.toInt64Ptr()
if *dfpp == nil {
*dfpp = Int64(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., int64
mfi.merge = func(dst, src pointer) {
if v := *src.toInt64(); v != 0 {
*dst.toInt64() = v
}
}
}
case reflect.Uint32:
switch {
case isSlice: // E.g., []uint32
mfi.merge = func(dst, src pointer) {
sfsp := src.toUint32Slice()
if *sfsp != nil {
dfsp := dst.toUint32Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []uint32{}
}
}
}
case isPointer: // E.g., *uint32
mfi.merge = func(dst, src pointer) {
sfpp := src.toUint32Ptr()
if *sfpp != nil {
dfpp := dst.toUint32Ptr()
if *dfpp == nil {
*dfpp = Uint32(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., uint32
mfi.merge = func(dst, src pointer) {
if v := *src.toUint32(); v != 0 {
*dst.toUint32() = v
}
}
}
case reflect.Uint64:
switch {
case isSlice: // E.g., []uint64
mfi.merge = func(dst, src pointer) {
sfsp := src.toUint64Slice()
if *sfsp != nil {
dfsp := dst.toUint64Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []uint64{}
}
}
}
case isPointer: // E.g., *uint64
mfi.merge = func(dst, src pointer) {
sfpp := src.toUint64Ptr()
if *sfpp != nil {
dfpp := dst.toUint64Ptr()
if *dfpp == nil {
*dfpp = Uint64(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., uint64
mfi.merge = func(dst, src pointer) {
if v := *src.toUint64(); v != 0 {
*dst.toUint64() = v
}
}
}
case reflect.Float32:
switch {
case isSlice: // E.g., []float32
mfi.merge = func(dst, src pointer) {
sfsp := src.toFloat32Slice()
if *sfsp != nil {
dfsp := dst.toFloat32Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []float32{}
}
}
}
case isPointer: // E.g., *float32
mfi.merge = func(dst, src pointer) {
sfpp := src.toFloat32Ptr()
if *sfpp != nil {
dfpp := dst.toFloat32Ptr()
if *dfpp == nil {
*dfpp = Float32(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., float32
mfi.merge = func(dst, src pointer) {
if v := *src.toFloat32(); v != 0 {
*dst.toFloat32() = v
}
}
}
case reflect.Float64:
switch {
case isSlice: // E.g., []float64
mfi.merge = func(dst, src pointer) {
sfsp := src.toFloat64Slice()
if *sfsp != nil {
dfsp := dst.toFloat64Slice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []float64{}
}
}
}
case isPointer: // E.g., *float64
mfi.merge = func(dst, src pointer) {
sfpp := src.toFloat64Ptr()
if *sfpp != nil {
dfpp := dst.toFloat64Ptr()
if *dfpp == nil {
*dfpp = Float64(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., float64
mfi.merge = func(dst, src pointer) {
if v := *src.toFloat64(); v != 0 {
*dst.toFloat64() = v
}
}
}
case reflect.Bool:
switch {
case isSlice: // E.g., []bool
mfi.merge = func(dst, src pointer) {
sfsp := src.toBoolSlice()
if *sfsp != nil {
dfsp := dst.toBoolSlice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []bool{}
}
}
}
case isPointer: // E.g., *bool
mfi.merge = func(dst, src pointer) {
sfpp := src.toBoolPtr()
if *sfpp != nil {
dfpp := dst.toBoolPtr()
if *dfpp == nil {
*dfpp = Bool(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., bool
mfi.merge = func(dst, src pointer) {
if v := *src.toBool(); v {
*dst.toBool() = v
}
}
}
case reflect.String:
switch {
case isSlice: // E.g., []string
mfi.merge = func(dst, src pointer) {
sfsp := src.toStringSlice()
if *sfsp != nil {
dfsp := dst.toStringSlice()
*dfsp = append(*dfsp, *sfsp...)
if *dfsp == nil {
*dfsp = []string{}
}
}
}
case isPointer: // E.g., *string
mfi.merge = func(dst, src pointer) {
sfpp := src.toStringPtr()
if *sfpp != nil {
dfpp := dst.toStringPtr()
if *dfpp == nil {
*dfpp = String(**sfpp)
} else {
**dfpp = **sfpp
}
}
}
default: // E.g., string
mfi.merge = func(dst, src pointer) {
if v := *src.toString(); v != "" {
*dst.toString() = v
}
}
}
case reflect.Slice:
isProto3 := props.Prop[i].proto3
switch {
case isPointer:
panic("bad pointer in byte slice case in " + tf.Name())
case tf.Elem().Kind() != reflect.Uint8:
panic("bad element kind in byte slice case in " + tf.Name())
case isSlice: // E.g., [][]byte
mfi.merge = func(dst, src pointer) {
sbsp := src.toBytesSlice()
if *sbsp != nil {
dbsp := dst.toBytesSlice()
for _, sb := range *sbsp {
if sb == nil {
*dbsp = append(*dbsp, nil)
} else {
*dbsp = append(*dbsp, append([]byte{}, sb...))
}
}
if *dbsp == nil {
*dbsp = [][]byte{}
}
}
}
default: // E.g., []byte
mfi.merge = func(dst, src pointer) {
sbp := src.toBytes()
if *sbp != nil {
dbp := dst.toBytes()
if !isProto3 || len(*sbp) > 0 {
*dbp = append([]byte{}, *sbp...)
}
}
}
}
case reflect.Struct:
switch {
case !isPointer:
panic(fmt.Sprintf("message field %s without pointer", tf))
case isSlice: // E.g., []*pb.T
mi := getMergeInfo(tf)
mfi.merge = func(dst, src pointer) {
sps := src.getPointerSlice()
if sps != nil {
dps := dst.getPointerSlice()
for _, sp := range sps {
var dp pointer
if !sp.isNil() {
dp = valToPointer(reflect.New(tf))
mi.merge(dp, sp)
}
dps = append(dps, dp)
}
if dps == nil {
dps = []pointer{}
}
dst.setPointerSlice(dps)
}
}
default: // E.g., *pb.T
mi := getMergeInfo(tf)
mfi.merge = func(dst, src pointer) {
sp := src.getPointer()
if !sp.isNil() {
dp := dst.getPointer()
if dp.isNil() {
dp = valToPointer(reflect.New(tf))
dst.setPointer(dp)
}
mi.merge(dp, sp)
}
}
}
case reflect.Map:
switch {
case isPointer || isSlice:
panic("bad pointer or slice in map case in " + tf.Name())
default: // E.g., map[K]V
mfi.merge = func(dst, src pointer) {
sm := src.asPointerTo(tf).Elem()
if sm.Len() == 0 {
return
}
dm := dst.asPointerTo(tf).Elem()
if dm.IsNil() {
dm.Set(reflect.MakeMap(tf))
}
switch tf.Elem().Kind() {
case reflect.Ptr: // Proto struct (e.g., *T)
for _, key := range sm.MapKeys() {
val := sm.MapIndex(key)
val = reflect.ValueOf(Clone(val.Interface().(Message)))
dm.SetMapIndex(key, val)
}
case reflect.Slice: // E.g. Bytes type (e.g., []byte)
for _, key := range sm.MapKeys() {
val := sm.MapIndex(key)
val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
dm.SetMapIndex(key, val)
}
default: // Basic type (e.g., string)
for _, key := range sm.MapKeys() {
val := sm.MapIndex(key)
dm.SetMapIndex(key, val)
}
}
}
}
case reflect.Interface:
// Must be oneof field.
switch {
case isPointer || isSlice:
panic("bad pointer or slice in interface case in " + tf.Name())
default: // E.g., interface{}
// TODO: Make this faster?
mfi.merge = func(dst, src pointer) {
su := src.asPointerTo(tf).Elem()
if !su.IsNil() {
du := dst.asPointerTo(tf).Elem()
typ := su.Elem().Type()
if du.IsNil() || du.Elem().Type() != typ {
du.Set(reflect.New(typ.Elem())) // Initialize interface if empty
}
sv := su.Elem().Elem().Field(0)
if sv.Kind() == reflect.Ptr && sv.IsNil() {
return
}
dv := du.Elem().Elem().Field(0)
if dv.Kind() == reflect.Ptr && dv.IsNil() {
dv.Set(reflect.New(sv.Type().Elem())) // Initialize proto message if empty
}
switch sv.Type().Kind() {
case reflect.Ptr: // Proto struct (e.g., *T)
Merge(dv.Interface().(Message), sv.Interface().(Message))
case reflect.Slice: // E.g. Bytes type (e.g., []byte)
dv.Set(reflect.ValueOf(append([]byte{}, sv.Bytes()...)))
default: // Basic type (e.g., string)
dv.Set(sv)
}
}
}
}
default:
panic(fmt.Sprintf("merger not found for type:%s", tf))
}
mi.fields = append(mi.fields, mfi)
}
mi.unrecognized = invalidField
if f, ok := t.FieldByName("XXX_unrecognized"); ok {
if f.Type != reflect.TypeOf([]byte{}) {
panic("expected XXX_unrecognized to be of type []byte")
}
mi.unrecognized = toField(&f)
}
atomic.StoreInt32(&mi.initialized, 1)
}
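The table-driven helpers above implement the per-field merge semantics (zero scalars and nil pointers in src leave dst untouched; slices and unknown bytes are appended or copied). A minimal sketch of the public proto.Merge entry point, which exposes the same behavior, using the vendored ptypes/duration message purely for illustration:

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/ptypes/duration"
)

func main() {
	dst := &duration.Duration{Seconds: 1}
	src := &duration.Duration{Nanos: 500}

	// Fields set in src are merged into dst; zero-valued scalars in src
	// do not overwrite values already present in dst.
	proto.Merge(dst, src)
	fmt.Println(dst.Seconds, dst.Nanos) // 1 500
}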

File diff suppressed because it is too large

@@ -50,6 +50,7 @@ import (
var (
newline = []byte("\n")
spaces = []byte(" ")
gtNewline = []byte(">\n")
endBraceNewline = []byte("}\n")
backslashN = []byte{'\\', 'n'}
backslashR = []byte{'\\', 'r'}
@@ -169,6 +170,11 @@ func writeName(w *textWriter, props *Properties) error {
return nil
}
// raw is the interface satisfied by RawMessage.
type raw interface {
Bytes() []byte
}
func requiresQuotes(u string) bool {
// When type URL contains any characters except [0-9A-Za-z./\-]*, it must be quoted.
for _, ch := range u {
@@ -263,10 +269,6 @@ func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
props := sprops.Prop[i]
name := st.Field(i).Name
if name == "XXX_NoUnkeyedLiteral" {
continue
}
if strings.HasPrefix(name, "XXX_") {
// There are two XXX_ fields:
// XXX_unrecognized []byte
@@ -434,6 +436,12 @@ func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
return err
}
}
if b, ok := fv.Interface().(raw); ok {
if err := writeRaw(w, b.Bytes()); err != nil {
return err
}
continue
}
// Enums have a String method, so writeAny will work fine.
if err := tm.writeAny(w, fv, props); err != nil {
@@ -447,7 +455,7 @@ func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
// Extensions (the XXX_extensions field).
pv := sv.Addr()
if _, err := extendable(pv.Interface()); err == nil {
if _, ok := extendable(pv.Interface()); ok {
if err := tm.writeExtensions(w, pv); err != nil {
return err
}
@@ -456,6 +464,27 @@ func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
return nil
}
// writeRaw writes an uninterpreted raw message.
func writeRaw(w *textWriter, b []byte) error {
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
if err := writeUnknownStruct(w, b); err != nil {
return err
}
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
return nil
}
// writeAny writes an arbitrary field.
func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error {
v = reflect.Indirect(v)
@@ -506,19 +535,6 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
}
}
w.indent()
if v.CanAddr() {
// Calling v.Interface on a struct causes the reflect package to
// copy the entire struct. This is racy with the new Marshaler
// since we atomically update the XXX_sizecache.
//
// Thus, we retrieve a pointer to the struct if possible to avoid
// a race since v.Interface on the pointer doesn't copy the struct.
//
// If v is not addressable, then we are not worried about a race
// since it implies that the binary Marshaler cannot possibly be
// mutating this value.
v = v.Addr()
}
if etm, ok := v.Interface().(encoding.TextMarshaler); ok {
text, err := etm.MarshalText()
if err != nil {
@@ -527,13 +543,8 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
if _, err = w.Write(text); err != nil {
return err
}
} else {
if v.Kind() == reflect.Ptr {
v = v.Elem()
}
if err := tm.writeStruct(w, v); err != nil {
return err
}
} else if err := tm.writeStruct(w, v); err != nil {
return err
}
w.unindent()
if err := w.WriteByte(ket); err != nil {


@@ -206,6 +206,7 @@ func (p *textParser) advance() {
var (
errBadUTF8 = errors.New("proto: bad UTF-8")
errBadHex = errors.New("proto: bad hexadecimal")
)
func unquoteC(s string, quote rune) (string, error) {
@@ -276,47 +277,60 @@ func unescape(s string) (ch string, tail string, err error) {
return "?", s, nil // trigraph workaround
case '\'', '"', '\\':
return string(r), s, nil
case '0', '1', '2', '3', '4', '5', '6', '7':
case '0', '1', '2', '3', '4', '5', '6', '7', 'x', 'X':
if len(s) < 2 {
return "", "", fmt.Errorf(`\%c requires 2 following digits`, r)
}
ss := string(r) + s[:2]
base := 8
ss := s[:2]
s = s[2:]
i, err := strconv.ParseUint(ss, 8, 8)
if r == 'x' || r == 'X' {
base = 16
} else {
ss = string(r) + ss
}
i, err := strconv.ParseUint(ss, base, 8)
if err != nil {
return "", "", fmt.Errorf(`\%s contains non-octal digits`, ss)
return "", "", err
}
return string([]byte{byte(i)}), s, nil
case 'x', 'X', 'u', 'U':
var n int
switch r {
case 'x', 'X':
n = 2
case 'u':
n = 4
case 'U':
case 'u', 'U':
n := 4
if r == 'U' {
n = 8
}
if len(s) < n {
return "", "", fmt.Errorf(`\%c requires %d following digits`, r, n)
return "", "", fmt.Errorf(`\%c requires %d digits`, r, n)
}
bs := make([]byte, n/2)
for i := 0; i < n; i += 2 {
a, ok1 := unhex(s[i])
b, ok2 := unhex(s[i+1])
if !ok1 || !ok2 {
return "", "", errBadHex
}
bs[i/2] = a<<4 | b
}
ss := s[:n]
s = s[n:]
i, err := strconv.ParseUint(ss, 16, 64)
if err != nil {
return "", "", fmt.Errorf(`\%c%s contains non-hexadecimal digits`, r, ss)
}
if r == 'x' || r == 'X' {
return string([]byte{byte(i)}), s, nil
}
if i > utf8.MaxRune {
return "", "", fmt.Errorf(`\%c%s is not a valid Unicode code point`, r, ss)
}
return string(i), s, nil
return string(bs), s, nil
}
return "", "", fmt.Errorf(`unknown escape \%c`, r)
}
// Adapted from src/pkg/strconv/quote.go.
func unhex(b byte) (v byte, ok bool) {
switch {
case '0' <= b && b <= '9':
return b - '0', true
case 'a' <= b && b <= 'f':
return b - 'a' + 10, true
case 'A' <= b && b <= 'F':
return b - 'A' + 10, true
}
return 0, false
}
// Back off the parser by one token. Can only be done between calls to next().
// It makes the next advance() a no-op.
func (p *textParser) back() { p.backed = true }
@@ -714,9 +728,6 @@ func (p *textParser) consumeExtName() (string, error) {
if tok.err != nil {
return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
}
if p.done && tok.value != "]" {
return "", p.errorf("unclosed type_url or extension name")
}
}
return strings.Join(parts, ""), nil
}
@@ -854,7 +865,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(uint64(x))
fv.SetUint(x)
return nil
}
case reflect.Uint64:
@@ -872,9 +883,13 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
// UnmarshalText returns *RequiredNotSetError.
func UnmarshalText(s string, pb Message) error {
if um, ok := pb.(encoding.TextUnmarshaler); ok {
return um.UnmarshalText([]byte(s))
err := um.UnmarshalText([]byte(s))
return err
}
pb.Reset()
v := reflect.ValueOf(pb)
return newTextParser(s).readStruct(v.Elem(), "")
if pe := newTextParser(s).readStruct(v.Elem(), ""); pe != nil {
return pe
}
return nil
}
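The writer and parser changes above affect the proto text format; a minimal round-trip sketch using the public entry points (MarshalTextString and UnmarshalText), with the vendored ptypes/duration message as an illustrative payload:

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/ptypes/duration"
)

func main() {
	in := &duration.Duration{Seconds: 3, Nanos: 7}

	// Encode to the text format produced by the text writer ...
	text := proto.MarshalTextString(in)

	// ... and parse it back with the text parser shown above.
	out := &duration.Duration{}
	if err := proto.UnmarshalText(text, out); err != nil {
		panic(err)
	}
	fmt.Println(out.Seconds, out.Nanos) // 3 7
}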


@@ -1,6 +1,15 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/protobuf/any.proto
/*
Package any is a generated protocol buffer package.
It is generated from these files:
google/protobuf/any.proto
It has these top-level messages:
Any
*/
package any
import proto "github.com/golang/protobuf/proto"
@@ -123,36 +132,14 @@ type Any struct {
//
TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl" json:"type_url,omitempty"`
// Must be a valid serialized protocol buffer of the above specified type.
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
}
func (m *Any) Reset() { *m = Any{} }
func (m *Any) String() string { return proto.CompactTextString(m) }
func (*Any) ProtoMessage() {}
func (*Any) Descriptor() ([]byte, []int) {
return fileDescriptor_any_744b9ca530f228db, []int{0}
}
func (*Any) XXX_WellKnownType() string { return "Any" }
func (m *Any) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Any.Unmarshal(m, b)
}
func (m *Any) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Any.Marshal(b, m, deterministic)
}
func (dst *Any) XXX_Merge(src proto.Message) {
xxx_messageInfo_Any.Merge(dst, src)
}
func (m *Any) XXX_Size() int {
return xxx_messageInfo_Any.Size(m)
}
func (m *Any) XXX_DiscardUnknown() {
xxx_messageInfo_Any.DiscardUnknown(m)
}
var xxx_messageInfo_Any proto.InternalMessageInfo
func (m *Any) Reset() { *m = Any{} }
func (m *Any) String() string { return proto.CompactTextString(m) }
func (*Any) ProtoMessage() {}
func (*Any) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Any) XXX_WellKnownType() string { return "Any" }
func (m *Any) GetTypeUrl() string {
if m != nil {
@@ -172,9 +159,9 @@ func init() {
proto.RegisterType((*Any)(nil), "google.protobuf.Any")
}
func init() { proto.RegisterFile("google/protobuf/any.proto", fileDescriptor_any_744b9ca530f228db) }
func init() { proto.RegisterFile("google/protobuf/any.proto", fileDescriptor0) }
var fileDescriptor_any_744b9ca530f228db = []byte{
var fileDescriptor0 = []byte{
// 185 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4c, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0xcc, 0xab, 0xd4,
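For context, Any carries a type URL plus the serialized message bytes; it is normally packed and unpacked through the ptypes helpers rather than by hand. A hedged sketch, assuming the vendored ptypes package:

package main

import (
	"fmt"

	"github.com/golang/protobuf/ptypes"
	"github.com/golang/protobuf/ptypes/duration"
)

func main() {
	// Pack a message into an Any (TypeUrl + serialized Value) ...
	a, err := ptypes.MarshalAny(&duration.Duration{Seconds: 5})
	if err != nil {
		panic(err)
	}

	// ... and unpack it back into a concrete message.
	out := &duration.Duration{}
	if err := ptypes.UnmarshalAny(a, out); err != nil {
		panic(err)
	}
	fmt.Println(a.TypeUrl, out.Seconds)
}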


@@ -1,6 +1,15 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/protobuf/duration.proto
/*
Package duration is a generated protocol buffer package.
It is generated from these files:
google/protobuf/duration.proto
It has these top-level messages:
Duration
*/
package duration
import proto "github.com/golang/protobuf/proto"
@@ -89,36 +98,14 @@ type Duration struct {
// of one second or more, a non-zero value for the `nanos` field must be
// of the same sign as the `seconds` field. Must be from -999,999,999
// to +999,999,999 inclusive.
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
}
func (m *Duration) Reset() { *m = Duration{} }
func (m *Duration) String() string { return proto.CompactTextString(m) }
func (*Duration) ProtoMessage() {}
func (*Duration) Descriptor() ([]byte, []int) {
return fileDescriptor_duration_e7d612259e3f0613, []int{0}
}
func (*Duration) XXX_WellKnownType() string { return "Duration" }
func (m *Duration) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Duration.Unmarshal(m, b)
}
func (m *Duration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Duration.Marshal(b, m, deterministic)
}
func (dst *Duration) XXX_Merge(src proto.Message) {
xxx_messageInfo_Duration.Merge(dst, src)
}
func (m *Duration) XXX_Size() int {
return xxx_messageInfo_Duration.Size(m)
}
func (m *Duration) XXX_DiscardUnknown() {
xxx_messageInfo_Duration.DiscardUnknown(m)
}
var xxx_messageInfo_Duration proto.InternalMessageInfo
func (m *Duration) Reset() { *m = Duration{} }
func (m *Duration) String() string { return proto.CompactTextString(m) }
func (*Duration) ProtoMessage() {}
func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Duration) XXX_WellKnownType() string { return "Duration" }
func (m *Duration) GetSeconds() int64 {
if m != nil {
@@ -138,11 +125,9 @@ func init() {
proto.RegisterType((*Duration)(nil), "google.protobuf.Duration")
}
func init() {
proto.RegisterFile("google/protobuf/duration.proto", fileDescriptor_duration_e7d612259e3f0613)
}
func init() { proto.RegisterFile("google/protobuf/duration.proto", fileDescriptor0) }
var fileDescriptor_duration_e7d612259e3f0613 = []byte{
var fileDescriptor0 = []byte{
// 190 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4b, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0x29, 0x2d, 0x4a,

vendor/github.com/golang/protobuf/ptypes/regen.sh generated vendored Executable file

@@ -0,0 +1,43 @@
#!/bin/bash -e
#
# This script fetches and rebuilds the "well-known types" protocol buffers.
# To run this you will need protoc and goprotobuf installed;
# see https://github.com/golang/protobuf for instructions.
# You also need Go and Git installed.
PKG=github.com/golang/protobuf/ptypes
UPSTREAM=https://github.com/google/protobuf
UPSTREAM_SUBDIR=src/google/protobuf
PROTO_FILES=(any duration empty struct timestamp wrappers)
function die() {
echo 1>&2 $*
exit 1
}
# Sanity check that the right tools are accessible.
for tool in go git protoc protoc-gen-go; do
q=$(which $tool) || die "didn't find $tool"
echo 1>&2 "$tool: $q"
done
tmpdir=$(mktemp -d -t regen-wkt.XXXXXX)
trap 'rm -rf $tmpdir' EXIT
echo -n 1>&2 "finding package dir... "
pkgdir=$(go list -f '{{.Dir}}' $PKG)
echo 1>&2 $pkgdir
base=$(echo $pkgdir | sed "s,/$PKG\$,,")
echo 1>&2 "base: $base"
cd "$base"
echo 1>&2 "fetching latest protos... "
git clone -q $UPSTREAM $tmpdir
for file in ${PROTO_FILES[@]}; do
echo 1>&2 "* $file"
protoc --go_out=. -I$tmpdir/src $tmpdir/src/google/protobuf/$file.proto || die
cp $tmpdir/src/google/protobuf/$file.proto $PKG/$file
done
echo 1>&2 "All OK"


@@ -1,6 +1,15 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/protobuf/timestamp.proto
/*
Package timestamp is a generated protocol buffer package.
It is generated from these files:
google/protobuf/timestamp.proto
It has these top-level messages:
Timestamp
*/
package timestamp
import proto "github.com/golang/protobuf/proto"
@@ -92,7 +101,7 @@ const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
// can use the Joda Time's [`ISODateTimeFormat.dateTime()`](
// http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime--)
// http://joda-time.sourceforge.net/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime())
// to obtain a formatter capable of generating timestamps in this format.
//
//
@@ -105,36 +114,14 @@ type Timestamp struct {
// second values with fractions must still have non-negative nanos values
// that count forward in time. Must be from 0 to 999,999,999
// inclusive.
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
}
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) {
return fileDescriptor_timestamp_b826e8e5fba671a8, []int{0}
}
func (*Timestamp) XXX_WellKnownType() string { return "Timestamp" }
func (m *Timestamp) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Timestamp.Unmarshal(m, b)
}
func (m *Timestamp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Timestamp.Marshal(b, m, deterministic)
}
func (dst *Timestamp) XXX_Merge(src proto.Message) {
xxx_messageInfo_Timestamp.Merge(dst, src)
}
func (m *Timestamp) XXX_Size() int {
return xxx_messageInfo_Timestamp.Size(m)
}
func (m *Timestamp) XXX_DiscardUnknown() {
xxx_messageInfo_Timestamp.DiscardUnknown(m)
}
var xxx_messageInfo_Timestamp proto.InternalMessageInfo
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Timestamp) XXX_WellKnownType() string { return "Timestamp" }
func (m *Timestamp) GetSeconds() int64 {
if m != nil {
@@ -154,11 +141,9 @@ func init() {
proto.RegisterType((*Timestamp)(nil), "google.protobuf.Timestamp")
}
func init() {
proto.RegisterFile("google/protobuf/timestamp.proto", fileDescriptor_timestamp_b826e8e5fba671a8)
}
func init() { proto.RegisterFile("google/protobuf/timestamp.proto", fileDescriptor0) }
var fileDescriptor_timestamp_b826e8e5fba671a8 = []byte{
var fileDescriptor0 = []byte{
// 191 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4f, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0xc9, 0xcc, 0x4d,
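The Timestamp message above is usually converted to and from time.Time via the ptypes helpers, which also enforce the documented seconds/nanos ranges; a minimal sketch, assuming the vendored ptypes package:

package main

import (
	"fmt"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	// Convert a time.Time to the well-known Timestamp message ...
	ts, err := ptypes.TimestampProto(time.Date(2018, 6, 1, 0, 0, 0, 0, time.UTC))
	if err != nil {
		panic(err)
	}

	// ... and back, validating the seconds/nanos ranges on the way.
	t, err := ptypes.Timestamp(ts)
	if err != nil {
		panic(err)
	}
	fmt.Println(ts.Seconds, ts.Nanos, t.Format(time.RFC3339))
}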


@@ -114,7 +114,7 @@ option objc_class_prefix = "GPB";
// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
// can use the Joda Time's [`ISODateTimeFormat.dateTime()`](
// http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime--)
// http://joda-time.sourceforge.net/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime())
// to obtain a formatter capable of generating timestamps in this format.
//
//


@@ -34,27 +34,21 @@ type Fuzzer struct {
nilChance float64
minElements int
maxElements int
maxDepth int
}
// New returns a new Fuzzer. Customize your Fuzzer further by calling Funcs,
// RandSource, NilChance, or NumElements in any order.
func New() *Fuzzer {
return NewWithSeed(time.Now().UnixNano())
}
func NewWithSeed(seed int64) *Fuzzer {
f := &Fuzzer{
defaultFuzzFuncs: fuzzFuncMap{
reflect.TypeOf(&time.Time{}): reflect.ValueOf(fuzzTime),
},
fuzzFuncs: fuzzFuncMap{},
r: rand.New(rand.NewSource(seed)),
r: rand.New(rand.NewSource(time.Now().UnixNano())),
nilChance: .2,
minElements: 1,
maxElements: 10,
maxDepth: 100,
}
return f
}
@@ -142,14 +136,6 @@ func (f *Fuzzer) genShouldFill() bool {
return f.r.Float64() > f.nilChance
}
// MaxDepth sets the maximum number of recursive fuzz calls that will be made
// before stopping. This includes struct members, pointers, and map and slice
// elements.
func (f *Fuzzer) MaxDepth(d int) *Fuzzer {
f.maxDepth = d
return f
}
// Fuzz recursively fills all of obj's fields with something random. First
// this tries to find a custom fuzz function (see Funcs). If there is no
// custom function this tests whether the object implements fuzz.Interface and,
@@ -158,19 +144,17 @@ func (f *Fuzzer) MaxDepth(d int) *Fuzzer {
// fails, this will generate random values for all primitive fields and then
// recurse for all non-primitives.
//
// This is safe for cyclic or tree-like structs, up to a limit. Use the
// MaxDepth method to adjust how deep you need it to recurse.
// Not safe for cyclic or tree-like structs!
//
// obj must be a pointer. Only exported (public) fields can be set (thanks,
// golang :/ ) Intended for tests, so will panic on bad input or unimplemented
// fields.
// obj must be a pointer. Only exported (public) fields can be set (thanks, golang :/ )
// Intended for tests, so will panic on bad input or unimplemented fields.
func (f *Fuzzer) Fuzz(obj interface{}) {
v := reflect.ValueOf(obj)
if v.Kind() != reflect.Ptr {
panic("needed ptr!")
}
v = v.Elem()
f.fuzzWithContext(v, 0)
f.doFuzz(v, 0)
}
// FuzzNoCustom is just like Fuzz, except that any custom fuzz function for
@@ -186,7 +170,7 @@ func (f *Fuzzer) FuzzNoCustom(obj interface{}) {
panic("needed ptr!")
}
v = v.Elem()
f.fuzzWithContext(v, flagNoCustomFuzz)
f.doFuzz(v, flagNoCustomFuzz)
}
const (
@@ -194,87 +178,69 @@ const (
flagNoCustomFuzz uint64 = 1 << iota
)
func (f *Fuzzer) fuzzWithContext(v reflect.Value, flags uint64) {
fc := &fuzzerContext{fuzzer: f}
fc.doFuzz(v, flags)
}
// fuzzerContext carries context about a single fuzzing run, which lets Fuzzer
// be thread-safe.
type fuzzerContext struct {
fuzzer *Fuzzer
curDepth int
}
func (fc *fuzzerContext) doFuzz(v reflect.Value, flags uint64) {
if fc.curDepth >= fc.fuzzer.maxDepth {
return
}
fc.curDepth++
defer func() { fc.curDepth-- }()
func (f *Fuzzer) doFuzz(v reflect.Value, flags uint64) {
if !v.CanSet() {
return
}
if flags&flagNoCustomFuzz == 0 {
// Check for both pointer and non-pointer custom functions.
if v.CanAddr() && fc.tryCustom(v.Addr()) {
if v.CanAddr() && f.tryCustom(v.Addr()) {
return
}
if fc.tryCustom(v) {
if f.tryCustom(v) {
return
}
}
if fn, ok := fillFuncMap[v.Kind()]; ok {
fn(v, fc.fuzzer.r)
fn(v, f.r)
return
}
switch v.Kind() {
case reflect.Map:
if fc.fuzzer.genShouldFill() {
if f.genShouldFill() {
v.Set(reflect.MakeMap(v.Type()))
n := fc.fuzzer.genElementCount()
n := f.genElementCount()
for i := 0; i < n; i++ {
key := reflect.New(v.Type().Key()).Elem()
fc.doFuzz(key, 0)
f.doFuzz(key, 0)
val := reflect.New(v.Type().Elem()).Elem()
fc.doFuzz(val, 0)
f.doFuzz(val, 0)
v.SetMapIndex(key, val)
}
return
}
v.Set(reflect.Zero(v.Type()))
case reflect.Ptr:
if fc.fuzzer.genShouldFill() {
if f.genShouldFill() {
v.Set(reflect.New(v.Type().Elem()))
fc.doFuzz(v.Elem(), 0)
f.doFuzz(v.Elem(), 0)
return
}
v.Set(reflect.Zero(v.Type()))
case reflect.Slice:
if fc.fuzzer.genShouldFill() {
n := fc.fuzzer.genElementCount()
if f.genShouldFill() {
n := f.genElementCount()
v.Set(reflect.MakeSlice(v.Type(), n, n))
for i := 0; i < n; i++ {
fc.doFuzz(v.Index(i), 0)
f.doFuzz(v.Index(i), 0)
}
return
}
v.Set(reflect.Zero(v.Type()))
case reflect.Array:
if fc.fuzzer.genShouldFill() {
if f.genShouldFill() {
n := v.Len()
for i := 0; i < n; i++ {
fc.doFuzz(v.Index(i), 0)
f.doFuzz(v.Index(i), 0)
}
return
}
v.Set(reflect.Zero(v.Type()))
case reflect.Struct:
for i := 0; i < v.NumField(); i++ {
fc.doFuzz(v.Field(i), 0)
f.doFuzz(v.Field(i), 0)
}
case reflect.Chan:
fallthrough
@@ -289,20 +255,20 @@ func (fc *fuzzerContext) doFuzz(v reflect.Value, flags uint64) {
// tryCustom searches for custom handlers, and returns true iff it finds a match
// and successfully randomizes v.
func (fc *fuzzerContext) tryCustom(v reflect.Value) bool {
func (f *Fuzzer) tryCustom(v reflect.Value) bool {
// First: see if we have a fuzz function for it.
doCustom, ok := fc.fuzzer.fuzzFuncs[v.Type()]
doCustom, ok := f.fuzzFuncs[v.Type()]
if !ok {
// Second: see if it can fuzz itself.
if v.CanInterface() {
intf := v.Interface()
if fuzzable, ok := intf.(Interface); ok {
fuzzable.Fuzz(Continue{fc: fc, Rand: fc.fuzzer.r})
fuzzable.Fuzz(Continue{f: f, Rand: f.r})
return true
}
}
// Finally: see if there is a default fuzz function.
doCustom, ok = fc.fuzzer.defaultFuzzFuncs[v.Type()]
doCustom, ok = f.defaultFuzzFuncs[v.Type()]
if !ok {
return false
}
@@ -328,8 +294,8 @@ func (fc *fuzzerContext) tryCustom(v reflect.Value) bool {
}
doCustom.Call([]reflect.Value{v, reflect.ValueOf(Continue{
fc: fc,
Rand: fc.fuzzer.r,
f: f,
Rand: f.r,
})})
return true
}
@@ -344,7 +310,7 @@ type Interface interface {
// Continue can be passed to custom fuzzing functions to allow them to use
// the correct source of randomness and to continue fuzzing their members.
type Continue struct {
fc *fuzzerContext
f *Fuzzer
// For convenience, Continue implements rand.Rand via embedding.
// Use this for generating any randomness if you want your fuzzing
@@ -359,7 +325,7 @@ func (c Continue) Fuzz(obj interface{}) {
panic("needed ptr!")
}
v = v.Elem()
c.fc.doFuzz(v, 0)
c.f.doFuzz(v, 0)
}
// FuzzNoCustom continues fuzzing obj, except that any custom fuzz function for
@@ -372,7 +338,7 @@ func (c Continue) FuzzNoCustom(obj interface{}) {
panic("needed ptr!")
}
v = v.Elem()
c.fc.doFuzz(v, flagNoCustomFuzz)
c.f.doFuzz(v, flagNoCustomFuzz)
}
// RandString makes a random string up to 20 characters long. The returned string
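The gofuzz change above reverts the MaxDepth/fuzzerContext machinery, so Fuzz once again recurses without a depth limit and is documented as unsafe for cyclic structs. A minimal usage sketch of the public API, which is unaffected by this revert; the config type is a hypothetical test struct, not part of the library:

package main

import (
	"fmt"

	fuzz "github.com/google/gofuzz"
)

// config is a hypothetical struct used only to illustrate Fuzz;
// only exported fields are filled.
type config struct {
	Name  string
	Ports []int
}

func main() {
	f := fuzz.New().NilChance(0.1).NumElements(1, 3)

	var c config
	f.Fuzz(&c) // obj must be a pointer; panics otherwise
	fmt.Printf("%+v\n", c)
}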


@@ -30,9 +30,9 @@ type TwoQueueCache struct {
size int
recentSize int
recent simplelru.LRUCache
frequent simplelru.LRUCache
recentEvict simplelru.LRUCache
recent *simplelru.LRU
frequent *simplelru.LRU
recentEvict *simplelru.LRU
lock sync.RWMutex
}
@@ -84,8 +84,7 @@ func New2QParams(size int, recentRatio float64, ghostRatio float64) (*TwoQueueCa
return c, nil
}
// Get looks up a key's value from the cache.
func (c *TwoQueueCache) Get(key interface{}) (value interface{}, ok bool) {
func (c *TwoQueueCache) Get(key interface{}) (interface{}, bool) {
c.lock.Lock()
defer c.lock.Unlock()
@@ -106,7 +105,6 @@ func (c *TwoQueueCache) Get(key interface{}) (value interface{}, ok bool) {
return nil, false
}
// Add adds a value to the cache.
func (c *TwoQueueCache) Add(key, value interface{}) {
c.lock.Lock()
defer c.lock.Unlock()
@@ -162,15 +160,12 @@ func (c *TwoQueueCache) ensureSpace(recentEvict bool) {
c.frequent.RemoveOldest()
}
// Len returns the number of items in the cache.
func (c *TwoQueueCache) Len() int {
c.lock.RLock()
defer c.lock.RUnlock()
return c.recent.Len() + c.frequent.Len()
}
// Keys returns a slice of the keys in the cache.
// The frequently used keys are first in the returned slice.
func (c *TwoQueueCache) Keys() []interface{} {
c.lock.RLock()
defer c.lock.RUnlock()
@@ -179,7 +174,6 @@ func (c *TwoQueueCache) Keys() []interface{} {
return append(k1, k2...)
}
// Remove removes the provided key from the cache.
func (c *TwoQueueCache) Remove(key interface{}) {
c.lock.Lock()
defer c.lock.Unlock()
@@ -194,7 +188,6 @@ func (c *TwoQueueCache) Remove(key interface{}) {
}
}
// Purge is used to completely clear the cache.
func (c *TwoQueueCache) Purge() {
c.lock.Lock()
defer c.lock.Unlock()
@@ -203,17 +196,13 @@ func (c *TwoQueueCache) Purge() {
c.recentEvict.Purge()
}
// Contains is used to check if the cache contains a key
// without updating recency or frequency.
func (c *TwoQueueCache) Contains(key interface{}) bool {
c.lock.RLock()
defer c.lock.RUnlock()
return c.frequent.Contains(key) || c.recent.Contains(key)
}
// Peek is used to inspect the cache value of a key
// without updating recency or frequency.
func (c *TwoQueueCache) Peek(key interface{}) (value interface{}, ok bool) {
func (c *TwoQueueCache) Peek(key interface{}) (interface{}, bool) {
c.lock.RLock()
defer c.lock.RUnlock()
if val, ok := c.frequent.Peek(key); ok {


@@ -18,11 +18,11 @@ type ARCCache struct {
size int // Size is the total capacity of the cache
p int // P is the dynamic preference towards T1 or T2
t1 simplelru.LRUCache // T1 is the LRU for recently accessed items
b1 simplelru.LRUCache // B1 is the LRU for evictions from t1
t1 *simplelru.LRU // T1 is the LRU for recently accessed items
b1 *simplelru.LRU // B1 is the LRU for evictions from t1
t2 simplelru.LRUCache // T2 is the LRU for frequently accessed items
b2 simplelru.LRUCache // B2 is the LRU for evictions from t2
t2 *simplelru.LRU // T2 is the LRU for frequently accessed items
b2 *simplelru.LRU // B2 is the LRU for evictions from t2
lock sync.RWMutex
}
@@ -60,11 +60,11 @@ func NewARC(size int) (*ARCCache, error) {
}
// Get looks up a key's value from the cache.
func (c *ARCCache) Get(key interface{}) (value interface{}, ok bool) {
func (c *ARCCache) Get(key interface{}) (interface{}, bool) {
c.lock.Lock()
defer c.lock.Unlock()
// If the value is contained in T1 (recent), then
// Ff the value is contained in T1 (recent), then
// promote it to T2 (frequent)
if val, ok := c.t1.Peek(key); ok {
c.t1.Remove(key)
@@ -153,7 +153,7 @@ func (c *ARCCache) Add(key, value interface{}) {
// Remove from B2
c.b2.Remove(key)
// Add the key to the frequently used list
// Add the key to the frequntly used list
c.t2.Add(key, value)
return
}
@@ -247,7 +247,7 @@ func (c *ARCCache) Contains(key interface{}) bool {
// Peek is used to inspect the cache value of a key
// without updating recency or frequency.
func (c *ARCCache) Peek(key interface{}) (value interface{}, ok bool) {
func (c *ARCCache) Peek(key interface{}) (interface{}, bool) {
c.lock.RLock()
defer c.lock.RUnlock()
if val, ok := c.t1.Peek(key); ok {


@@ -1,21 +0,0 @@
// Package lru provides three different LRU caches of varying sophistication.
//
// Cache is a simple LRU cache. It is based on the
// LRU implementation in groupcache:
// https://github.com/golang/groupcache/tree/master/lru
//
// TwoQueueCache tracks frequently used and recently used entries separately.
// This avoids a burst of accesses from taking out frequently used entries,
// at the cost of about 2x computational overhead and some extra bookkeeping.
//
// ARCCache is an adaptive replacement cache. It tracks recent evictions as
// well as recent usage in both the frequent and recent caches. Its
// computational overhead is comparable to TwoQueueCache, but the memory
// overhead is linear with the size of the cache.
//
// ARC has been patented by IBM, so do not use it if that is problematic for
// your program.
//
// All caches in this package take locks while operating, and are therefore
// thread-safe for consumers.
package lru
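The package comment above (removed in this sync) summarizes the three cache flavors; the basic Cache remains the common entry point. A minimal usage sketch, assuming the vendored github.com/hashicorp/golang-lru package:

package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	// A simple thread-safe LRU: the oldest entry is evicted at capacity.
	c, err := lru.New(2)
	if err != nil {
		panic(err)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // evicts "a"

	_, ok := c.Get("a")
	fmt.Println(ok, c.Len()) // false 2
}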


@@ -1 +0,0 @@
module github.com/hashicorp/golang-lru


@@ -1,3 +1,6 @@
// This package provides a simple LRU cache. It is based on the
// LRU implementation in groupcache:
// https://github.com/golang/groupcache/tree/master/lru
package lru
import (
@@ -8,11 +11,11 @@ import (
// Cache is a thread-safe fixed size LRU cache.
type Cache struct {
lru simplelru.LRUCache
lru *simplelru.LRU
lock sync.RWMutex
}
// New creates an LRU of the given size.
// New creates an LRU of the given size
func New(size int) (*Cache, error) {
return NewWithEvict(size, nil)
}
@@ -30,7 +33,7 @@ func NewWithEvict(size int, onEvicted func(key interface{}, value interface{}))
return c, nil
}
// Purge is used to completely clear the cache.
// Purge is used to completely clear the cache
func (c *Cache) Purge() {
c.lock.Lock()
c.lru.Purge()
@@ -38,30 +41,30 @@ func (c *Cache) Purge() {
}
// Add adds a value to the cache. Returns true if an eviction occurred.
func (c *Cache) Add(key, value interface{}) (evicted bool) {
func (c *Cache) Add(key, value interface{}) bool {
c.lock.Lock()
defer c.lock.Unlock()
return c.lru.Add(key, value)
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key interface{}) (value interface{}, ok bool) {
func (c *Cache) Get(key interface{}) (interface{}, bool) {
c.lock.Lock()
defer c.lock.Unlock()
return c.lru.Get(key)
}
// Contains checks if a key is in the cache, without updating the
// recent-ness or deleting it for being stale.
// Check if a key is in the cache, without updating the recent-ness
// or deleting it for being stale.
func (c *Cache) Contains(key interface{}) bool {
c.lock.RLock()
defer c.lock.RUnlock()
return c.lru.Contains(key)
}
// Peek returns the key value (or undefined if not found) without updating
// Returns the key value (or undefined if not found) without updating
// the "recently used"-ness of the key.
func (c *Cache) Peek(key interface{}) (value interface{}, ok bool) {
func (c *Cache) Peek(key interface{}) (interface{}, bool) {
c.lock.RLock()
defer c.lock.RUnlock()
return c.lru.Peek(key)
@@ -70,15 +73,16 @@ func (c *Cache) Peek(key interface{}) (value interface{}, ok bool) {
// ContainsOrAdd checks if a key is in the cache without updating the
// recent-ness or deleting it for being stale, and if not, adds the value.
// Returns whether found and whether an eviction occurred.
func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool) {
func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evict bool) {
c.lock.Lock()
defer c.lock.Unlock()
if c.lru.Contains(key) {
return true, false
} else {
evict := c.lru.Add(key, value)
return false, evict
}
evicted = c.lru.Add(key, value)
return false, evicted
}
// Remove removes the provided key from the cache.


@@ -36,7 +36,7 @@ func NewLRU(size int, onEvict EvictCallback) (*LRU, error) {
return c, nil
}
// Purge is used to completely clear the cache.
// Purge is used to completely clear the cache
func (c *LRU) Purge() {
for k, v := range c.items {
if c.onEvict != nil {
@@ -47,8 +47,8 @@ func (c *LRU) Purge() {
c.evictList.Init()
}
// Add adds a value to the cache. Returns true if an eviction occurred.
func (c *LRU) Add(key, value interface{}) (evicted bool) {
// Add adds a value to the cache. Returns true if an eviction occured.
func (c *LRU) Add(key, value interface{}) bool {
// Check for existing item
if ent, ok := c.items[key]; ok {
c.evictList.MoveToFront(ent)
@@ -78,18 +78,17 @@ func (c *LRU) Get(key interface{}) (value interface{}, ok bool) {
return
}
// Contains checks if a key is in the cache, without updating the recent-ness
// Check if a key is in the cache, without updating the recent-ness
// or deleting it for being stale.
func (c *LRU) Contains(key interface{}) (ok bool) {
_, ok = c.items[key]
return ok
}
// Peek returns the key value (or undefined if not found) without updating
// Returns the key value (or undefined if not found) without updating
// the "recently used"-ness of the key.
func (c *LRU) Peek(key interface{}) (value interface{}, ok bool) {
var ent *list.Element
if ent, ok = c.items[key]; ok {
if ent, ok := c.items[key]; ok {
return ent.Value.(*entry).value, true
}
return nil, ok
@@ -97,7 +96,7 @@ func (c *LRU) Peek(key interface{}) (value interface{}, ok bool) {
// Remove removes the provided key from the cache, returning if the
// key was contained.
func (c *LRU) Remove(key interface{}) (present bool) {
func (c *LRU) Remove(key interface{}) bool {
if ent, ok := c.items[key]; ok {
c.removeElement(ent)
return true
@@ -106,7 +105,7 @@ func (c *LRU) Remove(key interface{}) (present bool) {
}
// RemoveOldest removes the oldest item from the cache.
func (c *LRU) RemoveOldest() (key interface{}, value interface{}, ok bool) {
func (c *LRU) RemoveOldest() (interface{}, interface{}, bool) {
ent := c.evictList.Back()
if ent != nil {
c.removeElement(ent)
@@ -117,7 +116,7 @@ func (c *LRU) RemoveOldest() (key interface{}, value interface{}, ok bool) {
}
// GetOldest returns the oldest entry
func (c *LRU) GetOldest() (key interface{}, value interface{}, ok bool) {
func (c *LRU) GetOldest() (interface{}, interface{}, bool) {
ent := c.evictList.Back()
if ent != nil {
kv := ent.Value.(*entry)


@@ -1,36 +0,0 @@
package simplelru
// LRUCache is the interface for simple LRU cache.
type LRUCache interface {
// Adds a value to the cache, returns true if an eviction occurred and
// updates the "recently used"-ness of the key.
Add(key, value interface{}) bool
// Returns key's value from the cache and
// updates the "recently used"-ness of the key. #value, isFound
Get(key interface{}) (value interface{}, ok bool)
// Check if a key exists in cache without updating the recent-ness.
Contains(key interface{}) (ok bool)
// Returns key's value without updating the "recently used"-ness of the key.
Peek(key interface{}) (value interface{}, ok bool)
// Removes a key from the cache.
Remove(key interface{}) bool
// Removes the oldest entry from cache.
RemoveOldest() (interface{}, interface{}, bool)
// Returns the oldest entry from the cache. #key, value, isFound
GetOldest() (interface{}, interface{}, bool)
// Returns a slice of the keys in the cache, from oldest to newest.
Keys() []interface{}
// Returns the number of items in the cache.
Len() int
// Clear all cache entries
Purge()
}

Some files were not shown because too many files have changed in this diff