Kubernetes Publisher 9aa972043f Merge pull request #64122 from ixdy/update-rules_go-and-gazelle
Automatic merge from submit-queue (batch tested with PRs 64122, 64936, 65288, 65383). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update to rules_go 0.12.1 and gazelle 0.12.0 and perform related cleanups

**What this PR does / why we need it**: my initial intent was to simply update to rules_go 0.12.1 and gazelle 0.12.0.

A few internal changes / deprecations meant that I finally needed to clean up some technical debt. This also fixes #64122.
I've attempted to keep the steps as separate commits to make it easier to review:

1. Disable gazelle proto rule generation; legacy proto rules are deprecated, and we don't (currently) build protos at build time anyway, instead generating them with `hack/update-generated-protobuf.sh` and then checking them in. We can revisit this in the future if we'd like.
2. Remove the legacy `go_default_library_protos` filegroups using [buildozer](https://github.com/bazelbuild/buildtools/tree/master/buildozer). We don't use these, anyway.
3. Update the rules_go bazel workspace dependency to 0.12.1.
4. Vendor gazelle 0.12.0 and update BUILD files with `hack/update-bazel.sh`. This causes a lot of diffs, because `select()`s are no longer used in `srcs` attributes, external tests are folded into non-external tests, and vendored targets get an `importmap` attribute.
5. Set `gazelle:prefix` on `staging/src/BUILD` to make gazelle treat these correctly(ish). This allows us to remove the sed rewrite hack in `hack/update-bazel.sh`.
6. Explicitly set `# gazelle:importmap_prefix k8s.io/kubernetes/vendor` on `vendor/`, so that all vendored dependencies get the right importmap. gazelle 0.12.0 uses the bazel workspace name + `vendor/` as a prefix, which doesn't work with native go. Newer gazelle will use the go prefix (https://github.com/bazelbuild/bazel-gazelle/pull/207), but it's not released yet. Setting this correctly now also fixes later `BUILD` churn.
7. Re-run `hack/update-bazel.sh`. This causes a bunch of diffs, since anything under `staging/src` now uses the `staging/src/` path instead of `vendor/`. (Both would work for bazel, but gazelle uses the former, since `vendor/` uses symlinks.) Also `importmap`s under `vendor/` are fixed.
8. Reformat a few files (using [buildifier](https://github.com/bazelbuild/buildtools/tree/master/buildifier)) to make later diffs easier to read.
9. Rework the `go_genrule` rules to use the new `go_genrule` from https://github.com/kubernetes/repo-infra/pull/72, which is more bazely, since it uses the rules_go `go_path` rule instead of lots of shell.
10. Remove the deprecated `go_prefix` rule from the root BUILD.bazel file.
11. Set `# gazelle:importmap_prefix k8s.io/kubernetes/vendor` on `staging/src` as well, which ensures that these repos are treated as vendored dependencies. (It's basically the bazel-y way of doing the `vendor/k8s.io` symlinks.)
12. Run `hack/update-bazel.sh` one last time to fix all of the `importmap`s under `staging/src`.

Note re: point 6 above - we're pretty much ignoring the `vendor/k8s.io` symlinks entirely now under bazel. Using the `gazelle:prefix` directive ensures these get mapped into the right go importpath, and the `go_path` rule installs these correctly now too.
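
To make the directive mechanics concrete: gazelle directives are plain comments that gazelle reads from the top of a BUILD file. A minimal sketch of what points 6 and 11 add (surrounding file contents elided):

```
# In vendor/BUILD and staging/src/BUILD; gazelle reads directive comments
# like this one when it rewrites BUILD files:
# gazelle:importmap_prefix k8s.io/kubernetes/vendor
```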

**Special notes for your reviewer**: this should not be submitted before https://github.com/kubernetes/repo-infra/pull/72, obviously.

**Release note**:

```release-note
NONE
```

/assign @BenTheElder @fejta @thockin
cc @cblecker @jayconrod

Kubernetes-commit: 1ad1c8c7f80d99b9625924b2102a04a555162bfb
2018-06-24 20:27:30 +00:00

sample-controller

This repository implements a simple controller for watching Foo resources as defined with a CustomResourceDefinition (CRD).

Note: go-get or vendor this package as k8s.io/sample-controller.

This particular example demonstrates how to:

  • Register a new custom resource (custom resource type) of type Foo using a CustomResourceDefinition (the Foo type is sketched just below).
  • Create, get, and list instances of your new resource type Foo.
  • Set up a controller that handles create/update/delete events for the resource.
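
For orientation, the Foo type is defined in pkg/apis/samplecontroller/v1alpha1/types.go and looks roughly like this (a condensed sketch; field names may differ slightly from the current tree):

// metav1 is "k8s.io/apimachinery/pkg/apis/meta/v1".
type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   FooSpec   `json:"spec"`
	Status FooStatus `json:"status"`
}

// FooSpec names the Deployment the controller should manage and its replica count.
type FooSpec struct {
	DeploymentName string `json:"deploymentName"`
	Replicas       *int32 `json:"replicas"`
}

// FooStatus is written back by the controller (see the Subresources section below).
type FooStatus struct {
	AvailableReplicas int32 `json:"availableReplicas"`
}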

It makes use of the generators in k8s.io/code-generator to generate a typed client, informers, listers and deep-copy functions. You can do this yourself using the ./hack/update-codegen.sh script.

The update-codegen script will automatically generate the following files & directories:

  • pkg/apis/samplecontroller/v1alpha1/zz_generated.deepcopy.go
  • pkg/client/

Do not make changes to these files manually. When creating your own controller based on this implementation, do not copy these files; run the update-codegen script instead to generate your own.
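
To give a feel for the generated code, the deep-copy functions for Foo come out roughly like this (a sketch of code-generator output, not the verbatim file; runtime is "k8s.io/apimachinery/pkg/runtime"):

// DeepCopyInto copies the receiver into out; FooSpec gets its own generated
// DeepCopyInto because it contains a pointer field.
func (in *Foo) DeepCopyInto(out *Foo) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	out.Status = in.Status
}

// DeepCopy allocates a new Foo and copies the receiver into it.
func (in *Foo) DeepCopy() *Foo {
	if in == nil {
		return nil
	}
	out := new(Foo)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject satisfies runtime.Object, which every API type must implement.
func (in *Foo) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}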

Details

The sample controller uses the client-go library extensively. The interaction points between the sample controller and various mechanisms of that library are explained in docs/controller-client-go.md.

Purpose

This is an example of how to build a kube-like controller with a single type.
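
The core of the pattern is registering event handlers on an informer for the custom type and letting them feed a workqueue. A heavily trimmed sketch, with identifiers modeled on this repo's controller.go (treat them as illustrative; cache is "k8s.io/client-go/tools/cache"):

// fooInformer comes from the generated informer factory; enqueueFoo puts the
// object's namespace/name key onto the controller's workqueue for processing.
fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: controller.enqueueFoo,
	UpdateFunc: func(oldObj, newObj interface{}) {
		controller.enqueueFoo(newObj)
	},
})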

Running

Prerequisite: Since the sample-controller uses apps/v1 deployments, the Kubernetes cluster version should be 1.9 or later.

# assumes you have a working kubeconfig, not required if operating in-cluster
$ go run *.go -kubeconfig=$HOME/.kube/config

# create a CustomResourceDefinition
$ kubectl create -f artifacts/examples/crd.yaml

# create a custom resource of type Foo
$ kubectl create -f artifacts/examples/example-foo.yaml

# check deployments created through the custom resource
$ kubectl get deployments
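
For reference, the two example manifests look approximately like this (condensed; see artifacts/examples/ for the actual files):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  version: v1alpha1
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
---
apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  deploymentName: example-foo
  replicas: 1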

Use Cases

CustomResourceDefinitions can be used to implement custom resource types for your Kubernetes cluster. These act like most other Resources in Kubernetes, and may be kubectl apply'd, etc.

Some example use cases:

  • Provisioning/Management of external datastores/databases (eg. CloudSQL/RDS instances)
  • Higher level abstractions around Kubernetes primitives (eg. a single Resource to define an etcd cluster, backed by a Service and a ReplicationController)

Defining types

Each instance of your custom resource has an attached Spec, which should be defined via a struct{} to provide data format validation. In practice, this Spec is arbitrary key-value data that specifies the configuration/behavior of your Resource.

For example, if you were implementing a custom resource for a Database, you might provide a DatabaseSpec like the following:

type DatabaseSpec struct {
	Databases []string `json:"databases"`
	Users     []User   `json:"users"`
	Version   string   `json:"version"`
}

type User struct {
	Name     string `json:"name"`
	Password string `json:"password"`
}
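
In a full API type, that Spec would be embedded in a top-level struct that also carries the standard Kubernetes type and object metadata, just like the Foo type sketched earlier:

// metav1 is "k8s.io/apimachinery/pkg/apis/meta/v1".
type Database struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec DatabaseSpec `json:"spec"`
}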

Validation

To validate custom resources, use the CustomResourceValidation feature.

This feature is beta and enabled by default in v1.9.

Example

The schema in crd-validation.yaml applies the following validation on the custom resource: spec.replicas must be an integer and must have a minimum value of 1 and a maximum value of 10.
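
That rule is expressed as an OpenAPI v3 schema under the CRD's validation stanza; condensed, it looks something like this (see the actual crd-validation.yaml for the full manifest):

validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          replicas:
            type: integer
            minimum: 1
            maximum: 10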

In the above steps, use crd-validation.yaml to create the CRD:

# create a CustomResourceDefinition supporting validation
$ kubectl create -f artifacts/examples/crd-validation.yaml

Subresources

Custom Resources support /status and /scale subresources as an alpha feature in v1.10. Enable this feature using the CustomResourceSubresources feature gate on the kube-apiserver:

--feature-gates=CustomResourceSubresources=true

Example

The CRD in crd-status-subresource.yaml enables the /status subresource for custom resources. This means that UpdateStatus can be used by the controller to update only the status part of the custom resource.

To understand why only the status part of the custom resource should be updated, please refer to the Kubernetes API conventions.
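
In the controller, a status update might look like the following sketch, using the generated clientset (names modeled on this repo's controller.go):

// Never mutate objects read from the lister cache; work on a DeepCopy.
fooCopy := foo.DeepCopy()
fooCopy.Status.AvailableReplicas = deployment.Status.AvailableReplicas
// With the status subresource enabled, UpdateStatus persists only the
// status stanza of the custom resource.
_, err := c.sampleclientset.SamplecontrollerV1alpha1().Foos(foo.Namespace).UpdateStatus(fooCopy)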

In the above steps, use crd-status-subresource.yaml to create the CRD:

# create a CustomResourceDefinition supporting the status subresource
$ kubectl create -f artifacts/examples/crd-status-subresource.yaml

Cleanup

You can clean up the created CustomResourceDefinition with:

$ kubectl delete crd foos.samplecontroller.k8s.io

Compatibility

HEAD of this repository will match HEAD of k8s.io/apimachinery and k8s.io/client-go.

Where does it come from?

sample-controller is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/sample-controller. Code changes are made in that location, merged into k8s.io/kubernetes and later synced here.
