Merge pull request #429 from justaugustus/renames
Update references to default development branch
k8s-ci-robot authored Sep 20, 2021
2 parents 15659fa + 272930c commit 9c6e1a1
Showing 3 changed files with 69 additions and 58 deletions.

README.md (34 additions, 26 deletions)

As the promoter uses a combination of network API calls and shell-instantiated
processes, we have to fake them for the unit tests. To make this happen, these
mechanisms all use a `stream.Producer` [interface](/legacy/stream/types.go). The
real-world code uses either the [http](/legacy/stream/http.go) or
[subprocess](/legacy/stream/subprocess.go) implementations of this interface to
create streams of data (JSON or not) which we can interpret and use.

For tests, the [fake](/legacy/stream/fake.go) implementation is used instead, which
predefines how that stream will behave, for the purposes of each unit test. A
good example of this is the [`TestReadRegistries`
test](/legacy/dockerregistry/inventory_test.go).
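
To illustrate the pattern, here is a minimal sketch of a producer interface
plus a test fake. It is not the actual promo-tools API (see
[/legacy/stream/types.go](/legacy/stream/types.go) for that); the `Fake` type,
its fields, and the method shapes below are assumptions for illustration:

```go
package stream

import (
	"bytes"
	"io"
)

// Producer is anything that can produce a stream of data: an HTTP
// call, a subprocess, or canned bytes replayed in a unit test.
type Producer interface {
	// Produce returns readers for the stream's output and errors.
	Produce() (stdOut, stdErr io.Reader, err error)
	// Close releases any resources held open by the producer.
	Close() error
}

// Fake replays predefined output, so a unit test controls exactly
// what the code under test reads.
type Fake struct {
	StdOut []byte // canned "stdout" payload (e.g. JSON)
	StdErr []byte // canned "stderr" payload
	Err    error  // error to simulate a failed call or subprocess
}

func (f *Fake) Produce() (io.Reader, io.Reader, error) {
	return bytes.NewReader(f.StdOut), bytes.NewReader(f.StdErr), f.Err
}

func (f *Fake) Close() error { return nil }
```

A test in this style hands the code under test a fake whose canned output
holds exactly the JSON the test wants the registry-reading logic to see.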

### Automated builds

The `gcr.io/k8s-staging-artifact-promoter` GCR is a staging repo for Docker
image build artifacts from this project. Every update to the default
development branch of this GitHub repo results in three images being built in
the staging GCR repo:

1. `gcr.io/k8s-staging-artifact-promoter/cip`
1. `gcr.io/k8s-staging-artifact-promoter/cip-auditor`
1. `gcr.io/k8s-staging-artifact-promoter/kpromo`

These images are built and pushed by GCB using the [build file
here][cloudbuild.yaml]. There are also production versions of these images:

1. `{asia,eu,us}.gcr.io/k8s-artifacts-prod/artifact-promoter/cip`
1. `{asia,eu,us}.gcr.io/k8s-artifacts-prod/artifact-promoter/cip-auditor`
1. `{asia,eu,us}.gcr.io/k8s-artifacts-prod/artifact-promoter/kpromo`

The images from the staging GCR end up in `k8s-artifacts-prod` using the
promoter image running in
[Prow](https://git.k8s.io/test-infra/prow). "Using the
promoter" here means creating a PR in the [k8s.io GitHub repo][k8sio-manifests-dir]
to promote versions from staging to production, such as in
[this PR](https://github.com/kubernetes/k8s.io/pull/704).

#### Connection with Prow

There are a number of Prow jobs that consume the production container images
of `cip`, `cip-auditor`, or `kpromo`. These jobs are defined
[here][cip-prow-integration].

The important thing to note is that the jobs there are ultimately downstream
consumers of the production `cip` and `cip-auditor` images discussed above. So
if there is a breaking change where the container images no longer work for
these Prow jobs, the sequence of events required to fix them is:

1. fix the bug in this codebase
2. generate new `cip` and `cip-auditor` images in
`gcr.io/k8s-staging-artifact-promoter` (automated)
3. promote images into production
4. update Prow jobs to use the new images from Step 3

Step 1 is done in this GitHub repo. Step 3 is done in [the k8s.io GitHub
repo][k/k8s.io].

Step 4 is done in the [test-infra GitHub repo][k/test-infra].

## Versioning

We follow [SemVer](https://semver.org/) for versioning. For each new release,
create a new release on GitHub with:

- Update VERSION file to bump the semver version (e.g., `1.0.0`)
- Create a new commit for the one-line change above with
  `git commit --signoff -m "v1.0.0: Release commit"`
- Create a signed tag at this point with `git tag -s -m "v1.0.0" "v1.0.0"`
- Push this version to the default development branch (requires write access)

### Default versioning

The Docker images that are produced by this repo are automatically tagged in the
following format: `YYYYMMDD-<git-describe>`. As such, there is no need to bump
the VERSION file often as the Docker images will always get a unique identifier.
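
As a sketch of what that scheme produces, the tag is just the build date
joined to `git describe` output. The helper below is illustrative, not
promo-tools code, and the exact `git describe` flags are an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// imageTag composes a tag in the YYYYMMDD-<git-describe> format.
func imageTag() (string, error) {
	// The describe flags used by the real build are an assumption here.
	out, err := exec.Command("git", "describe", "--tags", "--always").Output()
	if err != nil {
		return "", err
	}
	describe := strings.TrimSpace(string(out))
	return fmt.Sprintf("%s-%s", time.Now().Format("20060102"), describe), nil
}

func main() {
	tag, err := imageTag()
	if err != nil {
		panic(err)
	}
	fmt.Println(tag) // e.g. 20210920-v3.4.0-3-g9c6e1a1
}
```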


## Checks Interface

Read more [here](/checks_interface.md).

The addition of the checks interface to the Container Image Promoter is meant
to make it easy to add checks against pull requests affecting the promoter
manifests.

The vulnerability dashboard (`vulndash`) has moved to [`kubernetes/release`][k/release].

Read more [here][vulndash-readme].

[cip-prow-integration]: https://git.k8s.io/k8s.io/k8s.gcr.io/Vanity-Domain-Flip.md#prow-integration
[docker]: https://docs.docker.com/get-docker
[golang]: https://golang.org/doc/install
[k/k8s.io]: https://git.k8s.io/k8s.io
[k/release]: https://git.k8s.io/release
[k/test-infra]: https://git.k8s.io/test-infra
[k8sio-manifests-dir]: https://git.k8s.io/k8s.io/k8s.gcr.io
[vulndash-readme]: https://git.k8s.io/release/docs/vuln-dashboard.md

checks_interface.md (29 additions, 29 deletions)
The addition of the checks interface to the Container Image Promoter is meant
to make it easy to add checks against pull requests affecting the promoter
manifests. The interface allows engineers to add checks without worrying about
any pre-existing checks and test their own checks individually, while also
giving freedom as to what conditionals or tags might be necessary for the
check to occur. Additionally, using an interface means easy expandability of
check requirements in the future.

## Interface Explanation
The `PreCheck` interface is implemented like so in the
[types.go](/legacy/dockerregistry/types.go)
file. The `Run` function is the method used to execute a check that
implements this interface:

```go
type PreCheck interface {
	Run() error
}

func (sc *SyncContext) RunChecks(preChecks []PreCheck) error
```
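
A runner over this interface only needs to iterate the prechecks and surface
any failures. This is a simplified sketch of that shape, not the actual
`RunChecks` body (assume `fmt` and `strings` are imported):

```go
func runPreChecks(preChecks []PreCheck) error {
	var failures []string
	for _, check := range preChecks {
		// Each check is self-contained: it fails by returning an error.
		if err := check.Run(); err != nil {
			failures = append(failures, err.Error())
		}
	}
	if len(failures) > 0 {
		return fmt.Errorf("%d precheck(s) failed: %s",
			len(failures), strings.Join(failures, "; "))
	}
	return nil
}
```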

#### Integration With Prow
The Container Image Promoter has several Prow jobs that run whenever a pull
request attempts to modify the promoter manifests. The
[*pull-k8sio-cip*][k8sio-presubmits] and the
[*pull-k8sio-cip-vuln*][k8sio-presubmits] Prow jobs call the `RunChecks`
function and actually run their respective checks. New Prow jobs can be
[added][add-prow-job] to run an individual check in the future if that check
requires a separate job.

### How To Add A Check
In order to add a check, all you need to do is create a check type that
implements the `PreCheck` interface:

```go
type foo struct {}
...
func (f *foo) Run() error
```

Then add that check type you've created to the input list of PreChecks for
the `RunChecks` method [here](/legacy/dockerregistry/inventory.go).
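
For example, registering the new check could look like the following call
site; `sc` (a sync context) and the error handling are hypothetical here
(assume `log` is imported):

```go
// Append the new check to the prechecks that RunChecks executes.
preChecks := []PreCheck{
	&foo{},
	// ...existing checks...
}
if err := sc.RunChecks(preChecks); err != nil {
	log.Fatalf("prechecks failed: %v", err)
}
```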

Note that the `Run` method of the `PreCheck` interface does not accept any
parameters, so any information that you need for your check should be passed
into the check type as a field. For example, if you are running a check over
promotion edges, then you can set up your check like so:

```go
type foo struct {
	// Fields holding whatever data the check needs, e.g. promotion edges.
}

func (f *foo) Run() error {
	// Inspect the fields and return an error to fail the check.
	return nil
}
```

#### Image Removal Check
Images that have been promoted are pushed to production; and once pushed to
production, they should never be removed. The `ImageRemovalCheck` checks if
any images are removed in the pull request by comparing the state of the
promoter manifests in the pull request's branch to the default development
branch. Two sets of Promotion Edges are generated (one for both the default
development branch and pull request) and then compared to make sure that every
destination image (defined by its image tag and digest) found in the default
development branch is found in the pull request.
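
That comparison amounts to a set difference over destination images. The
sketch below uses hypothetical types, not the actual `ImageRemovalCheck` code:

```go
// destination identifies a promoted image by its tag and digest.
type destination struct {
	Tag    string
	Digest string
}

// findRemovedImages returns every destination image present in the
// default development branch's promotion edges but absent from the
// pull request's edges; a non-empty result fails the check.
func findRemovedImages(defaultBranch, pullRequest map[destination]bool) []destination {
	var removed []destination
	for dest := range defaultBranch {
		if !pullRequest[dest] {
			removed = append(removed, dest)
		}
	}
	return removed
}
```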

This method for detecting removed images should ensure that pull requests are
only rejected if an image is completely removed from production, while still
#### Vulnerability Check

It is important to scan
all images for any vulnerabilities they might already have before promoting
them. A vulnerability check also serves as a method for surfacing to the user
all vulnerabilities, regardless of whether they have a fix. To emphasize this
point, the vulnerability check has been implemented in its own separate Prow
job [*pull-k8sio-cip-vuln*][k8sio-presubmits]
so that the check's logs (which will detail all the vulnerabilities that exist
in the new images to be promoted) won't get mixed in with the logs from the
promoter's other checks.

The vulnerability check makes use of the Container Analysis API in order to:

1. scan all new staging images for vulnerabilities whenever they are added to
an image staging project
2. get vulnerability information when we are checking the images to be promoted
from a PR

To make use of this API, key pieces of infrastructure must be put in place,
such as enabling the Container Analysis API on all image staging projects
and authenticating the Prow job (pull-k8sio-cip-vuln) with a Google service
account that is authorized to access the vulnerability data for each
staging project.

The vulnerability check will reject a pull request if it finds any
vulnerabilities that are both beyond the severity threshold (defined by the
*-vuln-severity-threshold* flag) and have a known fix; otherwise the check will
accept the PR.
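
That acceptance rule reduces to a small predicate over the scan results. The
types below are hypothetical stand-ins for the real vulnerability data, and
the numeric severity ordering is an assumption:

```go
// vulnerability is a hypothetical stand-in for one scan finding.
type vulnerability struct {
	Severity  int  // assumed ordering, e.g. LOW < MEDIUM < HIGH < CRITICAL
	FixExists bool // whether a fixed version is known
}

// shouldReject applies the documented rule: reject the PR only for
// findings at or above the -vuln-severity-threshold that also have a
// known fix; everything else is surfaced in the logs but tolerated.
func shouldReject(findings []vulnerability, threshold int) bool {
	for _, v := range findings {
		if v.Severity >= threshold && v.FixExists {
			return true
		}
	}
	return false
}
```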

[add-prow-job]: https://git.k8s.io/test-infra/config/jobs/README.md#adding-or-updating-jobs
[k8sio-presubmits]: https://git.k8s.io/test-infra/config/jobs/kubernetes/wg-k8s-infra/releng/artifact-promotion-presubmits.yaml

cmd/gh2gcs/README.md (6 additions, 3 deletions)
Expand Up @@ -12,8 +12,8 @@ Google Cloud has [documentation on installing and configuring the Google Cloud S

The simplest way to install the `gh2gcs` CLI is via `go get`:

```console
go get k8s.io/release/cmd/gh2gcs
```

This will install `gh2gcs` to `$(go env GOPATH)/bin/gh2gcs`.
The following GCS buckets are managed by SIG Release:
- k8s-artifacts-cni - contains [CNI plugins](https://github.com/containernetworking/plugins) artifacts
- k8s-artifacts-cri-tools - contains [CRI tools](https://github.com/kubernetes-sigs/cri-tools) artifacts (`crictl` and `critest`)

The artifacts are pushed to GCS by
[Release Managers](https://k8s.io/releases/release-managers/). The pushing is
done manually by running the appropriate `gh2gcs` command. It's recommended for
Release Managers to watch the appropriate repositories for new releases.
