From 79e98e243edf1854132aea07950e3b88594682b1 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Sat, 18 Sep 2021 00:40:18 -0400 Subject: [PATCH 1/3] sig-release(1732): Update references to sigs.k8s.io/promo-tools Signed-off-by: Stephen Augustus --- keps/sig-release/1732-artifact-management/README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/keps/sig-release/1732-artifact-management/README.md b/keps/sig-release/1732-artifact-management/README.md index 8ada8bbf918..348327315b3 100644 --- a/keps/sig-release/1732-artifact-management/README.md +++ b/keps/sig-release/1732-artifact-management/README.md @@ -58,8 +58,7 @@ staging area. For each artifact, there will be a configuration file checked into this repository. When a project wants to promote an image, they will file a PR in this repository to update their image promotion configuration to promote an artifact from staging to production. Once this -PR is approved, automation that is running in the k8s project infrastructure (e.g. -https://github.com/GoogleCloudPlatform/k8s-container-image-promoter) will pick up this new +PR is approved, automation that is running in the k8s project infrastructure (built using our [artifact promotion tooling][promo-tools]) will pick up this new configuration file and copy the relevant bits out to the production serving locations. Importantly, if a project needs to roll-back or remove an artifact, the same process will @@ -128,3 +127,5 @@ manage the images for a Kubernetes release. mirrors? What is the performance impact (latency, throughput) of serving everything from GCLB? Is GCLB reachable from everywhere (including China)? Can we support private mirrors (i.e. non-coordinated mirrors)? + +[promo-tools]: https://sigs.k8s.io/promo-tools From c10630b2261339500ba6e720e5800e831bcaa2ea Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Sat, 18 Sep 2021 00:51:24 -0400 Subject: [PATCH 2/3] sig-release(1732): Clean up lint warnings Signed-off-by: Stephen Augustus --- .../1732-artifact-management/README.md | 31 ++++++++++--------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/keps/sig-release/1732-artifact-management/README.md b/keps/sig-release/1732-artifact-management/README.md index 348327315b3..206ec8d4123 100644 --- a/keps/sig-release/1732-artifact-management/README.md +++ b/keps/sig-release/1732-artifact-management/README.md @@ -14,10 +14,10 @@ ## Summary + This document describes how official artifacts (Container Images, Binaries) for the Kubernetes project are managed and distributed. - ## Motivation The motivation for this KEP is to describe a process by which artifacts (container images, binaries) @@ -29,8 +29,9 @@ and that anyone in the project is capable (if given the right authority) to dist ### Goals The goals of this process are to enable: - * Anyone in the community (with the right permissions) to manage the distribution of Kubernetes images and binaries. - * Fast, cost-efficient access to artifacts around the world through appropriate mirrors and distribution + +- Anyone in the community (with the right permissions) to manage the distribution of Kubernetes images and binaries +- Fast, cost-efficient access to artifacts around the world through appropriate mirrors and distribution This KEP will have succeeded when artifacts are all managed in the same manner and anyone in the community (with the right permissions) can manage these artifacts. 
@@ -66,6 +67,7 @@ occur, so that the promotion tool needs to be capable of deleting images and art well as promoting them. ### HTTP Redirector Design + To facilitate world-wide distribution of artifacts from a single (virtual) location we will ideally run a replicated redirector service in the United States, Europe and Asia. Each of these redirectors @@ -75,6 +77,7 @@ address and a dns record indicating their location (e.g. `europe.artifacts.k8s.i We will use Geo DNS to route requests to `artifacts.k8s.io` to the correct redirector. This is necessary to ensure that we always route to a server which is accessible no matter what region we are in. We will need to extend or enhance the existing DNS synchronization tooling to handle creation of the GeoDNS records. #### Configuring the HTTP Redirector + THe HTTP Redirector service will be driven from a YAML configuration that specifies a path to mirror mapping. For now the redirector will serve content based on continent, for example: @@ -96,33 +99,33 @@ manage the images for a Kubernetes release. ### Milestone 0 (MVP): In progress -(Described in terms of kops, our first candidate; other candidates welcome!) +(Described in terms of kOps, our first candidate; other candidates welcome!) -* k8s-infra creates a "staging" GCS bucket for each project +- k8s-infra creates a "staging" GCS bucket for each project (e.g. `k8s-artifacts-staging-`) and a "prod" GCS bucket for promoted artifacts (e.g. `k8s-artifacts`, one bucket for all projects). -* We grant write-access to the staging GCS bucket to trusted jobs / people in - each project (e.g. kops OWNERS and prow jobs can push to +- We grant write-access to the staging GCS bucket to trusted jobs / people in + each project (e.g. kOps OWNERS and prow jobs can push to `k8s-artifacts-staging-kops`). We can encourage use of CI & reproducible builds, but we do not block on it. -* We grant write-access to the prod bucket only to the infra-admins & the +- We grant write-access to the prod bucket only to the infra-admins & the promoter process. -* Promotion of artifacts to the "prod" GCS bucket is via a script / utility (as +- Promotion of artifacts to the "prod" GCS bucket is via a script / utility (as we do today). For v1 we can promote based on a sha256sum file (only copy the files listed), similarly to the image promoter. We will experiment to develop that script / utility in this milestone, along with prow jobs (?) to publish to the staging buckets, and to figure out how best to run the promoter. Hopefully we can copy the image-promotion work closely. -* We create a bucket-backed GCLB for serving, with a single url-map entry for +- We create a bucket-backed GCLB for serving, with a single url-map entry for `binaries/` pointing to the prod bucket. (The URL prefix gives us some flexibility to e.g. add dynamic content later) -* We create the artifacts.k8s.io DNS name pointing to the GCLB. (Unclear whether +- We create the artifacts.k8s.io DNS name pointing to the GCLB. (Unclear whether we want one for staging, or just encourage pulling from GCS directly). -* Projects start using the mirrors e.g. kops adds the +- Projects start using the mirrors e.g. kOps adds the https://artifacts.k8s.io/binaries/kops mirror into the (upcoming) mirror-list - support, so that it will get real traffic but not break kops should this + support, so that it will get real traffic but not break kOps should this infrastructure break -* We start to collect data from the GCLB logs. 
Questions we would like to understand: What are the costs, and what would the costs be for localized mirrors? What is the performance impact (latency, throughput) of serving everything from GCLB? Is GCLB reachable from everywhere (including China)? From 5aaddf8da5a93b76d7a705b49cc4842dfc61fac3 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Sat, 18 Sep 2021 01:30:17 -0400 Subject: [PATCH 3/3] sig-release(1732): Line-wrap at 80 chars Signed-off-by: Stephen Augustus --- .../1732-artifact-management/README.md | 157 ++++++++++-------- 1 file changed, 89 insertions(+), 68 deletions(-) diff --git a/keps/sig-release/1732-artifact-management/README.md b/keps/sig-release/1732-artifact-management/README.md index 206ec8d4123..3d7881ea234 100644 --- a/keps/sig-release/1732-artifact-management/README.md +++ b/keps/sig-release/1732-artifact-management/README.md @@ -15,71 +15,87 @@ ## Summary -This document describes how official artifacts (Container Images, Binaries) for the Kubernetes -project are managed and distributed. +This document describes how official artifacts (Container Images, Binaries) for +the Kubernetes project are managed and distributed. ## Motivation -The motivation for this KEP is to describe a process by which artifacts (container images, binaries) -can be distributed by the community. Currently the process by which images is both ad-hoc in nature -and limited to an arbitrary set of people who have the keys to the relevant repositories. Standardize -access will ensure that people around the world have access to the same artifacts by the same names -and that anyone in the project is capable (if given the right authority) to distribute images. +The motivation for this KEP is to describe a process by which artifacts +(container images, binaries) can be distributed by the community. Currently, +the process by which images are distributed is both ad hoc in nature and limited to an +arbitrary set of people who have the keys to the relevant repositories. +Standardized access will ensure that people around the world have access to the +same artifacts by the same names and that anyone in the project is able (if +given the right authority) to distribute images. ### Goals The goals of this process are to enable: -- Anyone in the community (with the right permissions) to manage the distribution of Kubernetes images and binaries -- Fast, cost-efficient access to artifacts around the world through appropriate mirrors and distribution +- Anyone in the community (with the right permissions) to manage the + distribution of Kubernetes images and binaries +- Fast, cost-efficient access to artifacts around the world through appropriate + mirrors and distribution -This KEP will have succeeded when artifacts are all managed in the same manner and anyone in the community -(with the right permissions) can manage these artifacts. +This KEP will have succeeded when artifacts are all managed in the same manner +and anyone in the community (with the right permissions) can manage these +artifacts. ### Non-Goals -The actual process and tooling for promoting images, building packages or otherwise assembling artifacts -is beyond the scope of this KEP. This KEP deals with the infrastructure for serving these things via -HTTP as well as a generic description of how promotion will be accomplished. +The actual process and tooling for promoting images, building packages or +otherwise assembling artifacts is beyond the scope of this KEP.
This KEP deals +with the infrastructure for serving these artifacts via HTTP as well as a generic +description of how promotion will be accomplished. ## Proposal -The top level design will be to set up a global redirector HTTP service (`artifacts.k8s.io`) -which knows how to serve HTTP and redirect requests to an appropriate mirror. This redirector -will serve both binary and container image downloads. For container images, the HTTP redirector -will redirect users to the appropriate geo-located container registry. For binary artifacts, -the HTTP redirector will redirect to appropriate geo-located storage buckets. - -To facilitate artifact promotion, each project, as necessary, will be given access to a -project staging area relevant to their particular artifacts (either storage bucket or image -registry). Each project is free to manage their assets in the staging area however they feel -it is best to do so. However, end-users are not expected to access artifacts through the -staging area. - -For each artifact, there will be a configuration file checked into this repository. When a -project wants to promote an image, they will file a PR in this repository to update their -image promotion configuration to promote an artifact from staging to production. Once this -PR is approved, automation that is running in the k8s project infrastructure (built using our [artifact promotion tooling][promo-tools]) will pick up this new -configuration file and copy the relevant bits out to the production serving locations. - -Importantly, if a project needs to roll-back or remove an artifact, the same process will -occur, so that the promotion tool needs to be capable of deleting images and artifacts as -well as promoting them. +The top-level design will be to set up a global redirector HTTP service +(`artifacts.k8s.io`) which knows how to serve HTTP and redirect requests to an +appropriate mirror. This redirector will serve both binary and container image +downloads. For container images, the HTTP redirector will redirect users to the +appropriate geo-located container registry. For binary artifacts, the HTTP +redirector will redirect to appropriate geo-located storage buckets. + +To facilitate artifact promotion, each project, as necessary, will be given +access to a project staging area relevant to its particular artifacts (either +a storage bucket or an image registry). Each project is free to manage its assets +in the staging area however it sees fit. However, end-users +are not expected to access artifacts through the staging area. + +For each artifact, there will be a configuration file checked into this +repository. When a project wants to promote an image, they will file a PR in +this repository to update their image promotion configuration to promote an +artifact from staging to production. Once this PR is approved, automation that +is running in the k8s project infrastructure (built using our +[artifact promotion tooling][promo-tools]) will pick up this new configuration +file and copy the relevant bits out to the production serving locations. + +Importantly, if a project needs to roll back or remove an artifact, the same +process will occur, so the promotion tool must be capable of deleting +images and artifacts as well as promoting them. ### HTTP Redirector Design -To facilitate world-wide distribution of artifacts from a single (virtual) location we will -ideally run a replicated redirector service in the United States, Europe and Asia.
-Each of these redirectors -services will be deployed in a Kubernetes cluster and they will be exposed via a public IP -address and a dns record indicating their location (e.g. `europe.artifacts.k8s.io`). +To facilitate world-wide distribution of artifacts from a single (virtual) +location, we will ideally run a replicated redirector service in the United +States, Europe and Asia. + +Each of these redirector services will be deployed in a Kubernetes cluster and +they will be exposed via a public IP address and a DNS record indicating their +location (e.g. `europe.artifacts.k8s.io`). -We will use Geo DNS to route requests to `artifacts.k8s.io` to the correct redirector. This is necessary to ensure that we always route to a server which is accessible no matter what region we are in. We will need to extend or enhance the existing DNS synchronization tooling to handle creation of the GeoDNS records. +We will use Geo DNS to route requests to `artifacts.k8s.io` to the correct +redirector. This is necessary to ensure that we always route to a server which +is accessible no matter what region we are in. We will need to extend or +enhance the existing DNS synchronization tooling to handle creation of the +GeoDNS records. #### Configuring the HTTP Redirector -THe HTTP Redirector service will be driven from a YAML configuration that specifies a path to mirror -mapping. For now the redirector will serve content based on continent, for example: +The HTTP Redirector service will be driven by a YAML configuration that +specifies a path-to-mirror mapping. For now, the redirector will serve content +based on continent, for example: ```yaml /kops @@ -88,12 +104,15 @@ mapping. For now the redirector will serve content based on continent, for examp - default: americas.artificats.k8s.io ``` -The redirector will use this data to redirect a request to the relevant mirror using HTTP 302 responses. The implementation of the mirrors themselves are details left to the service implementor and may be different depending on the artifacts being exposed (binaries vs. container images) +The redirector will use this data to redirect a request to the relevant mirror +using HTTP 302 responses. The implementation of the mirrors themselves is a +detail left to the service implementor and may differ depending on the +artifacts being exposed (binaries vs. container images). ## Graduation Criteria -This KEP will graduate when the process is implemented and has been successfully used to -manage the images for a Kubernetes release. +This KEP will graduate when the process is implemented and has been +successfully used to manage the images for a Kubernetes release. ## Implementation History @@ -101,34 +120,36 @@ manage the images for a Kubernetes release. (Described in terms of kOps, our first candidate; other candidates welcome!) -- k8s-infra creates a "staging" GCS bucket for each project - (e.g. `k8s-artifacts-staging-`) and a "prod" GCS bucket for promoted - artifacts (e.g. `k8s-artifacts`, one bucket for all projects). -- We grant write-access to the staging GCS bucket to trusted jobs / people in - each project (e.g. kOps OWNERS and prow jobs can push to - `k8s-artifacts-staging-kops`).
We can encourage use of CI & reproducible +- k8s-infra creates a "staging" GCS bucket for each project (e.g., + `k8s-artifacts-staging-`) and a "prod" GCS bucket for promoted + artifacts (e.g., `k8s-artifacts`, one bucket for all projects) +- We grant write access to the staging GCS bucket to trusted jobs/people in + each project (e.g., kOps OWNERS and prow jobs can push to + `k8s-artifacts-staging-kops`). We can encourage use of CI & reproducible builds, but we do not block on it. -- We grant write-access to the prod bucket only to the infra-admins & the - promoter process. -- Promotion of artifacts to the "prod" GCS bucket is via a script / utility (as - we do today). For v1 we can promote based on a sha256sum file (only copy the - files listed), similarly to the image promoter. We will experiment to develop - that script / utility in this milestone, along with prow jobs (?) to publish - to the staging buckets, and to figure out how best to run the promoter. - Hopefully we can copy the image-promotion work closely. +- We grant write access to the prod bucket only to the infra-admins & the + promoter process +- Promotion of artifacts to the "prod" GCS bucket is via a script/utility (as + we do today). For v1, we can promote based on a sha256sum file (only copy the + files listed), similarly to the image promoter. We will experiment to develop + that script/utility in this milestone, along with prow jobs (?) to publish to + the staging buckets, and to figure out how best to run the promoter. + Hopefully, we can copy the image-promotion work closely. - We create a bucket-backed GCLB for serving, with a single url-map entry for `binaries/` pointing to the prod bucket. (The URL prefix gives us some flexibility to e.g. add dynamic content later) -- We create the artifacts.k8s.io DNS name pointing to the GCLB. (Unclear whether - we want one for staging, or just encourage pulling from GCS directly). -- Projects start using the mirrors e.g. kOps adds the +- We create the artifacts.k8s.io DNS name pointing to the GCLB (unclear whether + we want one for staging, or just encourage pulling from GCS directly) +- Projects start using the mirrors e.g., kOps adds the https://artifacts.k8s.io/binaries/kops mirror into the (upcoming) mirror-list support, so that it will get real traffic but not break kOps should this infrastructure break -- We start to collect data from the GCLB logs. Questions we would like to - understand: What are the costs, and what would the costs be for localized - mirrors? What is the performance impact (latency, throughput) of serving - everything from GCLB? Is GCLB reachable from everywhere (including China)? - Can we support private mirrors (i.e. non-coordinated mirrors)? +- We start to collect data from the GCLB logs +- Questions we would like to understand: + - What are the costs and what would the costs be for localized mirrors? + - What is the performance impact (latency, throughput) of serving everything + from GCLB? + - Is GCLB reachable from everywhere (including China)? + - Can we support private (non-coordinated) mirrors? [promo-tools]: https://sigs.k8s.io/promo-tools
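
For illustration, the sketch below shows the redirect behavior the KEP describes: a continent-to-mirror lookup answered with an HTTP 302 that preserves the requested artifact path. It is not the actual `artifacts.k8s.io` implementation; the mirror hostnames are placeholders modeled on the YAML example in the diff, and the `X-Continent` header stands in for the GeoDNS/GeoIP machinery purely so the sketch stays self-contained and runnable.

```go
// redirector_sketch.go — illustrative only; not the real k8s-infra service.
package main

import (
	"log"
	"net/http"
	"strings"
)

// Continent-to-mirror mapping, modeled on the YAML example in the KEP.
var mirrors = map[string]string{
	"europe":  "europe.artifacts.k8s.io",
	"asia":    "asia.artifacts.k8s.io",
	"default": "americas.artifacts.k8s.io",
}

// redirect answers every request with an HTTP 302 to the selected mirror,
// preserving the requested path (e.g. anything under /binaries/).
func redirect(w http.ResponseWriter, r *http.Request) {
	// Stand-in for GeoDNS/GeoIP: trust a client-supplied hint.
	continent := strings.ToLower(r.Header.Get("X-Continent"))
	host, ok := mirrors[continent]
	if !ok {
		host = mirrors["default"]
	}
	http.Redirect(w, r, "https://"+host+r.URL.Path, http.StatusFound)
}

func main() {
	http.HandleFunc("/", redirect)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With this sketch, a request such as `curl -i -H 'X-Continent: europe' http://localhost:8080/binaries/kops/example` (a hypothetical path) would receive a `302 Found` pointing at `https://europe.artifacts.k8s.io/binaries/kops/example`, while a request without a recognized hint falls through to the default mirror.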