"Cluster Gateway" is a gateway apiserver for routing kubernetes api traffic
to multiple kubernetes clusters. Additionally, the gateway is completely
pluggable for a running kubernetes cluster natively because it is developed
based on the native api extensibility named apiserver-aggregation.
A new extended resource, "gateway.open-cluster-management.io/ClusterGateway", is
registered into the hosting cluster once the corresponding APIService objects are
applied, and a new subresource named "proxy" becomes available for every existing
"ClusterGateway" resource, inspired by the original Kubernetes "service/proxy"
and "pod/proxy" subresources.
Overall our "Cluster Gateway" also has the following merits as a multi-cluster api-gateway solution:
- Etcd Free: Normally an aggregated apiserver must be deployed along with a dedicated etcd cluster, which brings extra costs for the admins. Our "Cluster Gateway", however, can run completely without etcd instances, because the extended "ClusterGateway" resources are virtual read-only Kubernetes resources converted from secret resources in a namespace of the hosting cluster.
- Scalability: Our "Cluster Gateway" can scale out to an arbitrary number of instances to deal with increasing load (see the scaling sketch after this list).
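For instance, assuming cluster-gateway is running as a plain Deployment named cluster-gateway in the open-cluster-management namespace (both names are assumptions and depend on how you installed it), scaling out is an ordinary replica change:

```shell
# Hypothetical deployment/namespace names; adjust them to your installation.
$ kubectl -n open-cluster-management scale deployment cluster-gateway --replicas=3
```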
$ docker pull ghcr.io/kluster-manager/cluster-gateway:v1.1.12 # Or other newer tags
- Run locally: https://github.com/kluster-manager/cluster-gateway/blob/master/docs/local-run.md
- Sample cluster-gateway converting secrets (a rough sketch of their shape also follows this list):
- ServiceAccountToken type secret: https://github.com/kluster-manager/cluster-gateway/blob/master/hack/samples/cluster-gateway-secret-serviceaccount-token.yaml
- X.509 certificate type secret: https://github.com/kluster-manager/cluster-gateway/blob/master/hack/samples/cluster-gateway-secret-x509.yaml
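As a hedged sketch only, a ServiceAccountToken-type converting secret roughly carries the managed cluster's endpoint, CA bundle, and token; the exact secret type, labels, and data keys are assumptions here, so treat the sample files linked above as the authoritative reference.

```yaml
# Rough shape of a converting secret; every name and data key below is an
# assumption for illustration, the linked samples are authoritative.
apiVersion: v1
kind: Secret
metadata:
  name: managed-cluster-1                           # placeholder: also the ClusterGateway name
  namespace: open-cluster-management-credentials    # placeholder namespace
data:
  endpoint: <base64-encoded apiserver URL>
  ca.crt: <base64-encoded CA bundle>
  token: <base64-encoded service account token>
```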
Compile the e2e benchmark suite by:
$ make e2e-benchmark-binary
The benchmark suite repeatedly runs a create-update-delete configmap flow 100 times. Here's a comparison of the performance we observed in a local experiment:
Latency | Direct | ClusterGateway | ClusterGateway(over Konnectivity) |
---|---|---|---|
Fastest | 0.059s | 0.190s | 0.428s |
Slowest | 0.910s | 0.856s | 1.356s |
Average | 0.583s ± 0.104s | 0.581s ± 0.087s | 0.608s ± 0.135s |
Cluster-gateway has native integration with Open Cluster Management (OCM) to provide KubeVela admins with a more coherent user experience in distributing applications across multiple clusters.
The official vela addon named ocm-cluster-manager helps you easily bootstrap the OCM control plane (in the hosting cluster where your KubeVela control plane lives). Note that the OCM environment installed from this addon will not take any effect until we opt in to the functional integration between KubeVela and OCM, as elaborated below. It is just a minimal trial setup for trying out OCM instantly; to enable the further integration with OCM, we need to adjust the configuration of cluster-gateway so that it detects and is aware of the local OCM environment.
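Assuming the vela CLI is installed against your KubeVela control plane, enabling the addon typically looks like this (a sketch; the addon catalog available in your environment is authoritative):

```shell
# Enable the official OCM addon mentioned above through the vela CLI.
$ vela addon enable ocm-cluster-manager
```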
By opting in to the flag --ocm-integration=true, cluster-gateway will detect and
load the OCM environment in the hosting cluster and connect each ClusterGateway
custom resource to OCM's original cluster model, ManagedCluster. The ClusterGateway
is a gateway "ingress" abstraction for the Kubernetes clusters managed by KubeVela,
so after integrating with OCM it is intuitive to regard the gateway resource as a
"satellite" child resource of ManagedCluster. Setting the flag will make
cluster-gateway filter out dangling ClusterGateway resources that don't have a
valid ManagedCluster bound to them. In addition, we no longer need to explicitly
set the master URL in the cluster secret, because cluster-gateway will merge the
URL list from the corresponding ManagedCluster.
Furthermore, enabling the integration also reflects/aggregates the healthiness of
the corresponding clusters by partially merging the original healthiness status
from OCM's ManagedCluster, so we can save ourselves the trouble of attempting to
talk to an unavailable cluster.
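How the flag is wired in depends on which chart and values you used to install cluster-gateway; as a minimal sketch, it ends up as a container argument on the cluster-gateway Deployment:

```yaml
# Hypothetical excerpt of the cluster-gateway Deployment; only the relevant
# container argument is shown, everything else is elided.
spec:
  template:
    spec:
      containers:
        - name: cluster-gateway
          args:
            - --ocm-integration=true   # opt in to the OCM integration described above
```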
Installing cluster-gateway via the standalone chart or KubeVela's chart gives us a one-time, lightweight setup of cluster-gateway, but there are still some missing pieces we should address before bringing cluster-gateway into a sustainable production environment:
- The rotation of cluster-gateway's server TLS certificate.
- Automatic addition/removal of the ClusterGateway resource upon cluster discovery.
In order to fill in the blanks above, we can optionally delegate the management of cluster-gateway to OCM by introducing a new component named cluster-gateway-manager to the hosting cluster, which is basically responsible for:
- Sustainable installation as a typical "operator" dedicated for cluster-gateway.
- Modelling cluster-gateway as an OCM addon.
The addon-manager can be installed via simple helm commands; please refer to the installation guide here.
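A hedged example of what the installation could look like; the repository URL, chart name, and namespace below are placeholders, and the linked installation guide is authoritative:

```shell
# Placeholder repo/chart/namespace names; consult the installation guide for the real ones.
$ helm repo add <repo-name> <repo-url>
$ helm install cluster-gateway-manager <repo-name>/cluster-gateway-manager \
    --namespace open-cluster-management --create-namespace
```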
When the feature flag ClientIdentityPenetration is enabled, cluster-gateway
recognizes the identity in incoming requests and uses the impersonation mechanism
to send requests to managed clusters with that identity impersonated. By default,
the impersonated identity is identical to the identity in the incoming request.
In cases where the identities in different clusters are not aligned, the
ClientIdentityExchanger feature is helpful for making projections. You can use
either the global configuration or the cluster configuration to declare the
identity exchange rules, as in the given example.
For the global configuration, you need to set --cluster-gateway-proxy-config=<the configuration file path>
to enable it. For the cluster configuration, you can set the value of the annotation
gateway.open-cluster-management.io/cluster-gateway-proxy-configuration
to enable the configuration for requests to the attached cluster.
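As a rough sketch of the global setup, assuming the feature flag is toggled through a standard --feature-gates argument (an assumption; check the project's flag reference) and that the configuration file has already been mounted into the pod at the path shown:

```yaml
# Hypothetical cluster-gateway container args; the --feature-gates mechanism and
# the mount path are assumptions, --cluster-gateway-proxy-config is the flag above.
args:
  - --feature-gates=ClientIdentityPenetration=true
  - --cluster-gateway-proxy-config=/etc/cluster-gateway/proxy-config.yaml
```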