Description
Is this the right place to submit this?
- This is not a security vulnerability or a crashing bug
- This is not a question about how to use Istio
Bug Description
We are facing an issue when trying to expose our internal application via the Gateway API on GKE; the same setup works fine on an Azure AKS cluster.
On Azure we are running Istio ambient 1.24.1; on GKE we installed the same Istio version, but with the default profile (sidecar mode).
Additionally, we are using a revision-based Istio installation.
On AKS we use the annotations below to rewrite the health probe request path, protocol, and port:
service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz/ready
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
service.beta.kubernetes.io/port_80_health-probe_port: "15021"
service.beta.kubernetes.io/port_80_health-probe_protocol: http
service.beta.kubernetes.io/port_443_health-probe_port: "15021"
service.beta.kubernetes.io/port_443_health-probe_protocol: http
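For reference, these annotations point the Azure LB probes at Istio's standard gateway readiness endpoint. A sketch of how that probe appears in the gateway pod spec, assuming Istio defaults (the path and port are the documented Istio status endpoint; the timing value is illustrative, not the actual default):

```yaml
# Illustrative fragment of the gateway pod spec (Istio defaults assumed):
# the proxy exposes a status port that cloud LB probes can target.
readinessProbe:
  httpGet:
    path: /healthz/ready   # same path the Azure annotations point to
    port: 15021            # Istio's proxy status port
  periodSeconds: 5         # illustrative value only
```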
Internal communication is working fine; I can reach my application service from within the istio-gateway pods:
curl -v -H "Host: <webapp-link>" http://<gateway-api-ip>
kubectl run curl-test-1 --image=nicolaka/netshoot --rm -it -- curl -v http://webapp.<ns>:<port>
Both work fine.
Do we have similar configurations for GKE as well?
I looked for similar issues and articles on GitHub and noticed that we can follow a ConfigMap-based approach to pass custom configuration to our Gateway API setup.
Can you help us with some more detail here?
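Concretely, what we are hoping for is a GKE analogue of the Azure pattern, i.e. probe-related annotations set on the Gateway itself. A sketch of what we mean (the internal-LB annotation below is the documented GKE one; we have not found GKE annotations equivalent to Azure's per-port health-probe ones, which is exactly the question):

```yaml
# Sketch of what we would like on GKE: the load-balancer-type annotation is real,
# but a GKE equivalent of Azure's health-probe path/port annotations is the
# missing piece.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-system
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    # <missing>: a GKE equivalent of
    # service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path
```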
Gateway manifest (GKE):
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-system
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  gatewayClassName: istio
  addresses:
  - value:
    type: IPAddress
  listeners:
  - name: catch-all-http
    hostname: "*."
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            # shared-gateway-access: "true"
            istio.io/rev: default
  - name: catch-all-https
    hostname: "*."
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            # shared-gateway-access: "true"
            istio.io/rev: default
    tls:
      mode: Terminate
      certificateRefs:
      - group: ""
        kind: Secret
        name:
      - group: ""
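As we understand it, Istio's automated gateway deployment copies annotations from the Gateway resource onto the Service it generates, which is presumably why the Azure probe annotations work when set on the Gateway in AKS. A rough sketch of the Service we expect Istio to generate for the GKE Gateway above (the name follows Istio's "<gateway-name>-istio" convention; ports assumed from defaults, not dumped from the cluster):

```yaml
# Sketch of the auto-generated Service (assumed Istio defaults):
apiVersion: v1
kind: Service
metadata:
  name: istio-gateway-istio          # "<gateway-name>-istio" naming convention
  namespace: istio-system
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # copied from the Gateway
spec:
  type: LoadBalancer
  ports:
  - name: status-port                # Istio readiness endpoint (/healthz/ready)
    port: 15021
    targetPort: 15021
  - name: http
    port: 80
  - name: https
    port: 443
```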
Gateway manifest (AKS):
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz/ready
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/port_80_health-probe_port: "15021"
    service.beta.kubernetes.io/port_80_health-probe_protocol: http
    service.beta.kubernetes.io/port_443_health-probe_port: "15021"
    service.beta.kubernetes.io/port_443_health-probe_protocol: http
  name: istio-gateway
  namespace: istio-system
  labels:
    istio.io/rev: ambient-trace
spec:
  addresses:
  - type: IPAddress
    value:
  gatewayClassName: istio
  listeners:
  - name: catch-all-http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            # shared-gateway-access: "true"
            istio.io/dataplane-mode: ambient-trace
  - name: catch-all-https
    hostname: "*" # Ensure this hostname is appropriate for your AKS environment
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            # shared-gateway-access: "true"
            istio.io/dataplane-mode: ambient-trace
    tls:
      mode: Terminate
      certificateRefs:
      - group: ""
        kind: Secret
        name:
      - group: ""
Version
istioctl version
client version: 1.24.1
control plane version: 1.24.1
data plane version: 1.24.1 (10 proxies)
Additional Information
For Azure I followed this:
https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#custom-load-balancer-health-probe
For GCP I found the links below,