NetworkPolicy blocks liveness probe #3875

Open
dvob opened this issue Feb 25, 2025 · 3 comments
Labels: kind/bug

Comments


dvob commented Feb 25, 2025

What happened:
NetworkPolicies cause liveness probes to fail.

What you expected to happen:
NetworkPolicies do not affect liveness probes.

How to reproduce it (as minimally and precisely as possible):

kind create cluster

# create default deny policy in default namespace
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF

# deploy sample workload in default namespace
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - command:
            - bash
            - -c
            - apt update && apt install -y netcat-openbsd && nc -kl 1234
          image: debian
          livenessProbe:
            failureThreshold: 5
            periodSeconds: 2
            tcpSocket:
              port: app
          name: myapp
          ports:
            - containerPort: 1234
              name: app
EOF

# observe failing liveness probe events and restarting pod
kubectl get events -w
kubectl get pod -l app=myapp
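
As far as I can tell, the probe connections originate from the kubelet on the node, so a policy that denies ingress to the pod (or drops the return traffic) breaks them. The failure details can be inspected with standard kubectl:

# show the liveness probe failure events on the pod
kubectl describe pod -l app=myapp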

For the reproduction example above I used a default-deny policy, but I initially observed this with policies similar to the following, so it's not just a problem with default deny.

# allow access to app
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-to-myapp
spec:
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - port: 1234
          protocol: TCP
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
---
# restrict egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: myapp
  egress:
    - ports:
        - port: 5432
          protocol: TCP
      to:
        - podSelector:
            matchLabels:
              app: postgres
  policyTypes:
  - Egress

When only the all-to-myapp policy is active, everything still works. Only after applying the NetworkPolicy that restricts egress does the liveness probe start to fail.
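
A workaround sketch that should confirm the mechanism (untested; the CIDR is an assumption based on kind's usual Docker network, 172.18.0.0/16): explicitly allowing traffic to and from the node network should let the kubelet's probe connections and their replies through.

# hypothetical workaround: admit kubelet probe traffic by allowing the node network
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-node-traffic
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.18.0.0/16   # assumed kind node network; check with: docker network inspect kind
  egress:
    - to:
        - ipBlock:
            cidr: 172.18.0.0/16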

While testing this I always restarted the pod, as otherwise the policies did not take effect.
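
For reference, the restart can be done with a plain rollout restart (standard kubectl):

# recreate the pod so the policy changes take effect
kubectl rollout restart deployment myapp
kubectl rollout status deployment myapp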

Environment:

  • kind version: kind v0.27.0 go1.24.0 linux/amd64
  • Runtime info: docker info:
Server:                                                                                                                                                                                            
 Server Version: 27.3.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: c507a0257ea6462fbd6f5ba4f5c74facb04021f4.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.13.1-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
  • Kubernetes version: (use kubectl version):
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.32.2
dvob added the kind/bug label on Feb 25, 2025
aojea (Contributor) commented Feb 25, 2025

Interesting, this issue was reported some time ago and fixed in kubernetes-sigs/kube-network-policies#65 and in kind #3698.

@dvob are you using kindnet or installing a different cni plugin?

aojea self-assigned this on Feb 25, 2025
dvob (Author) commented Feb 25, 2025

I'm using the default that comes with kind v0.27.0: docker.io/kindest/kindnetd:v20250214-acbabc1a
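
One way to verify which image the CNI runs as (the DaemonSet name "kindnet" is an assumption based on kind's defaults):

# print the kindnet image in use
kubectl -n kube-system get daemonset kindnet \
  -o jsonpath='{.spec.template.spec.containers[0].image}'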

BenTheElder (Member) commented

@aojea we should probably compare this behavior to other implementations? It sort of makes sense that a default-deny policy wouldn't allow any traffic.
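
A sketch of such a comparison, assuming kind's documented disableDefaultCNI option and a typical Calico manifest URL (the version is a placeholder and may differ):

# kind-no-cni.yaml: cluster without the default kindnet CNI
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true

# create the cluster and install an alternative NetworkPolicy implementation
kind create cluster --config kind-no-cni.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

Then re-run the repro steps above and compare whether the liveness probe still fails.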
