Permission Denied Error attempting to start S3 Node Pod #211
Comments
Hi @AlecAttwood, what underlying operating system are you using for your hosts?

Amazon Linux
@monthonk I've had a better look; I'm pretty sure our nodes have SELinux policy rules which are blocking the S3 CSI Driver from mounting on the /proc folder. I tried multiple combinations of SELinux config in the chart's values.yaml, which didn't help. And judging from the SELinux audit after changing the seLinux option in values.yaml, it didn't look like the securityContext was being applied to the pod properly. Can you suggest anything else to try, or potential SELinux configs which might work on a more locked-down node?
Hey @AlecAttwood I'm trying to reproduce the issue you're having. I created a new EKS cluster:

mp-csi-testing-cluster-helm.yaml

and installed the Mountpoint CSI driver Helm chart:

```
$ helm install aws-mountpoint-s3-csi-driver aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver \
    --namespace kube-system
```

One thing that might not be clear in our installation instructions, if you're using IAM Roles for Service Accounts (IRSA), is that they ask you to create the role only (i.e., without a service account):

```
$ eksctl create iamserviceaccount \
    --name s3-csi-driver-sa \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn $ROLE_ARN \
    --approve \
    --role-name $ROLE_NAME \
    --region $REGION \
    --role-only # <-- Here
```

and our Helm chart creates the service account with the role annotation:

```
$ kubectl describe sa s3-csi-driver-sa -n kube-system | rg eks.amazonaws.com/role-arn
Annotations:  eks.amazonaws.com/role-arn: arn:aws:iam::account:role/eks-s3-csi-driver-role
```

I tried with both 1.7.0 (latest) and 1.6.0 versions of the CSI driver and also used Amazon Linux as my node OS, but couldn't reproduce the issue you're having. Did you do any other configuration you think might be related?

Btw, you should be able to override the SELinux options with:

```
$ helm install aws-mountpoint-s3-csi-driver aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver \
    --namespace kube-system \
    --set node.seLinuxOptions.level="..." \
    --dry-run --debug
```
Hi @unexge,
I mentioned above that our nodes are hardened with extra CIS hardening and have extra SELinux policies added; I'm pretty sure this is what's causing the issues. The pods never actually start, and there are SELinux audit logs on our nodes which list denied actions when mounting on the /proc path.

I did this, then queried the security contexts for all the pods and didn't see anything. Every time I changed the seLinux options and re-deployed, I looked at the SELinux audit logs and they were exactly the same, implying that setting the values didn't change anything or didn't apply the security context. Which is weird; I'm not 100% sure if that's an issue with the Helm chart or with the logging. I'll continue to investigate. I'm still convinced that if I can set the right SELinux permissions it should work, even with our hardened AMI. Are there any other combinations of SELinux config to try? I'm not that familiar with it; I'm sure the default config should be enough, but it's not working in this case.
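One quick way to check whether the Helm value actually made it into the pod spec is to query the rendered securityContext directly; a minimal sketch, assuming the node pods carry an app=s3-csi-node label in kube-system (the label and namespace may differ in your install):

```sh
# Print each node pod's name and the seLinuxOptions applied to its containers.
# Assumes the pods are labelled app=s3-csi-node in kube-system; adjust both if needed.
kubectl get pods -n kube-system -l app=s3-csi-node \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].securityContext.seLinuxOptions}{"\n"}{end}'
```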
Hey @AlecAttwood, I was able to reproduce the issue. Looking at the audit log I got from my host, it seems like it fails when runc/containerd tries to mount /proc/mounts. I'm looking into whether it's possible to get rid of this mount.
Thanks for looking into it. That audit log looks almost identical to what I was seeing. On my side we're going to add a temporary SELinux policy on our nodes to allow it to mount on /proc/mounts.
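For anyone hitting the same denial, one common way to build such a temporary policy is to feed the recorded AVC messages into audit2allow; a rough sketch, assuming the audit and policycoreutils tooling is installed on the node and that you review the generated module before loading it:

```sh
# Turn the recent AVC denials into a local SELinux policy module and load it.
# Inspect the generated .te file first -- this permits exactly what was denied.
ausearch -m avc -ts recent | audit2allow -M mountpoint-s3-proc
semodule -i mountpoint-s3-proc.pp
```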
Any updates on this issue? I am experiencing a similar problem with mounting /proc/mounts inside a kind cluster, though I am not using SELinux.
@DWS-guy no updates yet unfortunately. Are you getting the same error?
@unexge Correct, I am getting that exact error. SELinux is not present on my system.
We are hoping to remove the dependency on /proc/mounts. I'd recommend opening a new bug report with your logs so that we can investigate, although I would note that we don't officially support kind clusters, only open-source Kubernetes (K8s) and Amazon EKS.
/kind bug
When deploying the S3 CSI driver to an EKS cluster, the node pod is failing to start. Specifically, the s3-plugin container is in crash loop backoff due to issues mounting a volume.

What happened?

When deploying the mountpoint-s3-csi-driver Helm chart, I encounter this error:
What you expected to happen?
The Pods start correctly
How to reproduce it (as minimally and precisely as possible)?
Create a new EKS cluster and deploy the mountpoint-s3-csi-driver Helm chart, as sketched below.
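For reference, the deployment step looks roughly like the following; a sketch assuming the chart is pulled from the upstream Helm repository (the repo URL and release name here are assumptions, so check the project README):

```sh
# Assumed Helm repo URL for the Mountpoint S3 CSI driver chart; verify against the README.
helm repo add aws-mountpoint-s3-csi-driver https://awslabs.github.io/mountpoint-s3-csi-driver
helm repo update
helm install aws-mountpoint-s3-csi-driver aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver \
  --namespace kube-system
```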
Anything else we need to know?:
Since this is a permissions issue with the s3-plugin trying to mount a volume on the host, I assume it's due to my EKS setup. However, I haven't found much available info on potential fixes. Usually issues with mounting specifically under /proc are easy to deal with: just don't mount to that path, since it's locked down. But the s3-plugin mount paths cannot be changed via the values file:

mountpoint-s3-csi-driver/charts/aws-mountpoint-s3-csi-driver/templates/node.yaml
Line 122 in 2bbdaad
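To confirm which host paths the node DaemonSet actually declares (including the /proc dependency discussed above), its hostPath volumes can be listed directly; a sketch assuming the DaemonSet created by the chart is named s3-csi-node in kube-system, which may differ in practice:

```sh
# Print each volume name and its hostPath (blank for non-hostPath volumes).
# The DaemonSet name and namespace are assumptions; adjust to match your install.
kubectl get daemonset s3-csi-node -n kube-system \
  -o jsonpath='{range .spec.template.spec.volumes[*]}{.name}{" -> "}{.hostPath.path}{"\n"}{end}'
```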
Looking for any potential fixes or things to try. Thanks
Environment
Kubernetes version (use kubectl version): v1.29.4-eks-036c24b
Using default values