
canary-on-1.16 block tests flaky #119

Closed
msau42 opened this issue Nov 8, 2019 · 8 comments
Labels: kind/failing-test, lifecycle/rotten

Comments

msau42 (Collaborator) commented Nov 8, 2019

https://k8s-testgrid.appspot.com/sig-storage-csi-ci#canary-on-1.16

Oddly enough, canary-on-master tests seem fine.

@jsafrane @pohly could you help take a look?

/kind failing-test

k8s-ci-robot added the kind/failing-test label Nov 8, 2019
msau42 (Collaborator, Author) commented Jan 10, 2020

It's flaky across all jobs, not just this one.

I think it's related to running on kind, where hostpath volumes go to the host. There may be races due to another test grabbing the same device.

msau42 (Collaborator, Author) commented Jan 10, 2020

Although if that were the case, then the in-tree parallel tests should be flaky too? Maybe it's a probability game, since there are a lot more tests running in-tree.

pohly (Contributor) commented Jan 13, 2020

There may be races due to another test grabbing the same device

Creating a loop device is done by kubelet and the CSI hostpath driver with https://github.com/kubernetes/kubernetes/blob/8a09460c2f7ba8f6acd8a6fb7603ed3ac4805eb6/pkg/volume/util/volumepathhandler/volume_path_handler_linux.go#L96-L112.
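
For anyone who wants to reproduce just that step outside the kubelet, here is a minimal Go sketch of the same call (my own approximation, not the actual volume_path_handler code; the helper name attachLoopDevice is mine):

package main

// Minimal sketch: attach a file as a loop device the same way the kubelet
// helper does it, i.e. shell out to "losetup -f --show <file>" and use the
// device path it prints. Approximation only.

import (
	"fmt"
	"os/exec"
	"strings"
)

// attachLoopDevice returns the loop device (e.g. /dev/loop3) that now backs path.
func attachLoopDevice(path string) (string, error) {
	out, err := exec.Command("losetup", "-f", "--show", path).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("losetup -f --show %s: %v: %s", path, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Assumes the backing file already exists, e.g. via "truncate -s 1G /tmp/test".
	dev, err := attachLoopDevice("/tmp/test")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("attached:", dev)
}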

The command invocation itself ("losetup -f --show") shouldn't be racy. Let's look at what it actually does (strace output):

openat(AT_FDCWD, "/dev/loop-control", O_RDWR|O_CLOEXEC) = 3
ioctl(3, LOOP_CTL_GET_FREE)             = 0
close(3)                                = 0
lstat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=260, ...}) = 0
lstat("/tmp/test", {st_mode=S_IFREG|0644, st_size=1073741824, ...}) = 0
openat(AT_FDCWD, "/tmp/test", O_RDWR|O_CLOEXEC) = 3
openat(AT_FDCWD, "/dev/loop0", O_RDWR|O_CLOEXEC) = 4
ioctl(4, LOOP_SET_FD, 3)                = 0
ioctl(4, LOOP_SET_STATUS64, {lo_offset=0, lo_number=0, lo_flags=0, lo_file_name="/tmp/test", ...}) = 0

losetup first determines the next available loop device, then opens it and configures it. At first glance this looks like it might be racy, but that depends on how LOOP_SET_FD and losetup behave when someone else grabs the device in parallel.
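
For reference, that LOOP_CTL_GET_FREE + LOOP_SET_FD sequence can be reproduced directly. Here is a minimal Go sketch using golang.org/x/sys/unix (illustrative only, this is neither the losetup nor the kubelet source, and the error handling is deliberately simplistic):

package main

// Rough Go equivalent of the strace above: ask /dev/loop-control for a free
// device number, then bind the backing file to that device with LOOP_SET_FD.

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func attachLoop(backingFile string) (string, error) {
	ctl, err := unix.Open("/dev/loop-control", unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		return "", err
	}
	defer unix.Close(ctl)

	// LOOP_CTL_GET_FREE returns the number of the next unused loop device.
	num, err := unix.IoctlRetInt(ctl, unix.LOOP_CTL_GET_FREE)
	if err != nil {
		return "", err
	}
	loopPath := fmt.Sprintf("/dev/loop%d", num)

	file, err := unix.Open(backingFile, unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		return "", err
	}
	defer unix.Close(file)

	// Fails with ENOENT if /dev/loopN does not exist, e.g. in a container
	// whose /dev is a static copy made before the device was created.
	loop, err := unix.Open(loopPath, unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		return "", err
	}
	defer unix.Close(loop)

	// LOOP_SET_FD fails with EBUSY if someone else grabbed the device in the
	// meantime; real losetup recovers by retrying with a new free device.
	if err := unix.IoctlSetInt(loop, unix.LOOP_SET_FD, file); err != nil {
		return "", err
	}
	return loopPath, nil
}

func main() {
	fmt.Println(attachLoop("/tmp/test"))
}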

I tested that by running losetup under gdb and pausing it in the LOOP_SET_FD ioctl, then taking the loop device with some other losetup call. The first invocation recovered gracefully from that by retrying with another loop device.

So my conclusion is that we don't have a race around loop device creation.

Perhaps KinD containers have the same issue that we also had in the CSI hostpath container, where /dev is a static copy of the host's /dev and newly created loop devices therefore don't show up?

Indeed, when I just ran the CSI hostpath prow tests on my development machine after using up all existing loop devices, the blockvolume tests failed because losetup fails:

Jan 13 08:31:51.654: INFO: At 2020-01-13 08:27:28 +0100 CET - event for security-context-6ade56c3-0fd9-4080-a3b5-e46a57584a4c: {kubelet csi-prow-worker} FailedMapVolume: MapVolume.AttachFileDevice failed for volume "pvc-228ac164-bec1-43fe-a3d1-7671ef0549f6" : exit status 1

I haven't found more about it in the logs, but I can reproduce the problem manually by creating loop devices with docker exec f049e80f378a sh -c 'truncate -s 1G /tmp/testfile; losetup -f --show /tmp/testfile' until all the loop devices that existed when the KinD container was started are in use, at which point the command fails with:

losetup: /tmp/testfile: failed to set up loop device: No such file or directory
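
A quick way to check that hypothesis on a node is to ask the kernel for the next free loop device number and then see whether the corresponding node exists in /dev. A rough sketch (my own, purely for illustration):

package main

// Quick check of the hypothesis: the kernel hands out a new loop device
// number, but the node for it never appears in the container's static /dev,
// so losetup's open() fails with "No such file or directory". Sketch only.

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	ctl, err := unix.Open("/dev/loop-control", unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		fmt.Println("open /dev/loop-control:", err)
		return
	}
	defer unix.Close(ctl)

	num, err := unix.IoctlRetInt(ctl, unix.LOOP_CTL_GET_FREE)
	if err != nil {
		fmt.Println("LOOP_CTL_GET_FREE:", err)
		return
	}

	path := fmt.Sprintf("/dev/loop%d", num)
	if _, err := os.Stat(path); err != nil {
		// This is where an affected KinD node falls over: the kernel knows
		// about loopN, but /dev was populated at container start and never
		// gets the new node.
		fmt.Printf("kernel offers %s, but it is missing from /dev: %v\n", path, err)
		return
	}
	fmt.Printf("%s exists, losetup should be able to use it\n", path)
}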

I'll file an issue against KinD.

pohly (Contributor) commented Jan 13, 2020

I'll file an issue against KinD.

=> kubernetes-sigs/kind#1248

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Apr 12, 2020
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 12, 2020
msau42 (Collaborator, Author) commented May 13, 2020

/close

k8s-ci-robot (Contributor) commented

@msau42: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
