
Fix flaky logger test #901


Merged
merged 1 commit into master from 896-fix-flaky-logger-test on Jun 4, 2025

Conversation

solnic
Collaborator

@solnic solnic commented May 28, 2025

This adds explicit timeouts to a couple of assert_receive calls to reduce the risk of random test failures (a minimal sketch of the pattern follows below).

Closes #896

#skip-changelog
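
For context, the fix boils down to passing an explicit timeout as the second argument to assert_receive rather than relying on ExUnit's default of 100 ms, which a slow CI worker can easily exceed. The snippet below is a minimal sketch of that pattern, not the actual diff; the message shape is borrowed from the logger handler tests and the 1_000 ms value is purely illustrative.

    # Before: relies on ExUnit's default assert_receive timeout (100 ms),
    # which can be too tight on a busy CI machine.
    assert_receive {^ref, %{message: %{formatted: "Second"}}}

    # After: an explicit, more generous timeout makes a spurious
    # "no matching message" failure much less likely.
    assert_receive {^ref, %{message: %{formatted: "Second"}}}, 1_000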

@solnic force-pushed the 896-fix-flaky-logger-test branch from 1ebf46e to 5b3d029 on May 28, 2025 at 12:15
@solnic force-pushed the 896-fix-flaky-logger-test branch from 5b3d029 to f8d9893 on May 28, 2025 at 12:16
@solnic marked this pull request as ready for review on May 28, 2025 at 12:19
@solnic requested a review from whatyouhide on May 28, 2025 at 12:19
@solnic
Collaborator Author

solnic commented May 28, 2025

@whatyouhide I ran it a bunch of times with --repeat-until-failure 100 and it pointed to another flaky assertion, so I added a timeout there too. Unfortunately, running all the tests from this file with this option yields random failures as well, and the same goes for running the whole suite. Those were harder to figure out, so I'm going to report them separately whenever (and if) I see them in regular CI builds.
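
For anyone reproducing this, the flag can be passed straight to mix test (it's available in recent Elixir versions) and scoped to a single file; the invocation below is one way to do it, using the test file that shows up in the failure output later in this thread.

    # Re-run the logger handler tests repeatedly, stopping as soon as one
    # of them fails or after 100 successful passes.
    mix test test/sentry/logger_handler_test.exs --repeat-until-failure 100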

@whatyouhide
Collaborator

These were harder to figure out

What do you mean? Can we catalogue them? Also, did you use --repeat-until-failure in CI or locally? CI tends to be significantly slower.

@solnic
Collaborator Author

solnic commented May 29, 2025

These were harder to figure out

What do you mean? Can we catalogue them? Also, did you use --repeat-until-failure in CI or locally? CI tends to be significantly slower.

This is outside the scope of this PR, though. I just want to get this merged, since it addresses a couple of flaky tests.

The other two failures I see when running the tests with the --repeat-until-failure 100 option are:

  1) test send_event/2 dedupes events (Sentry.ClientTest)
     test/sentry/client_test.exs:407
     match (=) failed
     code:  assert {:ok, "340"} = Client.send_event(event, [])
     left:  {:ok, "340"}
     right: :excluded
     stacktrace:
       test/sentry/client_test.exs:432: anonymous fn/3 in Sentry.ClientTest."test send_event/2 dedupes events"/1
       (elixir 1.18.4) lib/enum.ex:2546: Enum."-reduce/3-lists^foldl/2-0-"/3
       test/sentry/client_test.exs:427: (test)

and

  1) test discard threshold discards logged messages (Sentry.LoggerHandlerTest)
     test/sentry/logger_handler_test.exs:650
     Assertion failed, no matching message after 5000ms
     The following variables were pinned:
       ref = #Reference<0.2667273741.3767795718.57688>
     The process mailbox is empty.
     code: assert_receive {^ref, %{message: %{formatted: "Second"}}}
     stacktrace:
       test/sentry/logger_handler_test.exs:657: (test)

     The following output was logged:
     
     12:56:42.568 [error] First
     
     12:56:42.569 [error] Second
     
     12:56:42.569 [error] Third

This happens both locally and on CI. It seems to be specific to the repeat option, though; I don't recall seeing these failures in normal CI builds.

@solnic force-pushed the 896-fix-flaky-logger-test branch from 2e9a8f1 to f8d9893 on May 29, 2025 at 12:58
@solnic merged commit 565ca03 into master on Jun 4, 2025
19 of 20 checks passed
@solnic deleted the 896-fix-flaky-logger-test branch on June 4, 2025 at 06:37