
[tests] Rework journal size calculation for tests #3962

Draft · wants to merge 1 commit into main from jcastillo-fix-tests-journal
Conversation

jcastill
Member

In #3853 it was discovered that size calculations via get_usage() did not correspond to the expected size, and the tests were sometimes failing.
This commit tries to fix the problem by calling
'journalctl' directly and then calculating the size from its output.

Related: #3853


Please place an 'X' inside each '[]' to confirm you adhere to our Contributor Guidelines

  • Is the commit message split over multiple lines and hard-wrapped at 72 characters?
  • Are the subject and message clear and concise?
  • Does the subject start with [plugin_name] if submitting a plugin patch or a [section_name] if part of the core sosreport code?
  • Does the commit contain a Signed-off-by: First Lastname [email protected]?
  • Are any related Issues or existing PRs properly referenced via a Closes (Issue) or Resolved (PR) line?
  • Are all passwords or private data gathered by this PR obfuscated?


Congratulations! One of the builds has completed. 🍾

You can install the built RPMs by following these steps:

  • sudo yum install -y dnf-plugins-core on RHEL 8
  • sudo dnf install -y dnf-plugins-core on Fedora
  • dnf copr enable packit/sosreport-sos-3962
  • And now you can install the packages.

Please note that the RPMs should be used only in a testing environment.

@jcastill
Member Author

Starting this as a draft. This is a first attempt at solving the issue in #3853. I'm sure there are more elegant ways to do it, so please let me know your thoughts.
I used process.run from avocado.utils, as we do in the teamd plugin test, since I figured that's the best way to get the output of journalctl.
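
For reference, here is a minimal sketch of that approach, assuming an illustrative helper name and journalctl flags (not the PR's actual code):

```python
# A minimal sketch: run journalctl directly via avocado's process helper
# and measure the size of its output, instead of relying on get_usage().
# The function name and flags are illustrative only.
from avocado.utils import process


def journal_output_size_mb():
    # Dump the journal in a compact format and count the bytes produced.
    # ignore_status=True keeps a non-zero journalctl exit code from
    # aborting the measurement.
    result = process.run('journalctl --no-pager -o cat',
                         ignore_status=True)
    return len(result.stdout) / (1024 * 1024)
```

Note that this buffers the whole journal output in memory, which is exactly the concern raised in the next comment.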

@pmoravec
Contributor

pmoravec commented Mar 13, 2025

I see just bad options here :) I don't want to undermine the idea that I came up with, but I don't like running plain journalctl on a system that can have huge logs (the test will consume a huge pile of memory, no?). I don't have anything better at hand, though, apart from the obvious idea of "let's generate 20M of logs every time", which has its own gotchas as well (slow, and it spams the logs on repeated runs).

@jcastill
Member Author

    I see just bad options here :) I don't want to undermine the idea that I came up with, but I don't like running plain journalctl on a system that can have huge logs (the test will consume a huge pile of memory, no?).

Yes, that's one possibility. We have a timeout as well, so we can control the capture to some extent.

    But I don't have anything better at hand, apart from the obvious idea of "let's generate 20M of logs every time", which has its own gotchas as well (slow, and it spams the logs on repeated runs).

We could modify the sizes on the actual command, so instead of:

-o logs --journal-size=20 --log-size=10

we reduce these to half or less than that, unless 20M and 10M are there because of a specific scenario.

And a third option could be to generate at least 20M anyway, without checking the current size. We already have a disclaimer in the test:

    Note: this test will insert over 100MB of garbage into the test system's
    journal

So the expectation is set already.
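
To illustrate the second option, a halved configuration could look roughly like this; the class layout mirrors the other sos avocado tests, and the concrete values (10M/5M) are only an example, not something agreed in this thread:

```python
# Hypothetical sketch: drive the logs plugin test with smaller limits.
# The sos_cmd attribute follows the pattern of the sos avocado tests;
# the halved values are illustrative only.
from sos_tests import StageTwoReportTest


class LogsSizeLimitTest(StageTwoReportTest):
    """Exercise --journal-size/--log-size with smaller, faster limits."""

    sos_cmd = '-o logs --journal-size=10 --log-size=5'
```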

@pmoravec
Contributor

    And a third option could be to generate at least 20M anyway, without checking the current size.

When you repeatedly run the same test on your system, the journal gets redundantly bloated. Each option has its gotcha :)

Maybe a "compromise" in:

  • decrease tested sizes from --journal-size=20 --log-size=10 several times, e.g. to 10 / 5
  • modify the current test to "if stored journal is less than 30 (3times what we need), then generate 10M"
    ?

The 3times more than what we need should be sufficiently high (really?) to prevent the phenomena I discovered, while lowering the test's sizes and having a upper limit ("less than 30MB") limits the journal bloating..?
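
A rough sketch of that compromise, assuming hypothetical helper names and parsing of `journalctl --disk-usage` (not actual test code):

```python
# Hypothetical sketch of the compromise: only pad the journal when the
# stored journal is below 3x the size limit the test exercises.
import re

from avocado.utils import process

JOURNAL_SIZE_LIMIT = 10   # MB passed to --journal-size (lowered from 20)
SAFETY_FACTOR = 3         # only pad if below 3x the limit, i.e. < 30M


def journal_disk_usage_mb():
    # journalctl --disk-usage prints something like:
    #   "Archived and active journals take up 48.0M in the file system."
    result = process.run('journalctl --disk-usage', ignore_status=True)
    match = re.search(r'take up ([\d.]+)([KMG])', result.stdout_text)
    if not match:
        return 0.0
    size, unit = float(match.group(1)), match.group(2)
    return size * {'K': 1 / 1024, 'M': 1, 'G': 1024}[unit]


def ensure_journal_is_big_enough(pad_mb=10):
    # If the stored journal is below 3x what the test needs, insert
    # roughly pad_mb of filler messages so the limit is actually hit.
    if journal_disk_usage_mb() >= JOURNAL_SIZE_LIMIT * SAFETY_FACTOR:
        return
    filler = 'sos-test-filler ' * 64            # ~1 KiB per message
    for _ in range(pad_mb * 1024):              # ~pad_mb MiB in total
        # A real implementation would batch this for speed.
        process.run(f'logger "{filler}"', ignore_status=True)
```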

@jcastill
Member Author

Maybe a "compromise" in:

  • decrease tested sizes from --journal-size=20 --log-size=10 several times, e.g. to 10 / 5
  • modify the current test to "if stored journal is less than 30 (3times what we need), then generate 10M"
    ?

The 3times more than what we need should be sufficiently high (really?) to prevent the phenomena I discovered, while lowering the test's sizes and having a upper limit ("less than 30MB") limits the journal bloating..?

I think this could work. I'll modify this PR and send it again.

In 3853 it was discovered that size calculations via get_usage()
did not correspond to the expected size, and the tests were
sometimes failing.
This commit tries to fix the problem by calling 'journalctl'
directly and then calculating the size from the output.

Related: sosreport#3853

Signed-off-by: Jose Castillo <[email protected]>
@jcastill jcastill force-pushed the jcastillo-fix-tests-journal branch from ce45fd5 to 556d845 on March 18, 2025, 19:08