Conversation

@ayjayt (Collaborator) commented Aug 27, 2025

This PR implements testing for the entire public API.

It also fixes all the bugs found by that testing.

Bugs Fixed:

  • Allow users to pass file paths for plotlyjs/mathjax in both URL and path form
  • Explicitly handle mathjax=True
  • Better handle instances where users pass a Path instead of a str (casting and isinstance flexibility; see the sketch after this list)
  • Close the browser before closing kaleido, preventing hangs in certain instances
  • Use named arguments in the Kaleido constructor instead of generic kwargs (smell fix)
  • Create the temporary file during .open(), since it requires a matching close() (prevents a leak when the user never calls .open(); a real corner case)
  • Properly check that an error log exists before trying to append to it
  • Deal with the corner case where asyncio.return_task() doesn't return a task
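
As an illustration of the Path/str handling mentioned above, a minimal sketch of the kind of normalization involved (the helper name is hypothetical; the actual implementation may differ):

from pathlib import Path

def _normalize_source(source):
    # Accept either str or Path and cast strings, so downstream code
    # only ever sees Path objects (hypothetical helper, for illustration)
    if isinstance(source, str):
        source = Path(source)
    if not isinstance(source, Path):
        raise TypeError(f"expected str or Path, got {type(source).__name__}")
    if not source.exists():
        raise FileNotFoundError(source)
    return source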

These were all bugs found by testing the public API as it's expected to be used.

Also, typing was added, a private attribute was changed to public to aid testing, and tests were added.

Comment on lines +221 to +222

if not find_spec("plotly"):
    pytest.skip("Plotly not available - cannot test force_cdn override")
Collaborator:

Shouldn't this test only run when plotly is available for import?

Collaborator Author (@ayjayt):

Yes: here it says that if it can't find plotly, it skips the test. In other words, it only runs when plotly is available.

Collaborator:

Yeah, what I mean is that I don't like the if (condition): pytest.skip() usage, because it could silently skip this test entirely if Plotly is never available when the tests run.

Can't this if check be removed entirely?

Collaborator Author (@ayjayt):

Well, no. I'm happy to change it to a fail; I think that makes sense. But without plotly present, this test doesn't accomplish its goal, which is making sure "force_cdn" actually overrides plotly.py. We have another test to make sure that the CDN is also the default.

Collaborator:

Yeah I think changing it to a fail would be better (as long as that doesn't break the test)

Collaborator Author (@ayjayt):

Okay, but I did it in a later PR
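
For reference, the skip-to-fail change being agreed on here would look roughly like this (a sketch; per the comment above, the actual edit landed in a later PR):

from importlib.util import find_spec

import pytest

if not find_spec("plotly"):
    # fail loudly instead of skipping, so a missing Plotly install
    # can't silently disable this test
    pytest.fail("Plotly not available - cannot test force_cdn override")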


# Test user overrides
@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
Collaborator:

How do @given and data.draw() work? Is this testing all possible values returned from st_valid_path, or a random subset?

Collaborator Author (@ayjayt) commented Aug 28, 2025:

data.draw() is basically what Hypothesis does normally; it just lets you do the drawing during the test instead of during Hypothesis setup. So, like all Hypothesis testing, it takes a random sample (default n=100 examples).

The st.data() + data.draw() strategy allows you to do the sampling at runtime inside your test function instead of during test setup. I was having trouble with Windows not recognizing Path(__file__) as a valid file (according to ChatGPT, that can happen), and I wanted to use the built-in tmp_path fixture (pretty robust), which is only available at runtime.
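
A minimal sketch of that pattern, for illustration (the drawn strategy and file contents here are made up, not the actual test):

from hypothesis import HealthCheck, given, settings
from hypothesis import strategies as st

@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
def test_with_runtime_draw(tmp_path, data):
    # tmp_path is pytest's built-in fixture and only exists at runtime,
    # so the draw happens inside the test body instead of in @given
    name = data.draw(st.text(alphabet="abcdefgh", min_size=1, max_size=8))
    path = tmp_path / f"{name}.js"
    path.write_text("// stub")
    assert path.exists()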

Comment on lines +391 to +393

# Test with regular path
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=str(nonexistent_file_path))
Collaborator:

Should test the case where a Path is passed as well, right? For this test and the others.

Suggested change
- # Test with regular path
- with pytest.raises(FileNotFoundError):
-     PageGenerator(plotly=str(nonexistent_file_path))
+ # Test with Path object
+ with pytest.raises(FileNotFoundError):
+     PageGenerator(plotly=nonexistent_file_path)
+ # Test with path as string
+ with pytest.raises(FileNotFoundError):
+     PageGenerator(plotly=str(nonexistent_file_path))

Collaborator Author (@ayjayt):

This found an error, btw.

with pytest.warns(RuntimeWarning, match="already"):
    kaleido.start_sync_server(silence_warnings=False)

kaleido.start_sync_server(silence_warnings=True)
Collaborator:

We need to explicitly test that no warning is emitted here, right?

Or have you set the Pytest configuration such that a warning will trigger a test failure?

Collaborator Author (@ayjayt):

Yes, all warnings are upgraded to errors via CLI options.
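
For reference, pytest can promote warnings to errors globally (e.g. the -W error command-line option or an equivalent filterwarnings setting) or per test via a marker. A self-contained sketch of the per-test form, using a stand-in function rather than the real API:

import pytest

def quiet():
    # stand-in for a call like kaleido.start_sync_server(silence_warnings=True)
    pass

@pytest.mark.filterwarnings("error")
def test_quiet_emits_no_warning():
    # with warnings promoted to errors, any warning raised here fails
    # the test, so calling the function asserts that it stays silent
    quiet()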

with pytest.warns(RuntimeWarning, match="closed"):
    kaleido.stop_sync_server(silence_warnings=False)

kaleido.stop_sync_server(silence_warnings=True)
Collaborator:

Likewise here

@ayjayt marked this pull request as ready for review August 28, 2025 21:55

_h_url = st.tuples(
    st.sampled_from(["s", ""]),
    st.text(
Collaborator:

Might be misunderstanding here since I'm not familiar with Hypothesis, but the number of possible values that could be sampled here (character strings of 1-20 characters) is huge. Could that space dwarf the other sampling being done here and reduce the variety of the test cases? I'm not sure if that makes sense; I'm also not sure how Hypothesis's sampling logic works.

Collaborator Author (@ayjayt):

This is an extremely interesting question, and I don't have a better response than "Hypothesis is said to handle these things pretty reasonably, especially making sure to try a good range of combinations and corner cases," which I take from AI.

I'm under the impression that in this particular case, every time text is sampled, all the other options are sampled both independently and intentionally stratified to cover the corner cases.

Collaborator Author (@ayjayt) commented Aug 31, 2025:

I've been going back and forth on this, and no, I don't think the range of values makes it more favored in sampling. For continuous ranges, I think Hypothesis actually works from at least a statistical representation of the possible values (min, max, etc.).

@gvwilson added the labels feature (something new), P2 (needed for current cycle), testing (automated tests), and P1 (needs immediate attention), and removed the P2 (needed for current cycle) label on Sep 9, 2025
Collaborator @emilykl left a comment:

🚀

@ayjayt merged commit e0d5907 into master Sep 10, 2025
4 checks passed
@ayjayt deleted the andrew/test_coverage branch September 15, 2025 20:14