Andrew/test coverage #397
Conversation
if not find_spec("plotly"):
    pytest.skip("Plotly not available - cannot test force_cdn override")
Shouldn't this test only run when plotly is available for import?
Yes, here it says that if it can't find plotly, it skips the test, i.e. "only run when plotly is available".
Yeah, what I mean is that I don't like the usage of if (condition): pytest.skip(), because it could lead to essentially silently skipping this test entirely if Plotly is never available when the test is run. Can't this if check be removed entirely?
Well, no. I'm happy to change it to a fail, I think that makes sense, but without plotly present this test doesn't accomplish its goal, which is making sure "force_cdn" actually overrides plotly.py. We have another test to make sure that the CDN is also the default.
Yeah I think changing it to a fail would be better (as long as that doesn't break the test)
Okay, but I did it in a later PR.
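For reference, a minimal sketch of the suggested change, assuming the goal is to fail loudly rather than skip when plotly is missing (illustrative only; the actual fix landed in a later PR and may differ):

from importlib.util import find_spec

import pytest


def test_force_cdn_overrides_plotly():
    # Fail (rather than skip) so a missing plotly install can't silently
    # disable this check; the "force_cdn" behavior would then be exercised below.
    if not find_spec("plotly"):
        pytest.fail("Plotly not available - cannot test force_cdn override")
    ...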
# Test user overrides
@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
How does @given and data.draw() work? Is it testing all possible values returned from st_valid_path, or a random subset?
data.draw() is basically what hypothesis does normally; it just lets you do the draw during the test instead of during hypothesis setup. So, like all hypothesis testing, it takes a random sample of the default size (n=100). The st.data() + data.draw() strategy allows you to do the sampling at runtime, inside your test function, instead of during test setup. I was having trouble with Windows not recognizing Path(__file__) as a valid file (according to ChatGPT, that can happen), and wanted to use the built-in tmp_path fixture (pretty robust), which is only available at runtime.
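As a hedged illustration of that pattern (not the PR's actual test; the drawn strategy and file name here are placeholders), st.data() + data.draw() lets a runtime-only fixture like tmp_path be combined with Hypothesis:

from hypothesis import HealthCheck, given, settings
from hypothesis import strategies as st


@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
def test_user_override_with_tmp_path(tmp_path, data):
    # Draw a value inside the test body, where the tmp_path fixture exists.
    name = data.draw(st.text(alphabet="abcdefghij", min_size=1, max_size=10))
    target = tmp_path / f"{name}.js"
    target.write_text("// placeholder bundle")
    assert target.exists()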
# Test with regular path
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=str(nonexistent_file_path))
Should test the case where a Path is passed as well, right? For this test and the others.
Suggested change: replace

# Test with regular path
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=str(nonexistent_file_path))

with

# Test with Path object
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=nonexistent_file_path)

# Test with path as string
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=str(nonexistent_file_path))
This found an error, btw.
with pytest.warns(RuntimeWarning, match="already"):
    kaleido.start_sync_server(silence_warnings=False)

kaleido.start_sync_server(silence_warnings=True)
We need to explicitly test that no warning is emitted here, right?
Or have you set the Pytest configuration such that a warning will trigger a test failure?
Yes, all warnings are upgraded to errors via CLI options.
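(pytest supports this via, for example, the -W error command-line option or the filterwarnings ini setting; the project's exact options aren't quoted in this thread.) If an explicit assertion were preferred instead, a hedged sketch, assuming a repeated start call is what normally triggers the warning, could look like:

import warnings

import kaleido


def test_start_sync_server_silenced():
    kaleido.start_sync_server()  # first start; a repeat start normally warns
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        kaleido.start_sync_server(silence_warnings=True)
    # Explicitly assert that silencing really suppressed the warning.
    assert not caught, [str(w.message) for w in caught]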
with pytest.warns(RuntimeWarning, match="closed"):
    kaleido.stop_sync_server(silence_warnings=False)

kaleido.stop_sync_server(silence_warnings=True)
Likewise here
_h_url = st.tuples(
    st.sampled_from(["s", ""]),
    st.text(
I might be misunderstanding here since I'm not familiar with hypothesis, but the number of possible values that could be sampled here (character strings of 1-20 characters) is huge. Could that space dwarf the other sampling being done here and reduce the variety of the test cases? I'm not sure if that makes sense; I'm also not sure how hypothesis's sampling logic works.
This is an extremely interesting question, and I don't have a better response than "hypothesis is said to handle these things pretty reasonably, especially making sure to try a good range of combinations and corner cases," which I take from AI.
I'm under the impression that, in this particular case, every time text is sampled, all the other options are sampled both independently and intentionally stratified to cover all corner cases.
I've been going back and forth on this, and no, I don't think the size of the value range makes it more favored in sampling. I think that for continuous ranges hypothesis actually uses at least a statistical representation of the possible values (min, max, etc.).
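For what it's worth, a quick hedged illustration (not the PR's test; the text strategy bounds are placeholders): each component of st.tuples() is drawn independently per example, so a large text space doesn't starve the small sampled_from() component of coverage.

from hypothesis import given, settings
from hypothesis import strategies as st

seen_prefixes = set()


@settings(max_examples=100)
@given(st.tuples(st.sampled_from(["s", ""]), st.text(min_size=1, max_size=20)))
def test_components_sampled_independently(pair):
    prefix, _host = pair
    seen_prefixes.add(prefix)
    assert prefix in ("s", "")


# After the run, seen_prefixes will almost certainly contain both "s" and "",
# even though the text component has a vastly larger value space.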
🚀
This PR implements testing for the entire public API.
It also fixes all the bugs found by that testing.
Bugs Fixed:
- Path instead of str: casting and isinstance flexibility (a sketch follows at the end of this description).
- open() given that it requires close() (prevents a leak in instances where the user doesn't .open(), a real corner case).
- append to it.
- asyncio.return_task() doesn't return a task.

These were all bugs found from doing tests using the public API as it's expected to be used.
Also, typing was added, a private attribute was changed to public to aid testing, and tests were added.
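For illustration, a minimal sketch of the Path/str flexibility described in the first bullet above (a hypothetical helper; the real PageGenerator code may differ):

from pathlib import Path


def _normalize_plotly_arg(plotly):
    # Hypothetical helper: accept either str or Path, cast via Path(),
    # and raise FileNotFoundError for missing files in both cases.
    path = plotly if isinstance(plotly, Path) else Path(plotly)
    if not path.exists():
        raise FileNotFoundError(f"plotly bundle not found: {path}")
    return path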