
ref(transport): Add shared sync/async transport superclass and create a sync http subclass #4572


Open — wants to merge 10 commits into base: potel-base

Conversation

@srothh srothh (Member) commented Jul 11, 2025

Moved shared sync/async logic into a superclass, and moved sync-transport-specific code into a new subclass (BaseSyncHttpTransport), from which the current transport implementations inherit. Note that the threaded worker is currently still created in the superclass. As a next step, I want to add an abstract worker class covering both the threaded and the async task worker, but I wanted to keep this PR atomic and only re-implement the sync transport with the new hierarchy.

Fixes GH-4568
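As a rough illustration of the hierarchy this PR describes (a minimal sketch, not the actual SDK code — the method bodies and the `last_sent` attribute are stand-ins for illustration):

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Abstract transport interface (simplified)."""

    @abstractmethod
    def capture_envelope(self, envelope):
        ...

class HttpTransportCore(Transport):
    """Shared sync/async logic: rate limiting, envelope preparation, etc."""

    def _prepare_envelope(self, envelope):
        # In the SDK this drops over-quota items, serializes the body,
        # and builds request headers; here it just passes data through.
        return envelope, b"{}", {"Content-Type": "application/x-sentry-envelope"}

class BaseSyncHttpTransport(HttpTransportCore):
    """Sync-specific code: blocking sends, dispatched via the threaded worker."""

    def capture_envelope(self, envelope):
        prepared = self._prepare_envelope(envelope)
        if prepared is not None:
            envelope, body, headers = prepared
            self._send_request(body, headers=headers)

    def _send_request(self, body, headers):
        # A real implementation would POST the body to Sentry here.
        self.last_sent = (body, headers)
```

The concrete transports (e.g. the urllib3- and httpcore-based implementations) would then inherit from `BaseSyncHttpTransport` and only override the HTTP-client-specific pieces.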


codecov bot commented Jul 11, 2025

❌ 54 Tests Failed:

Tests completed: 21122 | Failed: 54 | Passed: 21068 | Skipped: 1098
View the top 3 failed test(s) by shortest run time
tests.integrations.huggingface_hub.test_huggingface_hub::test_nonstreaming_chat_completion[False-False-True]
Stack Traces | 0.184s run time
.../integrations/huggingface_hub/test_huggingface_hub.py:56: in test_nonstreaming_chat_completion
    response = client.text_generation(
sentry_sdk/integrations/huggingface_hub.py:84: in new_text_generation
    raise e from None
sentry_sdk/integrations/huggingface_hub.py:80: in new_text_generation
    res = f(*args, **kwargs)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../huggingface_hub/inference/_client.py:2297: in text_generation
    request_parameters = provider_helper.prepare_request(
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/_common.py:93: in prepare_request
    provider_mapping_info = self._prepare_mapping_info(model)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:38: in _prepare_mapping_info
    _check_supported_task(model_id, self.task)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:187: in _check_supported_task
    raise ValueError(
E   ValueError: Model 'mistralai/Mistral-Nemo-Instruct-2407' doesn't support task 'text-generation'. Supported tasks: 'None', got: 'text-generation'
tests.integrations.huggingface_hub.test_huggingface_hub::test_streaming_chat_completion[False-True-False]
Stack Traces | 0.184s run time
.../integrations/huggingface_hub/test_huggingface_hub.py:112: in test_streaming_chat_completion
    client.text_generation(
sentry_sdk/integrations/huggingface_hub.py:84: in new_text_generation
    raise e from None
sentry_sdk/integrations/huggingface_hub.py:80: in new_text_generation
    res = f(*args, **kwargs)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../huggingface_hub/inference/_client.py:2297: in text_generation
    request_parameters = provider_helper.prepare_request(
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/_common.py:93: in prepare_request
    provider_mapping_info = self._prepare_mapping_info(model)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:38: in _prepare_mapping_info
    _check_supported_task(model_id, self.task)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:187: in _check_supported_task
    raise ValueError(
E   ValueError: Model 'mistralai/Mistral-Nemo-Instruct-2407' doesn't support task 'text-generation'. Supported tasks: 'None', got: 'text-generation'
tests.integrations.huggingface_hub.test_huggingface_hub::test_streaming_chat_completion[True-True-True]
Stack Traces | 0.184s run time
.../integrations/huggingface_hub/test_huggingface_hub.py:112: in test_streaming_chat_completion
    client.text_generation(
sentry_sdk/integrations/huggingface_hub.py:84: in new_text_generation
    raise e from None
sentry_sdk/integrations/huggingface_hub.py:80: in new_text_generation
    res = f(*args, **kwargs)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../huggingface_hub/inference/_client.py:2297: in text_generation
    request_parameters = provider_helper.prepare_request(
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/_common.py:93: in prepare_request
    provider_mapping_info = self._prepare_mapping_info(model)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:38: in _prepare_mapping_info
    _check_supported_task(model_id, self.task)
.tox/py3.8-huggingface_hub-v0.33.4/lib/python3.8.../inference/_providers/hf_inference.py:187: in _check_supported_task
    raise ValueError(
E   ValueError: Model 'mistralai/Mistral-Nemo-Instruct-2407' doesn't support task 'text-generation'. Supported tasks: 'None', got: 'text-generation'


@srothh srothh changed the title ref(transport): Added shared sync/async transport superclass and created a sync http subclass ref(transport): Add shared sync/async transport superclass and create a sync http subclass Jul 11, 2025
@srothh srothh marked this pull request as ready for review July 11, 2025 12:16
@srothh srothh requested a review from a team as a code owner July 11, 2025 12:16
@srothh srothh marked this pull request as draft July 11, 2025 12:17
@srothh srothh marked this pull request as ready for review July 11, 2025 12:17
@srothh srothh marked this pull request as draft July 17, 2025 09:43
@srothh srothh force-pushed the srothh/transport-class-hierarchy branch from 53b92f2 to 3607d44 Compare July 21, 2025 09:48
@srothh srothh marked this pull request as ready for review July 22, 2025 10:02
@antonpirker antonpirker (Member) left a comment:

Other than my little nitpicking, this looks great!

@@ -162,7 +162,7 @@ def _parse_rate_limits(
             continue


-class BaseHttpTransport(Transport):
+class HttpTransportCore(Transport):
     """The base HTTP transport."""

Please also update the docstring

# remove all items from the envelope which are over quota
def _prepare_envelope(
self: Self, envelope: Envelope
) -> Optional[Tuple[Envelope, io.BytesIO, Dict[str, str]]]:
new_items = []

please keep the # remove all items from the envelope which are over quota comment

Comment on lines 505 to 517
_prepared_envelope = self._prepare_envelope(envelope)
if _prepared_envelope is None:
return None
envelope, body, headers = _prepared_envelope
self._send_request(
body.getvalue(),
headers=headers,
endpoint_type=EndpointType.ENVELOPE,
envelope=envelope,
)
return None

Just personal taste, but I would change this to have only one return None so it is easier to read.

Suggested change — from:

    _prepared_envelope = self._prepare_envelope(envelope)
    if _prepared_envelope is None:
        return None
    envelope, body, headers = _prepared_envelope
    self._send_request(
        body.getvalue(),
        headers=headers,
        endpoint_type=EndpointType.ENVELOPE,
        envelope=envelope,
    )
    return None

to:

    _prepared_envelope = self._prepare_envelope(envelope)
    if _prepared_envelope is not None:
        envelope, body, headers = _prepared_envelope
        self._send_request(
            body.getvalue(),
            headers=headers,
            endpoint_type=EndpointType.ENVELOPE,
            envelope=envelope,
        )
    return None

srothh (Member Author) replied:

Agree, this looks nicer. Thanks!

@@ -641,3 +641,38 @@ def test_record_lost_event_transaction_item(capturing_server, make_client, span_
"reason": "test",
"quantity": span_count + 1,
} in discarded_events


srothh (Member Author) replied:

Done!

srothh added 10 commits July 28, 2025 10:48
…ted a sync transport HTTP subclass

Moved shared sync/async logic into a new superclass (HttpTransportCore), and moved sync-transport-specific code into a new subclass (BaseSyncHttpTransport), from which the current transport implementations inherit.

Fixes GH-4568
Removed an unnecessary TODO message and reverted a class name change for BaseHTTPTransport.

GH-4568
Adds test coverage for the error handling path when HTTP requests return
error status codes.

GH-4568
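The behavior that commit covers could be sketched like this (a hypothetical, heavily simplified model — the real transport also parses rate-limit headers, logs, and records client reports):

```python
class MiniTransport:
    """Toy model of how a transport reacts to HTTP response status codes."""

    def __init__(self):
        self.failed = 0

    def on_response(self, status):
        if status == 429:
            # the real code would parse Retry-After / X-Sentry-Rate-Limits here
            return "rate_limited"
        if not 200 <= status < 300:
            # record the lost event instead of raising to the caller
            self.failed += 1
            return "error"
        return "ok"
```

A test for the error path then simply feeds in a 4xx/5xx status and asserts that the failure is recorded without an exception propagating.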
Restore comments accidentally removed during a previous commit.
Refactored class names such that BaseHttpTransport now has the same functionality as before the hierarchy refactor

GH-4568
Add a new flush_async method to the Transport ABC. This is needed for the async transport: calling flush from the client while preserving execution order in close() requires it to be a coroutine, not a plain function.

GH-4568
Move flush_async down to the specific async transport subclass. This makes more sense anyway, as it will only be required by the async transport. If more async transports are expected, another shared superclass can be created.

GH-4568
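A rough sketch of why flush has to be a coroutine for the async transport (illustrative only; `AsyncHttpTransport` and its internals here are assumed for the example, not the SDK's actual class):

```python
import asyncio

class AsyncHttpTransport:
    """Sketch: sends become asyncio tasks, so flushing must await them."""

    def __init__(self):
        self._pending = set()
        self.sent = 0

    def capture_envelope(self, envelope):
        # schedule the send as a task instead of handing it to a threaded worker
        self._pending.add(asyncio.ensure_future(self._send(envelope)))

    async def _send(self, envelope):
        await asyncio.sleep(0)  # stands in for the real HTTP request
        self.sent += 1

    async def flush_async(self, timeout=2.0):
        # a coroutine, so close() can await it and preserve execution order
        # relative to in-flight sends
        if self._pending:
            await asyncio.wait_for(asyncio.gather(*self._pending), timeout)
            self._pending.clear()
```

A synchronous flush could not wait on these tasks without blocking the event loop, which is why the client's close path needs to await flush_async instead.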
Add necessary type annotations to the core HttpTransport to accommodate the async transport.

GH-4568
@srothh srothh force-pushed the srothh/transport-class-hierarchy branch from 062ab5b to 4e56e5c Compare July 28, 2025 08:48
Labels: none yet
Projects: none yet
2 participants