
[Bug Fix] Remove eager re-exports from inference __init__ to avoid heavy import side-effects #56

Merged
yubofredwang merged 2 commits into main from ywang/fix-vllm-import-issue
Mar 30, 2026

Conversation

@yubofredwang
Collaborator

These imports are unnecessary and prevent vLLM from running without the SGLang dependencies, so this removes them.

Remove eager re-exports from inference __init__ to avoid heavy import side-effects

The __init__.py files in torchspec/inference/ and torchspec/inference/engine/
eagerly imported all engine classes, causing vllm and other heavy dependencies
to load at package import time. Replace with direct module-path imports where
needed.
Copilot AI review requested due to automatic review settings March 30, 2026 05:00
Contributor

Copilot AI left a comment


Pull request overview

This PR removes eager package-level re-exports in torchspec.inference (and torchspec.inference.engine) to prevent import-time loading of optional inference backends (notably SGLang), and updates the training entrypoint to import the inference factory function directly.

Changes:

  • Update torchspec/train_entry.py to import prepare_inference_engines from torchspec.inference.factory.
  • Remove re-exported symbols and __all__ from torchspec/inference/__init__.py.
  • Remove re-exported engine symbols and __all__ from torchspec/inference/engine/__init__.py.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

Files changed:

  • torchspec/train_entry.py: Switches to an explicit import from torchspec.inference.factory after removing the torchspec.inference re-exports.
  • torchspec/inference/__init__.py: Removes inference factory re-exports to avoid eager imports and optional dependency loading.
  • torchspec/inference/engine/__init__.py: Removes engine class re-exports to avoid eager imports and optional dependency loading.
Comments suppressed due to low confidence (2)

torchspec/inference/engine/__init__.py:20

  • Dropping all re-exports from torchspec.inference.engine breaks prior imports like from torchspec.inference.engine import VllmEngine/InferenceEngine. If you still want to avoid importing optional dependencies at module import time, a lazy __getattr__-based re-export (optionally guarded with try/except and a targeted ImportError message when sglang/vllm are missing) would preserve the API without the eager dependency loading.

torchspec/inference/__init__.py:20

  • Removing the re-exports from torchspec.inference is a breaking API change (e.g., from torchspec.inference import prepare_inference_engines will now fail). To avoid eager imports while preserving the public entrypoints, consider using a lazy-export pattern (e.g., module-level __getattr__/__all__, like torchspec.transfer.mooncake.__init__) that imports factory only when the attribute is accessed and can surface a clearer error for missing optional deps.
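The lazy-export pattern both comments suggest is module-level __getattr__ (PEP 562): the package's __init__.py resolves a public name only when it is first accessed, so the heavy backend is not imported at package import time. Below is a minimal, self-contained sketch of the idea; the module name lazy_pkg and the use of the stdlib json module as a stand-in for the heavy optional backend are illustrative assumptions, not torchspec's actual code.

```python
import sys
import types

# Build a throwaway module to play the role of a package __init__.py.
# In torchspec this body would live in torchspec/inference/__init__.py.
mod = types.ModuleType("lazy_pkg")
mod.__all__ = ["dumps"]  # public names, re-exported lazily

def __getattr__(name):
    """PEP 562 hook: called only when normal attribute lookup fails."""
    if name in mod.__all__:
        try:
            import json  # stand-in for the heavy optional dependency
        except ImportError as exc:
            # Surface a targeted message for missing optional deps.
            raise ImportError(
                f"{name} requires the optional 'json' backend"
            ) from exc
        return getattr(json, name)
    raise AttributeError(f"module 'lazy_pkg' has no attribute {name!r}")

mod.__getattr__ = __getattr__
sys.modules["lazy_pkg"] = mod

import lazy_pkg  # cheap: the heavy dependency is NOT imported yet

# First attribute access triggers __getattr__ and the deferred import.
print(lazy_pkg.dumps({"a": 1}))
```

With this pattern, `from lazy_pkg import dumps` keeps working for callers, while importing the package itself stays free of side effects; an unknown attribute still raises AttributeError as usual.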



@yubofredwang yubofredwang merged commit 785e23a into main Mar 30, 2026
1 check passed
@yubofredwang yubofredwang deleted the ywang/fix-vllm-import-issue branch March 30, 2026 09:30


2 participants