installation issues for deployment #697
Hi @connormeaton, thanks a lot for sharing this.
So I'd recommend you test the 1st approach. And, when …
Thank you so much @oguiza. This was very helpful. Unfortunately, I am realizing that fastai and PyInstaller do not mix well for creating a shippable binary. It looks like converting the model to plain PyTorch and then exporting it with ONNX may be the best method for compilation and deployment. Do you have any recommendations on using a tsai model for inference in a PyTorch-only environment?
I'm not sure what you mean by a "PyTorch-only environment".
Yes, I can do that. I was just curious whether you had any experience running a tsai-trained model with PyTorch alone, without calling tsai at all. I'm assuming this is possible because tsai is built on PyTorch. The issue is that tsai/fastai do not interact well with PyInstaller. I want to turn my model into a binary and run it on other machines. I know this can be done with PyTorch and ONNX, so I was curious if you knew of any options. No worries if not; it's very specific. Thank you for your help!
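For illustration, this is roughly what I am hoping is possible — a minimal sketch, assuming `learn.model` is a plain `torch.nn.Module` (which I believe it is in fastai) and that the model traces cleanly; the shapes and file names below are placeholders:

```python
import torch

# In the training environment (tsai/fastai installed), `learn` is the trained Learner.
model = learn.model.eval().cpu()
example = torch.randn(1, 3, 100)           # placeholder (batch, channels, seq_len)
traced = torch.jit.trace(model, example)   # TorchScript bundles weights + architecture
traced.save("model_traced.pt")

# In the deployment environment (PyTorch only, no tsai/fastai imports):
model = torch.jit.load("model_traced.pt").eval()
with torch.no_grad():
    preds = model(torch.randn(1, 3, 100))  # same input layout as at trace time
```

TorchScript would be one torch-only route; ONNX would be another.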
Well, the issue is not so much converting the model to pure PyTorch or ONNX. You could do that when you train the model.
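For instance, a minimal sketch of that export step, assuming the trained Learner is available as `learn` and the model takes a `(batch, channels, seq_len)` float tensor (shapes, tensor names, and file paths below are placeholders):

```python
import torch

model = learn.model.eval().cpu()   # the underlying plain torch module
dummy = torch.randn(1, 3, 100)     # placeholder input with the training layout
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# In the deployment environment, only onnxruntime + numpy are needed:
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
x = np.random.rand(1, 3, 100).astype(np.float32)
logits = sess.run(None, {"input": x})[0]
```

One caveat: any preprocessing applied by the fastai dataloaders (normalization, etc.) is not baked into the exported graph, so it has to be reproduced on the raw inputs at inference time.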
Thank you so much for the info. I will review the notebook you referenced. …
Hello, I am looking to deploy a tsai-trained model for inference. I see you created the `tsai.inference` module for lightweight inference with `load_learner`. This looks like it will be helpful. However, I am having some issues building tsai in a fresh environment.
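For context, the inference path I am trying to use looks roughly like this (the file name and input array are placeholders; `get_X_preds`, as I understand it, is the batch-inference method tsai adds to the learner):

```python
import numpy as np
from tsai.inference import load_learner

learn = load_learner("export.pkl")  # learner previously exported after training
X = np.random.rand(8, 3, 100).astype(np.float32)  # placeholder (samples, variables, steps)
probas, _, preds = learn.get_X_preds(X)
print(preds)
```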
I prefer to use mamba to install and am running the command `mamba install -c timeseriesai tsai`. This appears to work as expected. However, when I run inference code in the fresh environment (`from tsai.inference import load_learner`), I get a bunch of dependency errors: messages saying I need IPython, ipykernel, chardet, webrtcvad, etc. The docs state that the standard installation methods I am using install only what is required. Why am I being asked to download all of these other modules when I am not using them? I can install them, but I would prefer not to, to keep my production environment as small as possible. This happens with conda and pip as well.
Please let me know if I am doing something wrong. Thank you!