Error when using torch_hub_yolov5 Custom Mode #2860

Closed
darkking-park opened this issue Aug 4, 2022 · 5 comments
Assignees
Labels
bug Something isn't working

@darkking-park

Describe the bug

I asked about this before in #2602.

First of all, thank you so much for answering my question and giving me a solution: https://github.com/bentoml/gallery/tree/main/custom_runner/torch_hub_yolov5

I implemented your proposal and confirmed that it runs normally when I follow your suggestion.

2022-08-04T11:18:51+0000 [WARNING] [cli] Using lowercased runnable class name 'yolov5runnable' for runner.
2022-08-04T11:18:52+0000 [INFO] [cli] Starting development BentoServer from "service.py:svc" running on http://127.0.0.1:3000 (Press CTRL+C to quit)
2022-08-04T11:18:53+0000 [WARNING] [dev_api_server] Using lowercased runnable class name 'yolov5runnable' for runner.
start=========================================
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /root/.cache/torch/hub/master.zip
2022-08-04T11:18:57+0000 [INFO] [dev_api_server] YOLOv5  2022-8-4 Python-3.7.5 torch-1.11.0+cu102 CUDA:0 (Tesla P40, 24452MiB)
2022-08-04T11:18:58+0000 [INFO] [dev_api_server] Downloading https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt to yolov5s.pt...
100%|██████████| 14.1M/14.1M [00:00<00:00, 22.3MB/s]
2022-08-04T11:19:03+0000 [INFO] [dev_api_server] Fusing layers...
2022-08-04T11:19:03+0000 [INFO] [dev_api_server] YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
2022-08-04T11:19:03+0000 [INFO] [dev_api_server] Adding AutoShape...
-----------------------------------------------
GPU available
-----------------------------------------------
2022-08-04T11:19:03+0000 [INFO] [dev_api_server] Application startup complete.

But an error occurs when running with a custom model.

The only change I made to your service.py source is the following line:

torch.hub.load('ultralytics/yolov5', 'yolov5s')
==> torch.hub.load("ultralytics/yolov5", 'custom', path='my_custom_yolov5_model.pt', force_reload=True)

The following error occurs when using the custom model:

2022-08-04T11:20:33+0000 [WARNING] [cli] Using lowercased runnable class name 'yolov5runnable' for runner.
2022-08-04T11:20:33+0000 [INFO] [cli] Starting development BentoServer from "service.py:svc" running on http://127.0.0.1:3000 (Press CTRL+C to quit)
2022-08-04T11:20:35+0000 [WARNING] [dev_api_server] Using lowercased runnable class name 'yolov5runnable' for runner.
start=========================================
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /root/.cache/torch/hub/master.zip
2022-08-04T11:20:37+0000 [INFO] [dev_api_server] YOLOv5  2022-8-4 Python-3.7.5 torch-1.11.0+cu102 CUDA:0 (Tesla P40, 24452MiB)

2022-08-04T11:20:38+0000 [ERROR] [dev_api_server] Traceback (most recent call last):
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 47, in _create
    model = DetectMultiBackend(path, device=device, fuse=autoshape)  # detection model
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 334, in __init__
    model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/experimental.py", line 81, in attempt_load
    ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float()  # FP32 model
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1186, in __getattr__
    type(self).__name__, name))
AttributeError: 'DetectMultiBackend' object has no attribute 'get'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 51, in _create
    model = attempt_load(path, device=device, fuse=False)  # arbitrary model
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/experimental.py", line 81, in attempt_load
    ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float()  # FP32 model
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1186, in __getattr__
    type(self).__name__, name))
AttributeError: 'DetectMultiBackend' object has no attribute 'get'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/starlette/routing.py", line 645, in lifespan
    async with self.lifespan_context(app):
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/starlette/routing.py", line 540, in __aenter__
    await self._router.startup()
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/starlette/routing.py", line 624, in startup
    handler()
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/bentoml/_internal/runner/runner.py", line 226, in init_local
    self._init_local()
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/bentoml/_internal/runner/runner.py", line 217, in _init_local
    self._init(LocalRunnerRef)
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/bentoml/_internal/runner/runner.py", line 211, in _init
    runner_handle = handle_class(self)
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/bentoml/_internal/runner/runner_handle/local.py", line 25, in __init__
    self._runnable = runner.runnable_class(**runner.runnable_init_params)  # type: ignore
  File "/home/bentoml_pytorch/golf_service.py", line 14, in __init__
    self.model = torch.hub.load("ultralytics/yolov5", 'custom', path='/home/yolov5_torchserve_v1/saved_model.pt', force_reload=True, autoshape=True)
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/torch/hub.py", line 404, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "/opt/venv/tf2.3/lib/python3.7/site-packages/torch/hub.py", line 433, in _load_local
    model = entry(*args, **kwargs)
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 74, in custom
    return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
  File "/root/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 69, in _create
    raise Exception(s) from e
Exception: 'DetectMultiBackend' object has no attribute 'get'. Cache may be out of date, try `force_reload=True` or see https://github.com/ultralytics/yolov5/issues/36 for help.


Please review the error.

For your information, I confirmed that my model (my_custom_yolov5_model.pt) serves normally with Flask.

To reproduce

Same as the guide
https://github.com/bentoml/gallery/tree/main/custom_runner/torch_hub_yolov5

Expected behavior

Same as the guide
https://github.com/bentoml/gallery/tree/main/custom_runner/torch_hub_yolov5

Environment

Same as the guide
https://github.com/bentoml/gallery/tree/main/custom_runner/torch_hub_yolov5

@darkking-park darkking-park added the bug Something isn't working label Aug 4, 2022
@parano
Member

parano commented Aug 9, 2022

@darkking-park does it work if you run torch.hub.load("ultralytics/yolov5", 'custom', path='my_custom_yolov5_model.pt', force_reload=True) in a Python shell?

@darkking-park
Author

@parano
I am using Python Flask as below:

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Flask api exposing yolov5 model")
    parser.add_argument("--port", default=5000, type=int, help="port number")
    args = parser.parse_args()
    model = torch.hub.load("ultralytics/yolov5", "custom", path='./model/best.pt',  force_reload=True, autoshape=True)
    model.eval()
    app.run(host="0.0.0.0", port=args.port) 

I want to use BentoML rather than Flask.

@parano
Member

parano commented Aug 10, 2022

@darkking-park were you able to run the example with a brand new torch hub model downloaded from the git repo?

Where did you put the my_custom_yolov5_model.pt file? Is it under the same directory?

@darkking-park
Author

@parano

> were you able to run the example with a brand new torch hub model downloaded from the git repo?

https://github.com/ultralytics/yolov5
Starting from the pretrained model in this repo, I trained my own model and worked from the instructions in ultralytics/yolov5#36.

> Where did you put the my_custom_yolov5_model.pt file? Is it under the same directory?

The model exists at the path set in torch.hub.load:

path='./model/best.pt'
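One thing worth double-checking with a relative weights path like ./model/best.pt: Python resolves relative paths (including the path argument handed to torch.hub.load) against the process's current working directory, not against the directory containing service.py. A quick stdlib-only check, using the path from the comment above, would be:

```python
import os
from pathlib import Path

# The relative path used in the comment above; adjust as needed.
weights = Path("./model/best.pt")

# Show where the path actually points from the server's working directory.
print("cwd:", os.getcwd())
print("resolves to:", weights.resolve())
print("exists:", weights.exists())
```

If "exists" prints False when run from the directory the BentoServer is started in, the custom model cannot be found and torch hub's fallback load path will fail.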

@parano
Member

parano commented Aug 10, 2022

@darkking-park it seems to me this is an issue with the torch hub local cache, or with the model path and project path. It would be helpful if you could set everything up from scratch and share the steps to reproduce. I'd also be happy to hop on a Zoom call to help debug the issue; feel free to ping me in our Slack channel if you'd like to do that.
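Since the exception itself says "Cache may be out of date", one hedged workaround is to delete the torch hub cache manually so the yolov5 repo is re-downloaded on the next torch.hub.load call. The paths below assume the default cache location shown in the logs above (~/.cache/torch/hub, i.e. /root/.cache/torch/hub when running as root):

```shell
# Remove the cached ultralytics/yolov5 repo and archive so torch.hub
# fetches a fresh copy on the next load (equivalent to force_reload=True,
# but also clears any partially-written cache state).
rm -rf ~/.cache/torch/hub/ultralytics_yolov5_master
rm -f ~/.cache/torch/hub/master.zip
```

If TORCH_HOME or torch.hub.set_dir() has been used to relocate the cache, adjust the paths accordingly.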

@aarnphm closed this as not planned (won't fix / can't repro / duplicate / stale) Mar 21, 2025