python corridorkey_cli.py wizard /home/zapp/CorridorKey/ClipsForInference/myshot/
[03:10:47] INFO Auto-selected device: cuda device_utils.py:22
INFO Using device: cuda corridorkey_cli.py:218
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ CORRIDOR KEY — SMART WIZARD │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Windows Path: /home/zapp/CorridorKey/ClipsForInference/myshot/
Running locally: /home/zapp/CorridorKey/ClipsForInference/myshot/
Found 1 potential clip folders.
Status Report
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┓
┃ Category ┃ Count ┃ Clips ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━┩
│ Ready (AlphaHint) │ 1 │ — │
├────────────────────────────┼───────┼───────┤
│ Masked (VideoMaMaMaskHint) │ 0 │ — │
├────────────────────────────┼───────┼───────┤
│ Raw (Input only) │ 0 │ — │
└────────────────────────────┴───────┴───────┘
╭──────────────────────────────────────────────────── Actions ────────────────────────────────────────────────────╮
│ i — Run Inference (1 ready clips) │
│ r — Re-scan folders │
│ q — Quit │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Select action [v/g/b/i/r/q] (q): i
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Corridor Key Inference │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Inference Settings │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Input colorspace [linear/srgb] (srgb):
Despill strength (0–10, 10 = max despill) (5):
Enable auto-despeckle (removes tracking dots)? [y/n] (y):
Despeckle size (min pixels for a spot) (400):
Refiner strength multiplier (experimental) (1.0):
[03:10:57] INFO Found 1 clips ready for inference. clip_manager.py:612
INFO Not Apple Silicon — using torch backend backend.py:53
INFO Torch engine loaded: CorridorKey.pth (device=cuda) backend.py:238
INFO Loading CorridorKey from inference_engine.py:68
/home/zapp/CorridorKey/CorridorKeyModule/checkpoints/CorridorKey.pth
INFO Initializing hiera_base_plus_224.mae_in1k_ft_in1k (img_size=2048) model_transformer.py:159
[03:10:58] INFO Skipped downloading base weights (relying on custom checkpoint) model_transformer.py:164
INFO Patched input layer: 3 → 4 channels (extra initialized to 0) model_transformer.py:240
INFO Feature channels: [112, 224, 448, 896] model_transformer.py:177
In file included from /home/zapp/.local/share/uv/python/cpython-3.13.12-linux-x86_64-gnu/include/python3.13/Python.h:14,
from /tmp/tmp611v2r9t/__triton_launcher.c:7:
/home/zapp/.local/share/uv/python/cpython-3.13.12-linux-x86_64-gnu/include/python3.13/pyconfig.h:1967:9: warning: ‘_POSIX_C_SOURCE’ redefined
1967 | #define _POSIX_C_SOURCE 200809L
| ^~~~~~~~~~~~~~~
In file included from /usr/include/bits/libc-header-start.h:33,
from /usr/include/stdlib.h:26,
from /home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/triton/backends/nvidia/include/cuda.h:56,
from /tmp/tmp611v2r9t/__triton_launcher.c:2:
/usr/include/features.h:319:10: note: this is the location of the previous definition
319 | # define _POSIX_C_SOURCE 202405L
| ^~~~~~~~~~~~~~~
In file included from /home/zapp/.local/share/uv/python/cpython-3.13.12-linux-x86_64-gnu/include/python3.13/Python.h:14,
from /tmp/tmp0fipe8x2/__triton_launcher.c:7:
/home/zapp/.local/share/uv/python/cpython-3.13.12-linux-x86_64-gnu/include/python3.13/pyconfig.h:1967:9: warning: ‘_POSIX_C_SOURCE’ redefined
1967 | #define _POSIX_C_SOURCE 200809L
| ^~~~~~~~~~~~~~~
In file included from /usr/include/bits/libc-header-start.h:33,
from /usr/include/stdlib.h:26,
from /home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/triton/backends/nvidia/include/cuda.h:56,
from /tmp/tmp0fipe8x2/__triton_launcher.c:2:
/usr/include/features.h:319:10: note: this is the location of the previous definition
319 | # define _POSIX_C_SOURCE 202405L
| ^~~~~~~~~~~~~~~
[03:11:06] INFO Running Inference on: clip_manager.py:631
INFO Input frames: 1, Alpha frames: 1 -> Processing 1 frames clip_manager.py:646
⠏ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1 0:00:00
Inference failed: Dynamo failed to run FX node with fake tensors: call_function <built-in method conv2d of type
object at 0x7fc79672e9e0>(*(FakeTensor(..., device='cuda:0', size=(1, s27, 2048, 2048)), Parameter(FakeTensor(...,
device='cuda:0', size=(112, 4, 7, 7), requires_grad=True)), Parameter(FakeTensor(..., device='cuda:0', size=(112,),
requires_grad=True)), (4, 4), (3, 3), (1, 1), 1), **{}): got RuntimeError('Given groups=1, weight of size [112, 4,
7, 7], expected input[1, s27, 2048, 2048] to have 4 channels, but got s27 channels instead')
from user code:
File "/home/zapp/CorridorKey/CorridorKeyModule/core/model_transformer.py", line 247, in forward
features = self.encoder(x) # Returns list of features
File "/home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/timm/models/_features.py", line 476, in forward
features = self.model.forward_intermediates(
File "/home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/timm/models/hiera.py", line 758, in
forward_intermediates
x = self.patch_embed(x, mask=patch_mask)
File "/home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/timm/models/hiera.py", line 455, in forward
x = self.proj(x)
File "/home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 553, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/zapp/CorridorKey/.venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in
_conv_forward
return F.conv2d(
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to
PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
Inference batch complete. Press Enter to re-scan:
I'm using Arch Linux with Hyprland. My system has an RTX 5050 with 8 GB of VRAM and 16 GB of DDR4 RAM.
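Reading the traceback, the failure looks like a torch.compile/Dynamo dynamic-shapes issue rather than bad input data: the patched patch-embed conv expects a static 4-channel input (weight `[112, 4, 7, 7]`, matching the "Patched input layer: 3 → 4 channels" log line), but the fake tensor was traced with a symbolic channel dim `s27`. A minimal hedged sketch of a possible workaround, assuming the repo wraps the model in `torch.compile` (the `dynamic=False` flag and the standalone conv below are my assumptions, not code from CorridorKey):

```python
import torch

# Hypothetical workaround sketch: compile with dynamic=False so the channel
# dimension stays static instead of becoming a symbolic size (s27).
# The conv below mirrors the failing patch-embed weight [112, 4, 7, 7];
# backend="eager" exercises Dynamo tracing without needing a C++ toolchain.
conv = torch.nn.Conv2d(4, 112, kernel_size=7, stride=4, padding=3)
compiled = torch.compile(conv, dynamic=False, backend="eager")

# RGB + alpha-hint style input: 4 channels, as the patched layer expects.
x = torch.randn(1, 4, 64, 64)
out = compiled(x)
print(out.shape)  # torch.Size([1, 112, 16, 16])
```

If the compile call is buried in the repo, setting `TORCHDYNAMO_DISABLE=1` in the environment may also confirm whether plain eager mode runs the clip successfully.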