
Conversation

mfuntowicz (Member) commented Nov 19, 2025

Should fix build errors like the one below. PyTorch's headers guard code behind #if defined(_WIN32) && (defined(USE_CUDA) || defined(USE_ROCM)), but USE_CUDA was not actually being defined for various reasons, so the guard was bypassed.

C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1134): error C2872: 'std': ambiguous symbol [D:\a\kernels-community\kernels-community\relu\build\_relu_5c0099e.vcxproj]
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\c10/util/strong_type.h(1583): note: could be 'std'
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1134): note: or       'std'
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1134): note: the template instantiation context (the oldest one first) is
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1181): note: see reference to class template instantiation 'torch::dynamo::autograd::IValuePacker<__int64>' being compiled
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1108): note: while compiling class template member function 'c10::TypePtr torch::dynamo::autograd::IValuePacker<__int64>::packed_type(void)'
  C:/hostedtoolcache/windows/Python/3.11.9/x64/Lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1181): note: see the first reference to 'torch::dynamo::autograd::IValuePacker<__int64>::packed_type' in 'torch::dynamo::autograd::IValuePacker<unsigned __int64>::packed_type'
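
For illustration only (not the actual change in this PR): one common way to make sure USE_CUDA reaches the host compiler when building a Torch extension on Windows is to pass it explicitly as a preprocessor macro in the extension's build script. The package and file names below are hypothetical, chosen to echo the relu kernel from the error log.

import sys
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

define_macros = []
if sys.platform == "win32":
    # Without an explicit definition, USE_CUDA can end up undefined during
    # host (MSVC) compilation, so PyTorch's
    # "#if defined(_WIN32) && (defined(USE_CUDA) || defined(USE_ROCM))"
    # guards are skipped and errors like the 'std' ambiguity above surface.
    define_macros.append(("USE_CUDA", "1"))

setup(
    name="relu",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="_relu",
            sources=["relu.cu"],  # hypothetical source file
            define_macros=define_macros,
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)

A CMake-based build can achieve the same effect by adding the macro to the target's compile definitions; either way, the point is that the macro must be visible to the host compiler invocation, not only to nvcc.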

mfuntowicz force-pushed the fix_missing_win32_define branch from 158bffe to 775300c on Nov 19, 2025 at 09:58
mfuntowicz changed the title from "fix(windows): force _WIN32 definition to be sure guard on PyTorch are not bypassed" to "fix(windows): force USE_CUDA/USE_ROCM definitions to ensure PyTorch guards are not bypassed" on Nov 19, 2025
mfuntowicz requested a review from danieldk on Nov 19, 2025 at 11:13
mfuntowicz force-pushed the fix_missing_win32_define branch from 3b6264e to fcd838a on Nov 19, 2025 at 13:36
MekkCyber (Collaborator) left a comment

Thanks a lot! Could you please update the PR description? It mentions _WIN32 not being set, but this is related to USE_CUDA and USE_ROCM. Otherwise LGTM.

danieldk previously approved these changes Nov 19, 2025