Steps to reproduce (on msys2):
export RUST_BACKTRACE=full
python x.py test src/libstd
The libstd test executable crashes with exception code 0xc0000374 (heap corruption).
Backtrace:
ntdll!NtWaitForMultipleObjects + 0x14
ntdll!WerpWaitForCrashReporting + 0xa8
ntdll!RtlReportExceptionHelper + 0x33e
ntdll!RtlReportException + 0x9d
ntdll!RtlReportCriticalFailure$filt$0 + 0x33
ntdll!_C_specific_handler + 0x96
ntdll!_GSHandlerCheck_SEH + 0x6a
ntdll!RtlpExecuteHandlerForException + 0xd
ntdll!RtlDispatchException + 0x358
ntdll!RtlRaiseException + 0x303
ntdll!RtlReportCriticalFailure + 0x97
ntdll!RtlpHeapHandleError + 0x12
ntdll!RtlpLogHeapFailure + 0x96
ntdll!RtlpAllocateHeap + 0x19c0
ntdll!RtlpAllocateHeapInternal + 0x5cb
dbghelp!SymInitializeW + 0x130
dbghelp!SymInitialize + 0x33
dbghelp!StackWalkEx + 0x84
dbghelp!StackWalk64 + 0xfa
std_b0d966d629cbd139!std::sys::imp::backtrace::unwind_backtrace + 0x337
std_b0d966d629cbd139!std::sys_common::backtrace::_print + 0x390
std_b0d966d629cbd139!std::sys_common::backtrace::print + 0x22
std_b0d966d629cbd139!std::panicking::default_hook::{{closure}} + 0x1a4
std_b0d966d629cbd139!std::panicking::default_hook + 0x1dd
std_b0d966d629cbd139!std::panicking::rust_panic_with_hook + 0x2e8
std_b0d966d629cbd139!std::panicking::begin_panic<str*> + 0x62
std_b0d966d629cbd139!std::sync::rwlock::tests::test_into_inner_poison::{{closure}} + 0x7b
std_b0d966d629cbd139!std::sys_common::backtrace::__rust_begin_short_backtrace<closure,!> + 0x96
std_b0d966d629cbd139!std::thread::{{impl}}::spawn::{{closure}}::{{closure}} + 0x5
std_b0d966d629cbd139!std::panic::{{impl}}::call_once + 0x5
std_b0d966d629cbd139!std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure>,!> + 0x12
std_b0d966d629cbd139!panic_unwind::__rust_maybe_catch_panic + 0x22
std_b0d966d629cbd139!std::panicking::try + 0x2c
std_b0d966d629cbd139!std::panic::catch_unwind + 0x2c
std_b0d966d629cbd139!std::thread::{{impl}}::spawn::{{closure}} + 0x6a
std_b0d966d629cbd139!alloc::boxed::{{impl}}::call_box<(),closure> + 0x98
std_b0d966d629cbd139!alloc::boxed::{{impl}}::call_once + 0x7
std_b0d966d629cbd139!std::sys_common::thread::start_thread + 0x66
std_b0d966d629cbd139!std::sys::imp::thread::{{impl}}::new::thread_start + 0x7c
kernel32!BaseThreadInitThunk + 0x14
ntdll!RtlUserThreadStart + 0x21
I've also seen an access violation (0xc0000005).
Backtrace:
ntdll!NtWaitForMultipleObjects + 0x14
KERNELBASE!WaitForMultipleObjectsEx + 0x106
KERNELBASE!WaitForMultipleObjects + 0xe
kernel32!WerpReportFaultInternal + 0x3ce
kernel32!WerpReportFault + 0x73
KERNELBASE!UnhandledExceptionFilter + 0x35b
ntdll!RtlUserThreadStart$filt$0 + 0x38
ntdll!_C_specific_handler + 0x96
ntdll!RtlpExecuteHandlerForException + 0xd
ntdll!RtlDispatchException + 0x358
ntdll!KiUserExceptionDispatch + 0x2e
std_crash!core::str::run_utf8_validation + 0x40
std_crash!core::str::from_utf8 + 0x1f
std_crash!std::sys_common::backtrace::output_fileline + 0x184
std_crash!std::sys_common::backtrace::_print::{{closure}} + 0x31
std_crash!std::sys::imp::backtrace::printing::printing::foreach_symbol_fileline + 0xc9
std_crash!std::sys_common::backtrace::_print + 0xad6
std_crash!std::sys_common::backtrace::print + 0x22
std_crash!std::panicking::default_hook::{{closure}} + 0x1a4
std_crash!std::panicking::default_hook + 0x1dd
std_crash!std::panicking::rust_panic_with_hook + 0x2e8
std_crash!std::panicking::begin_panic<str*> + 0x62
std_crash!std::collections::hash::map::test_map::test_placement_panic::mkpanic + 0x18
std_crash!std::collections::hash::map::test_map::test_placement_panic::{{closure}} + 0x55
std_crash!core::ops::function::FnOnce::call_once + 0x55
std_crash!std::panic::{{impl}}::call_once + 0x55
std_crash!std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure>,()> + 0x74
std_crash!panic_unwind::__rust_maybe_catch_panic + 0x22
std_crash!std::panicking::try + 0x32
std_crash!std::panic::catch_unwind + 0x32
std_crash!std::collections::hash::map::test_map::test_placement_panic + 0x1ef
std_crash!test::run_test::{{closure}} + 0x5
std_crash!core::ops::function::FnOnce::call_once + 0x5
std_crash!test::{{impl}}::call_box<(),closure> + 0x1e
std_crash!panic_unwind::__rust_maybe_catch_panic + 0x22
std_crash!std::panicking::try + 0x38
std_crash!std::panic::catch_unwind + 0x38
std_crash!test::run_test::run_test_inner::{{closure}} + 0x168
std_crash!std::sys_common::backtrace::__rust_begin_short_backtrace<closure,()> + 0x1df
std_crash!std::thread::{{impl}}::spawn::{{closure}}::{{closure}} + 0x48
std_crash!std::panic::{{impl}}::call_once + 0x48
std_crash!std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure>,()> + 0x58
std_crash!panic_unwind::__rust_maybe_catch_panic + 0x22
std_crash!std::panicking::try + 0x7a
std_crash!std::panic::catch_unwind + 0x7a
std_crash!std::thread::{{impl}}::spawn::{{closure}} + 0xd8
std_crash!alloc::boxed::{{impl}}::call_box<(),closure> + 0x123
std_crash!alloc::boxed::{{impl}}::call_once + 0x7
std_crash!std::sys_common::thread::start_thread + 0x66
std_crash!std::sys::imp::thread::{{impl}}::new::thread_start + 0x7c
kernel32!BaseThreadInitThunk + 0x14
ntdll!RtlUserThreadStart + 0x21
alexcrichton commented on Nov 24, 2017
dbghelp on MSVC is single threaded, so we lock all backtrace generation on all platforms (it makes for better printing anyway). When you're running stdtest there are two standard libraries, hence two locks, so this is probably just an "expected race" using dbghelp on two threads.
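To illustrate the race being described (a hypothetical sketch, not the actual std internals): if each linked copy of libstd guards dbghelp with its own private lock, the two locks never exclude each other, so two threads can still be inside dbghelp at the same time.

```rust
use std::sync::Mutex;
use std::thread;

// Hypothetical stand-ins for the two copies of libstd linked into stdtest:
// each has its own private lock, so acquiring one does not block the other.
static LOCK_IN_STD_A: Mutex<()> = Mutex::new(());
static LOCK_IN_STD_B: Mutex<()> = Mutex::new(());

// Stand-in for a call into dbghelp; the real API is not thread safe.
fn dbghelp_call(who: &str) {
    println!("{} is inside dbghelp", who);
}

fn main() {
    let a = thread::spawn(|| {
        let _guard = LOCK_IN_STD_A.lock().unwrap(); // lock belonging to copy A
        dbghelp_call("copy A");
    });
    let b = thread::spawn(|| {
        let _guard = LOCK_IN_STD_B.lock().unwrap(); // lock belonging to copy B
        dbghelp_call("copy B"); // can run concurrently with copy A's call
    });
    a.join().unwrap();
    b.join().unwrap();
}
```

Per-crate statics are duplicated when the crate is linked twice, which is exactly why a process-wide synchronization object is needed instead.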
Zoxc commented on Nov 24, 2017
Only the non-testing libstd handles panics, though. Do we have tests which take the panic handler from the testing libstd and explicitly call it with a PanicInfo?
retep998 commented on Nov 27, 2017
Could we switch to using a named mutex with the process id in the name? That way it would be unique to the process, but shared across all copies of std within that process.
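A minimal sketch of that suggestion, using hand-written Win32 declarations rather than whatever std or backtrace-rs actually uses; the mutex name "RustDbghelpLock" and the helper are made up for illustration. Because a named mutex is looked up by name in the kernel, every copy of std in the same process that builds the same name gets the same lock object, unlike a per-crate static Mutex.

```rust
use std::ffi::c_void;

// Minimal Win32 bindings for the sketch; in practice these would come from a bindings crate.
#[link(name = "kernel32")]
extern "system" {
    fn CreateMutexW(attrs: *mut c_void, initial_owner: i32, name: *const u16) -> *mut c_void;
    fn WaitForSingleObject(handle: *mut c_void, millis: u32) -> u32;
    fn ReleaseMutex(handle: *mut c_void) -> i32;
    fn CloseHandle(handle: *mut c_void) -> i32;
    fn GetCurrentProcessId() -> u32;
}

const INFINITE: u32 = 0xFFFF_FFFF;

/// Run `f` while holding a mutex whose name embeds the process id, so every
/// copy of std in this process serializes on the same kernel object.
unsafe fn with_process_wide_lock<R>(f: impl FnOnce() -> R) -> R {
    // Unique to this process, but identical across all linked copies of std.
    let name = format!("Local\\RustDbghelpLock{}", GetCurrentProcessId());
    let wide: Vec<u16> = name.encode_utf16().chain(Some(0)).collect();
    // CreateMutexW creates the mutex or opens the existing one with that name.
    let handle = CreateMutexW(std::ptr::null_mut(), 0, wide.as_ptr());
    WaitForSingleObject(handle, INFINITE);
    let result = f();
    // A real implementation would release via a drop guard so a panic in `f`
    // does not leave the mutex held.
    ReleaseMutex(handle);
    CloseHandle(handle);
    result
}

fn main() {
    unsafe {
        with_process_wide_lock(|| {
            // Calls into dbghelp would go here, serialized across all copies of std.
            println!("holding the process-wide dbghelp lock");
        });
    }
}
```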
ChrisDenton commented on Mar 9, 2022
I think this was fixed by rust-lang/backtrace-rs#242, which uses a session-local named mutex to synchronize use of dbghelp. cc @alexcrichton
ChrisDenton commented on Jul 1, 2023
The original issue has been fixed, and more recently retep998's suggestion has also been implemented (see #113176).