//! This primitive is meant to be used to run one-time initialization. An
//! example use case would be for initializing an FFI library.

- // A "once" is a relatively simple primitive, and it's also typically provided
7
- // by the OS as well (see `pthread_once` or `InitOnceExecuteOnce`). The OS
8
- // primitives, however, tend to have surprising restrictions, such as the Unix
9
- // one doesn't allow an argument to be passed to the function.
10
- //
11
- // As a result, we end up implementing it ourselves in the standard library.
12
- // This also gives us the opportunity to optimize the implementation a bit which
13
- // should help the fast path on call sites. Consequently, let's explain how this
14
- // primitive works now!
15
- //
16
- // So to recap, the guarantees of a Once are that it will call the
17
- // initialization closure at most once, and it will never return until the one
18
- // that's running has finished running. This means that we need some form of
19
- // blocking here while the custom callback is running at the very least.
20
- // Additionally, we add on the restriction of **poisoning**. Whenever an
21
- // initialization closure panics, the Once enters a "poisoned" state which means
22
- // that all future calls will immediately panic as well.
23
- //
24
- // So to implement this, one might first reach for a `Mutex`, but those cannot
25
- // be put into a `static`. It also gets a lot harder with poisoning to figure
26
- // out when the mutex needs to be deallocated because it's not after the closure
27
- // finishes, but after the first successful closure finishes.
28
- //
29
- // All in all, this is instead implemented with atomics and lock-free
30
- // operations! Whee! Each `Once` has one word of atomic state, and this state is
31
- // CAS'd on to determine what to do. There are four possible state of a `Once`:
32
- //
33
- // * Incomplete - no initialization has run yet, and no thread is currently
34
- // using the Once.
35
- // * Poisoned - some thread has previously attempted to initialize the Once, but
36
- // it panicked, so the Once is now poisoned. There are no other
37
- // threads currently accessing this Once.
38
- // * Running - some thread is currently attempting to run initialization. It may
39
- // succeed, so all future threads need to wait for it to finish.
40
- // Note that this state is accompanied with a payload, described
41
- // below.
42
- // * Complete - initialization has completed and all future calls should finish
43
- // immediately.
44
- //
45
- // With 4 states we need 2 bits to encode this, and we use the remaining bits
46
- // in the word we have allocated as a queue of threads waiting for the thread
47
- // responsible for entering the RUNNING state. This queue is just a linked list
48
- // of Waiter nodes which is monotonically increasing in size. Each node is
49
- // allocated on the stack, and whenever the running closure finishes it will
50
- // consume the entire queue and notify all waiters they should try again.
51
- //
52
- // You'll find a few more details in the implementation, but that's the gist of
53
- // it!
54
- //
55
- // Atomic orderings:
56
- // When running `Once` we deal with multiple atomics:
57
- // `Once.state_and_queue` and an unknown number of `Waiter.signaled`.
58
- // * `state_and_queue` is used (1) as a state flag, (2) for synchronizing the
59
- // result of the `Once`, and (3) for synchronizing `Waiter` nodes.
60
- // - At the end of the `call_inner` function we have to make sure the result
61
- // of the `Once` is acquired. So every load which can be the only one to
62
- // load COMPLETED must have at least Acquire ordering, which means all
63
- // three of them.
64
- // - `WaiterQueue::Drop` is the only place that may store COMPLETED, and
65
- // must do so with Release ordering to make the result available.
66
- // - `wait` inserts `Waiter` nodes as a pointer in `state_and_queue`, and
67
- // needs to make the nodes available with Release ordering. The load in
68
- // its `compare_exchange` can be Relaxed because it only has to compare
69
- // the atomic, not to read other data.
70
- // - `WaiterQueue::Drop` must see the `Waiter` nodes, so it must load
71
- // `state_and_queue` with Acquire ordering.
72
- // - There is just one store where `state_and_queue` is used only as a
73
- // state flag, without having to synchronize data: switching the state
74
- // from INCOMPLETE to RUNNING in `call_inner`. This store can be Relaxed,
75
- // but the read has to be Acquire because of the requirements mentioned
76
- // above.
77
- // * `Waiter.signaled` is both used as a flag, and to protect a field with
78
- // interior mutability in `Waiter`. `Waiter.thread` is changed in
79
- // `WaiterQueue::Drop` which then sets `signaled` with Release ordering.
80
- // After `wait` loads `signaled` with Acquire and sees it is true, it needs to
81
- // see the changes to drop the `Waiter` struct correctly.
82
- // * There is one place where the two atomics `Once.state_and_queue` and
83
- // `Waiter.signaled` come together, and might be reordered by the compiler or
84
- // processor. Because both use Acquire ordering such a reordering is not
85
- // allowed, so no need for SeqCst.
86
-
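The four states and the queue-in-the-high-bits trick described in the removed comment can be summarized in a short sketch. This restates the `INCOMPLETE`/`POISONED`/`RUNNING`/`COMPLETE` constants and `STATE_MASK` that this diff deletes further down; it illustrates the old encoding only, not the new `sys` implementation.

```rust
// Sketch of the old one-word encoding: the low two bits are the state, and
// while RUNNING the remaining bits hold a pointer to the head of the waiter
// queue (Waiter nodes are 4-byte aligned, so those two bits are free).
const INCOMPLETE: usize = 0x0;
const POISONED: usize = 0x1;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;
const STATE_MASK: usize = 0x3;

fn state(word: usize) -> usize {
    word & STATE_MASK
}

fn queue_head(word: usize) -> usize {
    word & !STATE_MASK
}

fn main() {
    let waiter_addr = 0x7fff_0000_usize; // hypothetical stack address of a Waiter
    let word = waiter_addr | RUNNING;
    assert_eq!(state(word), RUNNING);
    assert_eq!(queue_head(word), waiter_addr);
}
```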
#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

- use crate::cell::Cell;
use crate::fmt;
- use crate::marker;
use crate::panic::{RefUnwindSafe, UnwindSafe};
- use crate::ptr;
- use crate::sync::atomic::{AtomicBool, AtomicPtr, Ordering};
- use crate::thread::{self, Thread};
-
- type Masked = ();
+ use crate::sys_common::once as sys;

/// A synchronization primitive which can be used to run a one-time global
/// initialization. Useful for one-time initialization for FFI or related
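The atomic-orderings rules deleted above follow the standard release/acquire publication pattern: whoever completes initialization publishes the result with a Release store, and any reader that acts on it must observe the state with an Acquire load. A minimal, self-contained illustration of that pairing, assuming a single publisher; this is not the deleted code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const INCOMPLETE: usize = 0x0;
const COMPLETE: usize = 0x3;

static STATE: AtomicUsize = AtomicUsize::new(INCOMPLETE);
static mut RESULT: u32 = 0;

// Exactly one thread runs this (the "winner" of the CAS in the old code).
fn publish() {
    unsafe { RESULT = 42 }; // the initialization work
    STATE.store(COMPLETE, Ordering::Release); // publish RESULT
}

// Any thread may run this; Acquire pairs with the Release store above, so
// observing COMPLETE guarantees the write to RESULT is visible.
fn consume() -> Option<u32> {
    if STATE.load(Ordering::Acquire) == COMPLETE { Some(unsafe { RESULT }) } else { None }
}

fn main() {
    publish();
    assert_eq!(consume(), Some(42));
}
```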
@@ -114,19 +27,9 @@ type Masked = ();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Once {
-     // `state_and_queue` is actually a pointer to a `Waiter` with extra state
-     // bits, so we add the `PhantomData` appropriately.
-     state_and_queue: AtomicPtr<Masked>,
-     _marker: marker::PhantomData<*const Waiter>,
+     inner: sys::Once,
}

- // The `PhantomData` of a raw pointer removes these two auto traits, but we
- // enforce both below in the implementation so this should be safe to add.
- #[stable(feature = "rust1", since = "1.0.0")]
- unsafe impl Sync for Once {}
- #[stable(feature = "rust1", since = "1.0.0")]
- unsafe impl Send for Once {}
-
#[stable(feature = "sync_once_unwind_safe", since = "1.59.0")]
impl UnwindSafe for Once {}
@@ -136,10 +39,8 @@ impl RefUnwindSafe for Once {}
/// State yielded to [`Once::call_once_force()`]’s closure parameter. The state
/// can be used to query the poison status of the [`Once`].
#[stable(feature = "once_poison", since = "1.51.0")]
- #[derive(Debug)]
pub struct OnceState {
-     poisoned: bool,
-     set_state_on_drop_to: Cell<*mut Masked>,
+     pub(crate) inner: sys::OnceState,
}

/// Initialization value for static [`Once`] values.
@@ -159,49 +60,14 @@ pub struct OnceState {
)]
pub const ONCE_INIT: Once = Once::new();

- // Four states that a Once can be in, encoded into the lower bits of
- // `state_and_queue` in the Once structure.
- const INCOMPLETE: usize = 0x0;
- const POISONED: usize = 0x1;
- const RUNNING: usize = 0x2;
- const COMPLETE: usize = 0x3;
-
- // Mask to learn about the state. All other bits are the queue of waiters if
- // this is in the RUNNING state.
- const STATE_MASK: usize = 0x3;
-
- // Representation of a node in the linked list of waiters, used while in the
- // RUNNING state.
- // Note: `Waiter` can't hold a mutable pointer to the next thread, because then
- // `wait` would both hand out a mutable reference to its `Waiter` node, and keep
- // a shared reference to check `signaled`. Instead we hold shared references and
- // use interior mutability.
- #[repr(align(4))] // Ensure the two lower bits are free to use as state bits.
- struct Waiter {
-     thread: Cell<Option<Thread>>,
-     signaled: AtomicBool,
-     next: *const Waiter,
- }
-
- // Head of a linked list of waiters.
- // Every node is a struct on the stack of a waiting thread.
- // Will wake up the waiters when it gets dropped, i.e. also on panic.
- struct WaiterQueue<'a> {
-     state_and_queue: &'a AtomicPtr<Masked>,
-     set_state_on_drop_to: *mut Masked,
- }
-
impl Once {
    /// Creates a new `Once` value.
    #[inline]
    #[stable(feature = "once_new", since = "1.2.0")]
    #[rustc_const_stable(feature = "const_once_new", since = "1.32.0")]
    #[must_use]
    pub const fn new() -> Once {
-         Once {
-             state_and_queue: AtomicPtr::new(ptr::invalid_mut(INCOMPLETE)),
-             _marker: marker::PhantomData,
-         }
+         Once { inner: sys::Once::new() }
    }

    /// Performs an initialization routine once and only once. The given closure
@@ -261,19 +127,20 @@ impl Once {
    /// This is similar to [poisoning with mutexes][poison].
    ///
    /// [poison]: struct.Mutex.html#poisoning
+     #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[track_caller]
    pub fn call_once<F>(&self, f: F)
    where
        F: FnOnce(),
    {
        // Fast path check
-         if self.is_completed() {
+         if self.inner.is_completed() {
            return;
        }

        let mut f = Some(f);
-         self.call_inner(false, &mut |_| f.take().unwrap()());
+         self.inner.call(false, &mut |_| f.take().unwrap()());
    }

    /// Performs the same function as [`call_once()`] except ignores poisoning.
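The public contract of `call_once` is unchanged by this refactor; only the backing implementation moves behind `sys::Once`. A usage sketch matching the module's FFI motivation (`ffi_init` is a hypothetical extern function, not part of this diff):

```rust
use std::sync::Once;

static INIT: Once = Once::new();

fn ensure_initialized() {
    // The closure runs at most once even under racing callers, and every
    // caller returns only after the winning closure has finished.
    INIT.call_once(|| {
        // hypothetical one-time FFI setup:
        // unsafe { ffi_init() };
    });
}

fn main() {
    ensure_initialized();
    ensure_initialized(); // second call takes the completed fast path
}
```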
@@ -320,18 +187,19 @@ impl Once {
    /// // once any success happens, we stop propagating the poison
    /// INIT.call_once(|| {});
    /// ```
+     #[inline]
    #[stable(feature = "once_poison", since = "1.51.0")]
    pub fn call_once_force<F>(&self, f: F)
    where
        F: FnOnce(&OnceState),
    {
        // Fast path check
-         if self.is_completed() {
+         if self.inner.is_completed() {
            return;
        }

        let mut f = Some(f);
-         self.call_inner(true, &mut |p| f.take().unwrap()(p));
+         self.inner.call(true, &mut |p| f.take().unwrap()(p));
    }

    /// Returns `true` if some [`call_once()`] call has completed
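`call_once_force` likewise keeps its contract: it runs even if a previous initializer panicked, and the `OnceState` argument (now backed by `sys::OnceState`) reports that poisoning. A usage sketch:

```rust
use std::sync::Once;

static INIT: Once = Once::new();

fn init_even_if_poisoned() {
    INIT.call_once_force(|state| {
        if state.is_poisoned() {
            // A previous initializer panicked; recover before retrying.
        }
        // ... initialization work; if it returns normally, the poison clears ...
    });
}

fn main() {
    init_even_if_poisoned();
}
```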
@@ -378,119 +246,7 @@ impl Once {
    #[stable(feature = "once_is_completed", since = "1.43.0")]
    #[inline]
    pub fn is_completed(&self) -> bool {
-         // An `Acquire` load is enough because that makes all the initialization
-         // operations visible to us, and, this being a fast path, weaker
-         // ordering helps with performance. This `Acquire` synchronizes with
-         // `Release` operations on the slow path.
-         self.state_and_queue.load(Ordering::Acquire).addr() == COMPLETE
-     }
-
-     // This is a non-generic function to reduce the monomorphization cost of
-     // using `call_once` (this isn't exactly a trivial or small implementation).
-     //
-     // Additionally, this is tagged with `#[cold]` as it should indeed be cold
-     // and it helps let LLVM know that calls to this function should be off the
-     // fast path. Essentially, this should help generate more straight-line code
-     // in LLVM.
-     //
-     // Finally, this takes an `FnMut` instead of an `FnOnce` because there's
-     // currently no way to take an `FnOnce` and call it via virtual dispatch
-     // without some allocation overhead.
-     #[cold]
-     #[track_caller]
-     fn call_inner(&self, ignore_poisoning: bool, init: &mut dyn FnMut(&OnceState)) {
-         let mut state_and_queue = self.state_and_queue.load(Ordering::Acquire);
-         loop {
-             match state_and_queue.addr() {
-                 COMPLETE => break,
-                 POISONED if !ignore_poisoning => {
-                     // Panic to propagate the poison.
-                     panic!("Once instance has previously been poisoned");
-                 }
-                 POISONED | INCOMPLETE => {
-                     // Try to register this thread as the one RUNNING.
-                     let exchange_result = self.state_and_queue.compare_exchange(
-                         state_and_queue,
-                         ptr::invalid_mut(RUNNING),
-                         Ordering::Acquire,
-                         Ordering::Acquire,
-                     );
-                     if let Err(old) = exchange_result {
-                         state_and_queue = old;
-                         continue;
-                     }
-                     // `waiter_queue` will manage other waiting threads, and
-                     // wake them up on drop.
-                     let mut waiter_queue = WaiterQueue {
-                         state_and_queue: &self.state_and_queue,
-                         set_state_on_drop_to: ptr::invalid_mut(POISONED),
-                     };
-                     // Run the initialization function, letting it know if we're
-                     // poisoned or not.
-                     let init_state = OnceState {
-                         poisoned: state_and_queue.addr() == POISONED,
-                         set_state_on_drop_to: Cell::new(ptr::invalid_mut(COMPLETE)),
-                     };
-                     init(&init_state);
-                     waiter_queue.set_state_on_drop_to = init_state.set_state_on_drop_to.get();
-                     break;
-                 }
-                 _ => {
-                     // All other values must be RUNNING with possibly a
-                     // pointer to the waiter queue in the more significant bits.
-                     assert!(state_and_queue.addr() & STATE_MASK == RUNNING);
-                     wait(&self.state_and_queue, state_and_queue);
-                     state_and_queue = self.state_and_queue.load(Ordering::Acquire);
-                 }
-             }
-         }
-     }
- }
-
- fn wait(state_and_queue: &AtomicPtr<Masked>, mut current_state: *mut Masked) {
-     // Note: the following code was carefully written to avoid creating a
-     // mutable reference to `node` that gets aliased.
-     loop {
-         // Don't queue this thread if the status is no longer running,
-         // otherwise we will not be woken up.
-         if current_state.addr() & STATE_MASK != RUNNING {
-             return;
-         }
-
-         // Create the node for our current thread.
-         let node = Waiter {
-             thread: Cell::new(Some(thread::current())),
-             signaled: AtomicBool::new(false),
-             next: current_state.with_addr(current_state.addr() & !STATE_MASK) as *const Waiter,
-         };
-         let me = &node as *const Waiter as *const Masked as *mut Masked;
-
-         // Try to slide in the node at the head of the linked list, making sure
-         // that another thread didn't just replace the head of the linked list.
-         let exchange_result = state_and_queue.compare_exchange(
-             current_state,
-             me.with_addr(me.addr() | RUNNING),
-             Ordering::Release,
-             Ordering::Relaxed,
-         );
-         if let Err(old) = exchange_result {
-             current_state = old;
-             continue;
-         }
-
-         // We have enqueued ourselves, now let's wait.
-         // It is important not to return before being signaled, otherwise we
-         // would drop our `Waiter` node and leave a hole in the linked list
-         // (and a dangling reference). Guard against spurious wakeups by
-         // reparking ourselves until we are signaled.
-         while !node.signaled.load(Ordering::Acquire) {
-             // If the managing thread happens to signal and unpark us before we
-             // can park ourselves, the result could be that this thread never gets
-             // unparked. Luckily `park` comes with the guarantee that if it got
-             // an `unpark` just before on an unparked thread, it does not park.
-             thread::park();
-         }
-         break;
+         self.inner.is_completed()
    }
}

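Note that the `Option`/`FnMut` trick survives the refactor: the generic `call_once*` front ends still wrap the caller's `FnOnce` in an `Option` and pass `&mut dyn FnMut` to a single non-generic backend (now `self.inner.call`), keeping the slow path out of every monomorphized copy. A standalone sketch of the pattern, with illustrative names:

```rust
// One shared, non-generic slow path serves every closure type F.
fn front_end<F: FnOnce()>(f: F) {
    let mut f = Some(f);
    // An `FnOnce` can't be called through `dyn` without boxing, so wrap it
    // in an Option and hand out an `FnMut` that consumes it exactly once.
    back_end(&mut || f.take().unwrap()());
}

#[cold] // hint that this is off the fast path
fn back_end(init: &mut dyn FnMut()) {
    init(); // compiled once, regardless of the caller's closure type
}

fn main() {
    front_end(|| println!("ran exactly once per front_end call"));
}
```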
@@ -501,37 +257,6 @@ impl fmt::Debug for Once {
    }
}

- impl Drop for WaiterQueue<'_> {
-     fn drop(&mut self) {
-         // Swap out our state with however we finished.
-         let state_and_queue =
-             self.state_and_queue.swap(self.set_state_on_drop_to, Ordering::AcqRel);
-
-         // We should only ever see an old state which was RUNNING.
-         assert_eq!(state_and_queue.addr() & STATE_MASK, RUNNING);
-
-         // Walk the entire linked list of waiters and wake them up (in LIFO
-         // order: last to register is first to wake up).
-         unsafe {
-             // Right after setting `node.signaled = true` the other thread may
-             // free `node` if there happens to be a spurious wakeup.
-             // So we have to take out the `thread` field and copy the pointer to
-             // `next` first.
-             let mut queue =
-                 state_and_queue.with_addr(state_and_queue.addr() & !STATE_MASK) as *const Waiter;
-             while !queue.is_null() {
-                 let next = (*queue).next;
-                 let thread = (*queue).thread.take().unwrap();
-                 (*queue).signaled.store(true, Ordering::Release);
-                 // ^- FIXME (maybe): This is another case of issue #55005
-                 // `store()` has a potentially dangling ref to `signaled`.
-                 queue = next;
-                 thread.unpark();
-             }
-         }
-     }
- }
-
impl OnceState {
    /// Returns `true` if the associated [`Once`] was poisoned prior to the
    /// invocation of the closure passed to [`Once::call_once_force()`].
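The removed `wait`/`Drop` pair leans on `thread::park`'s token semantics: an `unpark` delivered before the target parks makes the next `park` return immediately, so the waiter/waker race is benign, and spurious wakeups are handled by re-checking the flag. A self-contained sketch of that handshake (names are illustrative, not the deleted code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let signaled = Arc::new(AtomicBool::new(false));
    let waiter = thread::current();
    let flag = Arc::clone(&signaled);
    let waker = thread::spawn(move || {
        flag.store(true, Ordering::Release); // publish the result first
        waiter.unpark(); // safe even if the waiter hasn't parked yet
    });
    // Re-check after every wakeup: `park` may return spuriously.
    while !signaled.load(Ordering::Acquire) {
        thread::park();
    }
    waker.join().unwrap();
}
```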
@@ -568,13 +293,22 @@ impl OnceState {
    /// assert!(!state.is_poisoned());
    /// });
    #[stable(feature = "once_poison", since = "1.51.0")]
+     #[inline]
    pub fn is_poisoned(&self) -> bool {
-         self.poisoned
+         self.inner.is_poisoned()
    }

    /// Poison the associated [`Once`] without explicitly panicking.
-     // NOTE: This is currently only exposed for the `lazy` module
+     // NOTE: This is currently only exposed for `OnceLock`.
+     #[inline]
    pub(crate) fn poison(&self) {
-         self.set_state_on_drop_to.set(ptr::invalid_mut(POISONED));
+         self.inner.poison();
+     }
+ }
+
+ #[stable(feature = "std_debug", since = "1.16.0")]
+ impl fmt::Debug for OnceState {
+     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+         f.debug_struct("OnceState").field("poisoned", &self.is_poisoned()).finish()
    }
}