Commit d65b8e6: feat(freertos/smp): Add Granular Locking V4 proposal documents
Authored by Dazza0 and sudeep-mohanty; committed Apr 2, 2025
Co-authored-by: Sudeep Mohanty <[email protected]>
File changed: `granular_locks_v4.md` (411 additions)
# Introduction

Currently, the SMP FreeRTOS kernel implements critical sections using a "global locking" approach, where all data is protected by a single pair of spinlocks (namely the Task Lock and the ISR Lock). This means that every critical section contends for this same pair of spinlocks, even if the critical sections access unrelated/orthogonal data.

The goal of this proposal is to use granular or localized spinlocks so that concurrent accesses to different data groups do not contend for the same spinlocks. This will reduce lock contention and should improve the performance of the SMP FreeRTOS kernel.

This proposal describes a **"Dual Spinlock With Data Group Locking"** approach to granular locking.

Source code changes are based off release V11.1.0 of the FreeRTOS kernel.

# Data Groups

To make the spinlocks granular, FreeRTOS data will be organized into the following data groups, where each data group is protected by its own set of spinlocks.

- Kernel Data Group
  - All data in `tasks.c` and all event lists (e.g., `xTasksWaitingToSend` and `xTasksWaitingToReceive` in the queue objects)
- Queue Data Group
  - Each queue object (`Queue_t`) is its own data group (excluding the task lists)
- Event Group Data Group
  - Each event group object (`EventGroup_t`) is its own data group (excluding the task lists)
- Stream Buffer Data Group
  - Each stream buffer object (`StreamBuffer_t`) is its own data group (excluding the task lists)
- Timer Data Group
  - All data in `timers.c` and all timer objects belong to the same Timer Data Group
- User/Port Data Groups
  - Users and ports are free to organize their own data into data groups of their choosing

# Dual Spinlock With Data Group Locking

The **"Dual Spinlock With Data Group Locking"** approach uses a pair of spinlocks to protect each data group (namely the `xTaskSpinlock` and `xISRSpinlock` spinlocks).

```c
typedef struct
{
    #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
        portSPINLOCK_TYPE xTaskSpinlock;
        portSPINLOCK_TYPE xISRSpinlock;
    #endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
} xSomeDataGroup_t;
```

However, each data group must also allow for non-deterministic access with interrupts **enabled** by providing a pair of lock/unlock functions. These functions must block other tasks trying to access members of the data group and must pend accesses from ISRs trying to access members of the data group.

```c
static void prvLockSomeDataGroupForTasks( xSomeDataGroup_t * const pxSomeDataGroup );
static void prvUnlockSomeDataGroupForTasks( xSomeDataGroup_t * const pxSomeDataGroup );
```

In simple terms, "Dual Spinlock With Data Group Locking" is an extension of the existing dual spinlock (i.e., Task and ISR spinlock) approach used in the SMP FreeRTOS kernel, but replicated across different data groups.

## Data Group Critical Sections (Granular Locks)

A critical section for a data group can be achieved as follows:

- When entering a data group critical section from a task:
  1. Disable interrupts
  2. Take `xTaskSpinlock` of the data group
  3. Take `xISRSpinlock` of the data group
  4. Increment the nesting count
- When entering a data group critical section from an ISR:
  1. Disable interrupts
  2. Take `xISRSpinlock` of the data group
  3. Increment the nesting count

When exiting a data group critical section, the procedure is reversed. Furthermore, since yield requests are pended while inside a critical section, exiting a task critical section must also handle any pended yields.

- When exiting a data group critical section from a task:
  1. Release `xISRSpinlock` of the data group
  2. Release `xTaskSpinlock` of the data group
  3. Decrement the nesting count
  4. Re-enable interrupts if the nesting count is 0
  5. Trigger a yield if there is a yield pending
- When exiting a data group critical section from an ISR:
  1. Release `xISRSpinlock` of the data group
  2. Decrement the nesting count
  3. Re-enable interrupts if the nesting count is 0

Entering multiple data group critical sections in a nested manner is permitted: if a code path has already entered a critical section in data group A, it can then enter a critical section in data group B. This is analogous to nested critical sections. However, care must be taken to avoid deadlocks. This can be achieved by organizing data groups into a hierarchy, where a higher-layer data group cannot nest into a lower one.

```
+-------------------+
| Kernel Data Group |
+-------------------+

+-------------------+ +---------------------------+ +--------------------------+ +------------------+
| Queue Data Groups | | Stream Buffer Data Groups | | Event Group Data Groups  | | Timer Data Group |
+-------------------+ +---------------------------+ +--------------------------+ +------------------+

+------------------------------------------------------------------------------------------------------+
|                                           User Data Groups                                            |
+------------------------------------------------------------------------------------------------------+
```

If nested locking only occurs bottom-up (e.g., a User data group can nest into a Queue data group, which in turn can nest into the Kernel data group), then deadlock will never occur.
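
The bottom-up ordering rule can be illustrated with a minimal, self-contained sketch (all names here are hypothetical; nothing below is kernel code): each data group is assigned a level, and a lock may only be nested if its level is strictly higher than the most recently taken one.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical lock-ordering checker: levels increase toward the
 * Kernel data group, and nesting is only allowed bottom-up. */
enum { USER_LEVEL = 0, QUEUE_LEVEL = 1, KERNEL_LEVEL = 2 };

static int xHeldLevels[ 8 ];
static int xHeldDepth = 0;

/* Returns true iff taking a lock at uxLevel respects the hierarchy. */
bool xFakeTryNest( int uxLevel )
{
    if( ( xHeldDepth > 0 ) && ( uxLevel <= xHeldLevels[ xHeldDepth - 1 ] ) )
    {
        return false; /* would nest top-down: potential deadlock */
    }

    xHeldLevels[ xHeldDepth++ ] = uxLevel;
    return true;
}

void vFakeUnnest( void )
{
    assert( xHeldDepth > 0 );
    xHeldDepth--;
}
```

With this model, nesting User -> Queue -> Kernel succeeds, while attempting to take a Queue lock while already holding the Kernel lock is rejected.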

## Data Group Locking

FreeRTOS does not permit walking linked lists while interrupts are disabled, in order to ensure deterministic ISR latency. Therefore, each data group must provide a method of locking so that non-deterministic operations can be executed on the data group. While a data group is locked:

- Preemption is disabled for the current task
- Interrupts remain enabled
- The data group's `xTaskSpinlock` is taken to prevent tasks running on other cores from accessing the data group
- Any ISR that attempts to update the data group will have its access pended. These pended accesses are handled on resumption

The logic of suspending a data group is analogous to the logic of `vTaskSuspendAll()`/`xTaskResumeAll()` and `prvLockQueue()`/`prvUnlockQueue()` in the existing SMP kernel.

The details of how ISR accesses are pended during suspension are specific to each data group type, so the implementations of the suspend/resume functions are also specific to each data group type. However, the procedure for data group suspension and resumption will typically be as follows:

- Suspension
  1. Disable preemption
  2. Lock the data group
  3. Set a suspension flag that indicates the data group is suspended
  4. Unlock the data group, but keep holding `xTaskSpinlock`
- Resumption
  1. Lock the data group
  2. Clear the suspension flag
  3. Handle all pended accesses from ISRs
  4. Unlock the data group, thus releasing `xTaskSpinlock`
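
The suspension/resumption flow above can be sketched with a toy model (all names are hypothetical; the real per-data-group implementations in the kernel differ): ISR accesses that arrive while the suspension flag is set are counted as pended and replayed on resumption.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a data group, for illustration only. */
typedef struct
{
    bool xTaskSpinlockHeld;  /* stands in for the real xTaskSpinlock  */
    bool xSuspended;         /* suspension flag checked by ISR paths  */
    int  uxPendedItems;      /* accesses deferred while suspended     */
    int  uxProcessedItems;   /* accesses actually applied             */
} FakeDataGroup_t;

/* Suspension: (preemption disable elided) take xTaskSpinlock, set the
 * suspension flag, then keep xTaskSpinlock held so tasks on other
 * cores stay out while interrupts remain enabled. */
void vFakeGroupSuspend( FakeDataGroup_t * pxGroup )
{
    pxGroup->xTaskSpinlockHeld = true;
    pxGroup->xSuspended = true;
}

/* An ISR access arriving while suspended is pended, not applied. */
void vFakeGroupAccessFromISR( FakeDataGroup_t * pxGroup )
{
    if( pxGroup->xSuspended )
    {
        pxGroup->uxPendedItems++;
    }
    else
    {
        pxGroup->uxProcessedItems++;
    }
}

/* Resumption: clear the flag, replay pended accesses, release the lock. */
void vFakeGroupResume( FakeDataGroup_t * pxGroup )
{
    pxGroup->xSuspended = false;
    pxGroup->uxProcessedItems += pxGroup->uxPendedItems;
    pxGroup->uxPendedItems = 0;
    pxGroup->xTaskSpinlockHeld = false;
}
```
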

Locking multiple data groups in a nested manner is permitted: if a code path has already locked data group A, it can then lock data group B. This is analogous to nested `vTaskSuspendAll()` calls. As with nested critical sections, deadlocks can be avoided by organizing the data groups into a hierarchy.

## Thread Safety Check

Under SMP, there are four sources of concurrent access for a particular data group:

- A preempting task on the same core
- A preempting ISR on the same core
- A concurrent task on another core
- A concurrent ISR on another core

This section checks that the data group critical section and locking mechanisms described above ensure thread safety against each source of concurrent access.

- Data group critical section from a task: interrupts are disabled; `xTaskSpinlock` and `xISRSpinlock` are taken
  - Task (same core): A context switch cannot occur because interrupts are disabled
  - Task (other cores): The task will spin on `xTaskSpinlock`
  - ISR (same core): Interrupts on the current core are disabled
  - ISR (other cores): The ISR will spin on `xISRSpinlock`

- Data group critical section from an ISR: interrupts are disabled; `xISRSpinlock` is taken
  - Task (same core): A context switch cannot occur because we are in an ISR
  - Task (other cores): The task will spin on `xISRSpinlock`
  - ISR (same core): Interrupts on the current core are disabled
  - ISR (other cores): The ISR will spin on `xISRSpinlock`

- Data group locking from a task: preemption is disabled; `xTaskSpinlock` is taken
  - Task (same core): A context switch cannot occur because preemption is disabled
  - Task (other cores): The task will spin on `xTaskSpinlock`
  - ISR (same core): The critical section is entered because `xISRSpinlock` is not held, but the access is pended
  - ISR (other cores): The critical section is entered because `xISRSpinlock` is not held, but the access is pended

# Public API Changes

To support **"Dual Spinlock With Data Group Locking"**, the following changes have been made to the public-facing API. These changes are non-breaking: applications that build against the existing SMP FreeRTOS kernel will still build with granular locking enabled (albeit less performant).

The following APIs have been added to enter/exit a critical section in a data group. These are called by FreeRTOS source code to mark critical sections in data groups. However, users can also create their own data groups and enter/exit critical sections in the same manner.

If granular locking is disabled, these macros simply revert to the standard task enter/exit critical macros.

```c
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define data_groupENTER_CRITICAL()                                  portENTER_CRITICAL_DATA_GROUP( ( portSPINLOCK_TYPE * ) pxTaskSpinlock, ( portSPINLOCK_TYPE * ) pxISRSpinlock )
    #define data_groupENTER_CRITICAL_FROM_ISR()                         portENTER_CRITICAL_DATA_GROUP_FROM_ISR( ( portSPINLOCK_TYPE * ) pxISRSpinlock )
    #define data_groupEXIT_CRITICAL()                                   portEXIT_CRITICAL_DATA_GROUP( ( portSPINLOCK_TYPE * ) pxTaskSpinlock, ( portSPINLOCK_TYPE * ) pxISRSpinlock )
    #define data_groupEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )  portEXIT_CRITICAL_DATA_GROUP_FROM_ISR( uxSavedInterruptStatus, ( portSPINLOCK_TYPE * ) pxISRSpinlock )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
    #define data_groupENTER_CRITICAL()                                  taskENTER_CRITICAL()
    #define data_groupENTER_CRITICAL_FROM_ISR()                         taskENTER_CRITICAL_FROM_ISR()
    #define data_groupEXIT_CRITICAL()                                   taskEXIT_CRITICAL()
    #define data_groupEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )  taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
```
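
As a sketch of how a user-created data group might apply this pattern, the fragment below mocks the port layer (the `mock*` helpers, `MockSpinlock_t`, and `UserDataGroup_t` are invented purely for illustration) so the enter/exit pairing and the lock ordering can be seen end to end.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock spinlock and port state, standing in for portSPINLOCK_TYPE and
 * the port's interrupt control. Illustration only. */
typedef struct { bool xLocked; } MockSpinlock_t;

static int iInterruptDisableDepth = 0;

static void mockENTER( MockSpinlock_t * pxTask, MockSpinlock_t * pxISR )
{
    iInterruptDisableDepth++;  /* portDISABLE_INTERRUPTS() */
    pxTask->xLocked = true;    /* take xTaskSpinlock first */
    pxISR->xLocked = true;     /* then take xISRSpinlock   */
}

static void mockEXIT( MockSpinlock_t * pxTask, MockSpinlock_t * pxISR )
{
    pxISR->xLocked = false;    /* release xISRSpinlock first */
    pxTask->xLocked = false;   /* release xTaskSpinlock last */
    iInterruptDisableDepth--;  /* portENABLE_INTERRUPTS()    */
}

/* A user data group protecting a shared counter. */
typedef struct
{
    MockSpinlock_t xTaskSpinlock;
    MockSpinlock_t xISRSpinlock;
    int iSharedCounter;
} UserDataGroup_t;

static UserDataGroup_t xMyGroup;

void vUserIncrement( void )
{
    mockENTER( &xMyGroup.xTaskSpinlock, &xMyGroup.xISRSpinlock );
    xMyGroup.iSharedCounter++;
    mockEXIT( &xMyGroup.xTaskSpinlock, &xMyGroup.xISRSpinlock );
}
```
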

In the case of the kernel data group (`tasks.c`), the granular locking macros use the existing `vTaskEnter/ExitCritical<FromISR>()` functions to establish critical sections.

```c
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define kernelENTER_CRITICAL()                                  vTaskEnterCritical()
    #define kernelENTER_CRITICAL_FROM_ISR()                         vTaskEnterCriticalFromISR()
    #define kernelEXIT_CRITICAL()                                   vTaskExitCritical()
    #define kernelEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )  vTaskExitCriticalFromISR( uxSavedInterruptStatus )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
    #define kernelENTER_CRITICAL()                                  taskENTER_CRITICAL()
    #define kernelENTER_CRITICAL_FROM_ISR()                         taskENTER_CRITICAL_FROM_ISR()
    #define kernelEXIT_CRITICAL()                                   taskEXIT_CRITICAL()
    #define kernelEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )  taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
```

The previous critical section macros, viz. `taskENTER/EXIT_CRITICAL()`, are still provided and can be called by users; however, FreeRTOS source code no longer calls them. To support user calls, the port should implement a "user" data group. As a result, an application that previously relied on `taskENTER/EXIT_CRITICAL()` for thread-safe access to its own data remains thread safe with granular locking enabled.

```c
#define taskENTER_CRITICAL()               portENTER_CRITICAL()
#define taskEXIT_CRITICAL()                portEXIT_CRITICAL()
#define taskENTER_CRITICAL_FROM_ISR()      portENTER_CRITICAL_FROM_ISR()
#define taskEXIT_CRITICAL_FROM_ISR( x )    portEXIT_CRITICAL_FROM_ISR( x )
```

# Porting Interface

To support **"Dual Spinlock With Data Group Locking"**, ports will need to provide the following macro definitions.

## Port Config

Ports will need to provide the following port configuration macros:

```c
#define portUSING_GRANULAR_LOCKS       1    // Enables usage of granular locks
#define portCRITICAL_NESTING_IN_TCB    0    // Disable critical nesting in TCB. Ports will need to track their own critical nesting
```

## Spinlocks

Ports will need to provide the following spinlock-related macros:

```c
/*
 * Data type for the port's implementation of a spinlock
 */
#define portSPINLOCK_TYPE    port_spinlock_t
```

Macros are provided to initialize a spinlock either dynamically or statically, which is reflected in the API pattern of the two macros:

```c
#define portINIT_SPINLOCK( pxSpinlock )    _port_spinlock_init( pxSpinlock )
#define portINIT_SPINLOCK_STATIC           PORT_SPINLOCK_STATIC_INIT
```

## Critical Section Macros

The port will need to provide implementations of the macros to enter/exit a data group critical section according to the procedures described above. Typical implementations of each macro are demonstrated below:

```c
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    #define portENTER_CRITICAL_DATA_GROUP( pxTaskSpinlock, pxISRSpinlock )    vPortEnterCriticalDataGroup( pxTaskSpinlock, pxISRSpinlock )
    #define portENTER_CRITICAL_DATA_GROUP_FROM_ISR( pxISRSpinlock )           uxPortEnterCriticalDataGroupFromISR( pxISRSpinlock )
    #define portEXIT_CRITICAL_DATA_GROUP( pxTaskSpinlock, pxISRSpinlock )     vPortExitCriticalDataGroup( pxTaskSpinlock, pxISRSpinlock )
    #define portEXIT_CRITICAL_DATA_GROUP_FROM_ISR( x, pxISRSpinlock )         vPortExitCriticalDataGroupFromISR( x, pxISRSpinlock )
#endif /* #if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) ) */
```

Below is an example implementation of `portENTER_CRITICAL_DATA_GROUP( pxTaskSpinlock, pxISRSpinlock )` and `portEXIT_CRITICAL_DATA_GROUP( pxTaskSpinlock, pxISRSpinlock )`. Note that:

- `pxTaskSpinlock` is optional (it may be `NULL`), in case users want to create their own data groups that are protected by only a single lock.
- The kernel implements the `xTaskUnlockCanYield()` function to indicate whether a yield should occur when a critical section exits. This function takes into account whether there are any pending yields and whether preemption is currently disabled.
```c
void vPortEnterCriticalDataGroup( port_spinlock_t * pxTaskSpinlock, port_spinlock_t * pxISRSpinlock )
{
    portDISABLE_INTERRUPTS();

    BaseType_t xCoreID = xPortGetCoreID();

    /* Task spinlock is optional and is always taken first */
    if( pxTaskSpinlock != NULL )
    {
        vPortSpinlockTake( pxTaskSpinlock, portMUX_NO_TIMEOUT );
        uxCriticalNesting[ xCoreID ]++;
    }

    /* ISR spinlock must always be provided */
    vPortSpinlockTake( pxISRSpinlock, portMUX_NO_TIMEOUT );
    uxCriticalNesting[ xCoreID ]++;
}

void vPortExitCriticalDataGroup( port_spinlock_t * pxTaskSpinlock, port_spinlock_t * pxISRSpinlock )
{
    BaseType_t xCoreID = xPortGetCoreID();
    BaseType_t xYieldCurrentTask;

    configASSERT( uxCriticalNesting[ xCoreID ] > 0U );

    /* Get the xYieldPending status inside the critical section. */
    xYieldCurrentTask = xTaskUnlockCanYield();

    /* ISR spinlock is always released first */
    vPortSpinlockRelease( pxISRSpinlock );
    uxCriticalNesting[ xCoreID ]--;

    /* Task spinlock is optional and is released last */
    if( pxTaskSpinlock != NULL )
    {
        vPortSpinlockRelease( pxTaskSpinlock );
        uxCriticalNesting[ xCoreID ]--;
    }

    if( uxCriticalNesting[ xCoreID ] == 0 )
    {
        portENABLE_INTERRUPTS();

        /* When a task yields in a critical section, it just sets xYieldPending
         * to true. So now that we have exited the critical section, check if
         * xYieldPending is true, and if so, yield. */
        if( xYieldCurrentTask != pdFALSE )
        {
            portYIELD();
        }
        else
        {
            mtCOVERAGE_TEST_MARKER();
        }
    }
}
```
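
The `xTaskUnlockCanYield()` check used above can be modeled as follows. This is a guess at its logic based on this document's description, not the actual `tasks.c` implementation: yield only when a yield is pending for the current core and preemption is not disabled.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock per-core kernel state; names mirror the proposal but the real
 * implementation lives in tasks.c and differs in detail. */
static bool xYieldPendings[ 2 ] = { false, false };
static unsigned uxPreemptionDisableCount[ 2 ] = { 0, 0 };
static int xCurrentCore = 0;

/* Sketch of the check: a yield is allowed on unlock only if one is
 * pending AND the current task's preemption is not disabled. */
bool xFakeTaskUnlockCanYield( void )
{
    return xYieldPendings[ xCurrentCore ] &&
           ( uxPreemptionDisableCount[ xCurrentCore ] == 0 );
}
```
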

Below is an example implementation of `portENTER_CRITICAL_DATA_GROUP_FROM_ISR( pxISRSpinlock )` and `portEXIT_CRITICAL_DATA_GROUP_FROM_ISR( x, pxISRSpinlock )`. Note that only `pxISRSpinlock` needs to be provided, since ISR critical sections take a single lock.

```c
UBaseType_t uxPortEnterCriticalDataGroupFromISR( port_spinlock_t * pxISRSpinlock )
{
    UBaseType_t uxSavedInterruptStatus = 0;

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();

    vPortSpinlockTake( pxISRSpinlock, portMUX_NO_TIMEOUT );
    uxCriticalNesting[ xPortGetCoreID() ]++;

    return uxSavedInterruptStatus;
}

void vPortExitCriticalDataGroupFromISR( UBaseType_t uxSavedInterruptStatus, port_spinlock_t * pxISRSpinlock )
{
    BaseType_t xCoreID = xPortGetCoreID();

    configASSERT( uxCriticalNesting[ xCoreID ] > 0U );

    vPortSpinlockRelease( pxISRSpinlock );
    uxCriticalNesting[ xCoreID ]--;

    if( uxCriticalNesting[ xCoreID ] == 0 )
    {
        portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
    }
}
```

# Source Specific Changes

- Added an `xTaskSpinlock` and `xISRSpinlock` to the data structures of each data group
- All calls to `taskENTER/EXIT_CRITICAL[_FROM_ISR]()` have been replaced with `data_groupENTER/EXIT_CRITICAL[_FROM_ISR]()`
- Added `xTaskUnlockCanYield()`, which indicates whether a yield should occur when exiting a critical section (i.e., unlocking a data group). Yields should not occur if preemption is disabled (such as when exiting a critical section inside a suspension block)

## Tasks (Kernel Data Group)

- Some functions are called from nested critical sections of other data groups, so an extra critical section call needs to be added to lock/unlock the kernel data group:
  - `vTaskInternalSetTimeOutState()`
  - `xTaskIncrementTick()`
  - `vTaskSwitchContext()`
  - `xTaskRemoveFromEventList()`
  - `eTaskConfirmSleepModeStatus()`
  - `xTaskPriorityDisinherit()`
  - `pvTaskIncrementMutexHeldCount()`
- Some functions are called from nested suspension blocks of other data groups, so an extra suspend/resume call needs to be added:
  - `vTaskPlaceOnEventList()`
  - `vTaskPlaceOnUnorderedEventList()`
  - `vTaskPlaceOnEventListRestricted()`
- `prvCheckForRunStateChange()` has been removed
- Updated `vTaskSuspendAll()` and `xTaskResumeAll()`
  - Now hold the `xTaskSpinlock` during kernel suspension
  - Also increment/decrement `xPreemptionDisable` to prevent a yield from occurring when exiting a critical section from inside a kernel suspension block

## Queue

- Added `queueLOCK()` and `queueUNLOCK()`
  - If granular locks are disabled, these revert to the previous `prvLockQueue()` and `prvUnlockQueue()`
  - If granular locks are enabled, they lock/unlock the queue data group for tasks
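
The `queueLOCK()` dispatch described above might look roughly like the following sketch. The stub bodies are invented purely to show the conditional wiring; the real targets are `prvLockQueue()` and the queue data group's lock routine.

```c
#include <assert.h>

/* Records which path ran: 1 = legacy queue lock, 2 = granular lock.
 * Both stubs below are hypothetical stand-ins. */
static int iLastPath = 0;

static void prvLockQueueStub( void )     { iLastPath = 1; }
static void prvLockDataGroupStub( void ) { iLastPath = 2; }

#define portUSING_GRANULAR_LOCKS    1

#if ( portUSING_GRANULAR_LOCKS == 1 )
    /* Granular locking enabled: lock the queue's own data group */
    #define queueLOCK( pxQueue )    prvLockDataGroupStub()
#else
    /* Granular locking disabled: fall back to the legacy path */
    #define queueLOCK( pxQueue )    prvLockQueueStub()
#endif
```

With `portUSING_GRANULAR_LOCKS` set to 1 here, `queueLOCK()` resolves to the granular path at compile time, so the unused legacy stub costs nothing at runtime.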
372+
373+
## Event Groups
374+
375+
- Added `eventLOCK()` and `eventUNLOCK()`
376+
- If granular locks are disabled, reverts to the previous `vTaskSuspendAll()` and `xTaskResumeAll()` calls
377+
- If granular locks are enabled, will lock/unlock the event groups data group for tasks
378+
- `xEventGroupSetBits()` and `vEventGroupDelete()` will manually walk the task lists (which belong to the kernel data group). Thus, an extra `vTaskSuspendAll()`/`xTaskResumeAll()` is added to ensure that the kernel data group is suspended while walking those tasks lists.
379+
380+
## Stream Buffer
381+
382+
- Added `sbLOCK()` and `sbUNLOCK()`
383+
- If granular locks are disabled, reverts to the previous `vTaskSuspendAll()` and `xTaskResumeAll()` calls
384+
- If granular locks are enabled, will lock/unlock the stream buffer data group for tasks
385+
386+
## Timers
387+
388+
- Timers don't have a lock/unlock function. The existing `vTaskSuspendAll()`/`xTaskResumeAll()` calls are valid as they rely on freezing the tick count which is part of the kernel data group.

# Prerequisite Refactoring

A number of refactoring commits have been added to make the addition of the granular locking changes simpler:

1. Move critical sections inside `xTaskPriorityInherit()`

   Previously, `xTaskPriorityInherit()` was called with wrapping critical sections. The critical sections have now been moved inside the function so that they have access to the kernel data group's spinlocks.

2. Move critical sections into `vTaskPriorityDisinheritAfterTimeout()`

   Previously, `vTaskPriorityDisinheritAfterTimeout()` was called with wrapping critical sections, and the highest-priority waiting task was separately obtained via `prvGetDisinheritPriorityAfterTimeout()`. The critical section and the check of the highest priority have all been moved into `vTaskPriorityDisinheritAfterTimeout()`, as all of these operations access the kernel data group.

3. Allow `vTaskPreemptionEnable()` to be nested

   Previously, nested calls to `vTaskPreemptionEnable()` were not supported. However, nested calls are required for granular locking due to the occurrence of nested suspension across multiple data groups.

   Thus, `vTaskPreemptionEnable()` has been updated to support nested calls. This is done by changing `xPreemptionDisable` to a count, where a non-zero count means that the current task cannot be preempted.
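
The count-based `xPreemptionDisable` behavior can be sketched as follows (a hypothetical standalone model, not the kernel's code): each disable increments the count, each enable decrements it, and preemption is only permitted once the count returns to zero.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-task preemption-disable count, for illustration. */
static unsigned uxFakePreemptionDisable = 0;

void vFakePreemptionDisable( void )
{
    uxFakePreemptionDisable++;
}

void vFakePreemptionEnable( void )
{
    /* An enable without a matching disable is a usage error. */
    assert( uxFakePreemptionDisable > 0 );
    uxFakePreemptionDisable--;
}

/* Preemption is only allowed when no disable is outstanding. */
bool xFakeCanPreempt( void )
{
    return uxFakePreemptionDisable == 0;
}
```
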

# Performance Metrics

Todo
