Conversation

@Superjomn (Collaborator) commented Oct 24, 2025

Summary by CodeRabbit

  • New Features

    • Added configuration option to control LLM frontend process spawning behavior, enabling users to choose between separate process execution (more stable) or inline execution (higher throughput).
  • Documentation

    • Added environment variable documentation describing process spawning configuration and its trade-offs.
  • Tests

    • Extended test coverage to validate both process spawning modes.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@Superjomn requested review from a team as code owners on October 24, 2025 06:59
@Superjomn (Collaborator, Author)

/bot run

@coderabbitai (Contributor) commented Oct 24, 2025

📝 Walkthrough

Walkthrough

Introduces a new toggle spawn_extra_main_process to control whether the LLM main process runs as a spawned background process or directly in the main process, with two separate execution paths, updated MPI environment variable handling, and supporting test infrastructure changes.

Changes

  • Documentation update: examples/llm-api/llm_mgmn_llm_distributed.sh
    Added commentary documenting the optional TLLM_SPAWN_EXTRA_MAIN_PROCESS environment variable and the trade-offs between stability and high-throughput streaming generation.
  • Main launch orchestration: tensorrt_llm/llmapi/trtllm-llmapi-launch
    Introduces the spawn_extra_main_process toggle (default 1) with two new execution functions: run_with_spawn_extra_main_process (spawns a background process, manages MPI variables, propagates exit codes) and run_without_spawn_extra_main_process (runs the task in the main process, spawning mgmn_worker_node for non-zero ranks). Centralizes the logic that selects the execution path and computes tllm_mpi_size.
  • Test environment: tests/unittest/llmapi/_run_remote_mpi_session.sh, tests/unittest/llmapi/test_mpi_session.py
    The test script now logs the TLLM_SPAWN_EXTRA_MAIN_PROCESS value. The Python test adds spawn_extra_main_process parametrization (True/False), updates the shell script reference, exposes the environment variable to the subprocess, and extends Popen with start_new_session=True, universal_newlines=True, and cwd arguments.
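The two-path dispatch summarized above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the actual launch script: the function names follow the walkthrough, but the bodies here are placeholders that only print the selected mode.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the spawn_extra_main_process dispatch.
# Function names follow the walkthrough; bodies are placeholders.
set -euo pipefail

spawn_extra_main_process=${TLLM_SPAWN_EXTRA_MAIN_PROCESS:-1}

run_with_spawn_extra_main_process() {
    # In the real script: clean MPI env vars, spawn the task in a
    # background process, and propagate its exit code.
    echo "mode: spawn"
}

run_without_spawn_extra_main_process() {
    # In the real script: run the task directly in this process and
    # spawn mgmn_worker_node for non-zero ranks.
    echo "mode: no-spawn"
}

if [ "$spawn_extra_main_process" -eq 1 ]; then
    run_with_spawn_extra_main_process
else
    run_without_spawn_extra_main_process
fi
```

With the variable unset, the default of 1 keeps the spawn path, matching the documented default behavior.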

Sequence Diagram(s)

sequenceDiagram
    participant User as User/Launcher
    participant MainScript as trtllm-llmapi-launch
    participant BGProc as Background Process
    participant MainTask as Main Task
    participant WorkerNode as mgmn_worker_node
    participant MPI as MPI Server

    User->>MainScript: Invoke with spawn_extra_main_process=1 (or 0)

    alt spawn_extra_main_process=1 (spawn enabled)
        MainScript->>MainScript: Clean MPI environment variables
        MainScript->>MainScript: Prepare IPC address/keys
        MainScript->>BGProc: Spawn background process (rank 0)
        par Background execution
            BGProc->>MainTask: Execute main task
            MainTask-->>BGProc: Return exit code
        and Parallel worker execution (rank > 0)
            MainScript->>WorkerNode: Execute mgmn_worker_node
            WorkerNode-->>MainScript: Return exit code
        end
        MainScript->>MPI: Stop MGMN leader node
        MPI-->>MainScript: Return exit code
        MainScript-->>User: Propagate final exit status
    else spawn_extra_main_process=0 (spawn disabled)
        MainScript->>MainTask: Execute main task (in-process)
        MainTask-->>MainScript: Return exit code
        alt Rank > 0
            MainScript->>WorkerNode: Execute mgmn_worker_node
            WorkerNode-->>MainScript: Return exit code
        end
        MainScript-->>User: Propagate exit status
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The primary complexity stems from the launch script's new control flow logic with two distinct execution paths, MPI environment variable management, and exit-status propagation semantics. The test changes introduce parametrization and subprocess configuration adjustments. While the overall scope is focused (3 files), the density of logic changes in the main script and the need to verify control-flow correctness across both execution modes warrant careful review.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check (⚠️ Warning): The PR description is incomplete and fails to meet the template requirements. While the author checked the PR Checklist box, the two critical sections, "Description" and "Test Coverage", are entirely empty, containing only HTML comment placeholders. According to the repository template, the Description section should explain the issue and solution, and the Test Coverage section should list the test cases that safeguard the changes. Without them, reviewers cannot understand the motivation, implementation details, or test strategy for this PR.
✅ Passed checks (2 passed)
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Title Check (✅ Passed): The pull request title clearly and specifically identifies the main change: introducing a TLLM_SPAWN_EXTRA_MAIN_PROCESS flag to control whether the main process is spawned separately. The title accurately reflects the core functionality across the modified files, and is concise, specific, and unambiguous about the primary purpose of the changeset.

@coderabbitai (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/llmapi/trtllm-llmapi-launch (1)

31-44: Use python3 consistently and avoid hard dependency on openssl

  • The free‑port probe uses “python”, while the rest uses “python3”. On images without “python”, this fails.
  • HMAC key generation relies on “openssl”; not always present in minimal containers.

Apply:

-    local free_port=$(python -c 'import socket; s=socket.socket();
+    local free_port=$(python3 -c 'import socket; s=socket.socket();
 port = 10012
 while True:
     try:
         s.bind(("", port))
         break
     except OSError:
         port += 1
 print(port); s.close()')
     export TLLM_SPAWN_PROXY_PROCESS_IPC_ADDR="tcp://127.0.0.1:${free_port}"
     log_stderr "TLLM_SPAWN_PROXY_PROCESS_IPC_ADDR: $TLLM_SPAWN_PROXY_PROCESS_IPC_ADDR"
 
-    export TLLM_SPAWN_PROXY_PROCESS_IPC_HMAC_KEY=$(openssl rand -hex 32)
+    if command -v openssl >/dev/null 2>&1; then
+        export TLLM_SPAWN_PROXY_PROCESS_IPC_HMAC_KEY="$(openssl rand -hex 32)"
+    else
+        export TLLM_SPAWN_PROXY_PROCESS_IPC_HMAC_KEY="$(python3 -c 'import secrets; print(secrets.token_hex(32))')"
+    fi
🧹 Nitpick comments (8)
examples/llm-api/llm_mgmn_llm_distributed.sh (1)

38-41: Clarify default and when to use this toggle

Suggest noting the default is 1 (spawn enabled) and that setting to 0 trades throughput for stability; also mention it’s consumed by trtllm-llmapi-launch. Example:

“TLLM_SPAWN_EXTRA_MAIN_PROCESS=0 disables extra main-process spawning (default is 1). Use 0 only when debugging or if spawning is unstable; keep 1 for high‑throughput streaming.”

tensorrt_llm/llmapi/trtllm-llmapi-launch (5)

8-9: Harden boolean parsing for TLLM_SPAWN_EXTRA_MAIN_PROCESS

Current test uses -eq 1; non‑numeric values (e.g., “true”) will cause “integer expression expected” and abort under set -e. Normalize common truthy/falsey values and default safely.

Apply:

-# Whether to spawn a additional process for the main process, it will optimize
-# the performance of the main process.
-spawn_extra_main_process=${TLLM_SPAWN_EXTRA_MAIN_PROCESS:-1}
+# Whether to spawn an additional process for the main process (default: 1).
+# Accept 1/0/true/false/yes/no/on/off (case-insensitive).
+normalize_bool() {
+    local v="${1:-1}"
+    v="${v,,}"
+    case "$v" in
+        1|true|yes|on)  echo 1 ;;
+        0|false|no|off) echo 0 ;;
+        *) log_stderr "Invalid TLLM_SPAWN_EXTRA_MAIN_PROCESS='$1', defaulting to 1"; echo 1 ;;
+    esac
+}
+spawn_extra_main_process="$(normalize_bool "${TLLM_SPAWN_EXTRA_MAIN_PROCESS:-1}")"

100-118: Graceful termination: forward signals to the background subshell

If the job is canceled (SIGTERM/SIGINT), ensure the background task process is also terminated. Add a trap after capturing subshell_pid.

Apply:

         subshell_pid=$!
         log_stderr "Rank${mpi_rank} Subshell PID: $subshell_pid"
 
+        # Ensure background subshell is terminated on INT/TERM
+        trap 'log_stderr "Rank${mpi_rank} received signal; stopping subshell $subshell_pid"; kill -TERM "$subshell_pid" 2>/dev/null || true' INT TERM
+
         log_stderr "Rank${mpi_rank} run mgmn leader node with mpi_world_size: $(mpi_world_size) ..."
         log_stderr "Rank0 host: $HOSTNAME"
         python3 -m tensorrt_llm.llmapi.mgmn_leader_node

I can extend this to capture and terminate any child PIDs from the leader if needed.

Also applies to: 119-124


139-151: Mirror exit logging/propagation in the no‑spawn path (rank 0)

For consistency and easier debugging, log and propagate the task’s exit code explicitly.

Apply:

-    if [ -z "$mpi_rank" ] || [ "$mpi_rank" -eq 0 ]; then
-        "${task_with_command[@]}"
+    if [ -z "$mpi_rank" ] || [ "$mpi_rank" -eq 0 ]; then
+        set +e
+        "${task_with_command[@]}"
+        task_exit_code=$?
+        log_stderr "Rank${mpi_rank} Task exit code: $task_exit_code"
+        exit $task_exit_code

10-11: Minor: remove unused native_mpi_rank

native_mpi_rank is assigned but not used.

-native_mpi_rank=$OMPI_COMM_WORLD_RANK

107-109: HOSTNAME may be unset in minimal containers

Prefer $(hostname) to avoid empty output.

-        log_stderr "Rank0 host: $HOSTNAME"
+        log_stderr "Rank0 host: $(hostname)"
tests/unittest/llmapi/_run_remote_mpi_session.sh (1)

8-8: LGTM; consider echoing the effective branch for assertions

Printing the numeric toggle is helpful. Optionally add a second line like “mode: spawn” vs “mode: no‑spawn” to assert correct path in tests.
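Concretely, such a mode line could be produced like this (a hypothetical addition to the helper script, not the committed code; print_mode is an invented name):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the toggle value plus a human-readable
# mode line that tests can assert on.
print_mode() {
    local toggle="${1:-1}"
    echo "TLLM_SPAWN_EXTRA_MAIN_PROCESS: ${toggle}"
    if [ "$toggle" -eq 1 ]; then
        echo "mode: spawn"
    else
        echo "mode: no-spawn"
    fi
}

print_mode "${TLLM_SPAWN_EXTRA_MAIN_PROCESS:-1}"
```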

tests/unittest/llmapi/test_mpi_session.py (1)

58-63: Assert the selected mode from process output for stronger coverage

You already print TLLM_SPAWN_EXTRA_MAIN_PROCESS in the helper script. Capture one line (e.g., buffer stderr) and assert it matches the parameter to ensure both code paths are exercised, not just invoked.

Example:

-    with Popen(command,
+    with Popen(command,
                env=envs,
                stdout=PIPE,
                stderr=PIPE,
                bufsize=1,
                start_new_session=True,
                universal_newlines=True,
                cwd=os.path.dirname(os.path.abspath(__file__))) as process:
-        # Function to read from a stream and write to output
+        captured = []
+        # Function to read from a stream and write to output
         def read_stream(stream, output_stream):
             for line in stream:
                 output_stream.write(line)
                 output_stream.flush()
+                captured.append(line)
         ...
         return_code = process.wait()
         ...
         if return_code != 0:
             raise subprocess.CalledProcessError(return_code, command)
+        # Basic check that the selected mode was honored
+        expected = f"TLLM_SPAWN_EXTRA_MAIN_PROCESS: {'1' if spawn_extra_main_process else '0'}"
+        assert any(expected in ln for ln in captured), f"Missing toggle line: {expected}"

Also applies to: 70-73, 73-81

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 35e35db and 9b1987c.

📒 Files selected for processing (4)
  • examples/llm-api/llm_mgmn_llm_distributed.sh (1 hunks)
  • tensorrt_llm/llmapi/trtllm-llmapi-launch (2 hunks)
  • tests/unittest/llmapi/_run_remote_mpi_session.sh (1 hunks)
  • tests/unittest/llmapi/test_mpi_session.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/unittest/llmapi/test_mpi_session.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/unittest/llmapi/test_mpi_session.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/unittest/llmapi/test_mpi_session.py
🧬 Code graph analysis (1)
tests/unittest/llmapi/test_mpi_session.py (2)
tests/integration/defs/trt_test_alternative.py (1)
  • Popen (153-168)
tests/unittest/llmapi/apps/_test_disagg_serving_multi_nodes.py (1)
  • env (61-68)
🪛 Ruff (0.14.1)
tests/unittest/llmapi/test_mpi_session.py

73-73: subprocess call: check for execution of untrusted input

(S603)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/llmapi/trtllm-llmapi-launch (2)

156-158: Env var casing consistency for tllm_mpi_size

You export tllm_mpi_size (lowercase). If downstream expects uppercase naming (typical for env vars), consider TLLM_MPI_SIZE or confirm current usage.

Please confirm the consumer name; if uppercase is expected, adjust:

-export tllm_mpi_size=$(mpi_world_size)
-log_stderr "tllm_mpi_size: $tllm_mpi_size"
+export TLLM_MPI_SIZE="$(mpi_world_size)"
+log_stderr "TLLM_MPI_SIZE: $TLLM_MPI_SIZE"

63-77: Verify whether KMP_ inclusion in blacklist is intentional or overly broad.

The review comment's concern is valid. Test file _test_disagg_serving_multi_nodes.py (line 67) strips only PMI_, OMPI_, PMIX_, SLURM_, while both trtllm-llmapi-launch and mpi_session.py include KMP_ in their broader blacklist. Since KMP_ controls OpenMP/MKL threading (not MPI initialization), and the launch script's documented purpose is to "clean the MPI environment" before spawning child processes, the inclusion of KMP_ warrants review to confirm it's intentional and not unintended side-effect cleanup.

tests/unittest/llmapi/test_mpi_session.py (1)

1-7: Security hint (Ruff S603) is OK here

Inputs are controlled: task_type is a Literal (“submit”|“submit_sync”) and the script path is computed locally. Keeping shell=False and arg list avoids injection.

If desired, run with a third, invalid task_type to confirm failure is clean and not shell‑interpreted.

Also applies to: 73-81
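Such a negative check could be sketched as below. This is a hypothetical stand-in: run_task and its exit codes are invented for illustration, while the real test launches a shell script through Popen. The point it demonstrates is that with shell=False and an argument list, the task_type value is passed as data and never shell-interpreted.

```python
# Hypothetical negative test: an unrecognized task_type should fail
# with a clean non-zero exit code, not be interpreted by a shell.
import subprocess

def run_task(task_type: str) -> int:
    """Stand-in for the real helper: exit 0 for valid task types, 2 otherwise."""
    result = subprocess.run(
        [
            "python3", "-c",
            "import sys; sys.exit(0 if sys.argv[1] in ('submit', 'submit_sync') else 2)",
            task_type,
        ],
        shell=False,  # arguments are passed as data, never parsed by a shell
    )
    return result.returncode

assert run_task("submit") == 0
assert run_task("bogus; rm -rf /") == 2  # treated as a literal string
```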

@tensorrt-cicd (Collaborator)

PR_Github #22398 [ run ] triggered by Bot. Commit: 9b1987c

@Superjomn (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22409 [ run ] triggered by Bot. Commit: f245ee6

@tensorrt-cicd (Collaborator)

PR_Github #22398 [ run ] completed with state ABORTED. Commit: 9b1987c
LLM/main/L0_MergeRequest_PR #16881 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #22409 [ run ] completed with state FAILURE. Commit: f245ee6
/LLM/main/L0_MergeRequest_PR pipeline #16891 completed with status: 'FAILURE'

@Superjomn force-pushed the fix.turn-off-spawn-main-process branch from f245ee6 to 1bad84f on October 24, 2025 10:09
@Superjomn (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22431 [ run ] triggered by Bot. Commit: 1bad84f

Signed-off-by: Superjomn <[email protected]>
@Superjomn force-pushed the fix.turn-off-spawn-main-process branch from 1bad84f to c82731d on October 24, 2025 11:09
@Superjomn changed the title from "[https://nvbugs/5552836][fix] turn off spawning main process" to "[https://nvbugs/5552836][fix] Add a flag to disable spawning main process" on Oct 24, 2025
@Superjomn (Collaborator, Author)

/bot run

@Superjomn changed the title from "[https://nvbugs/5552836][fix] Add a flag to disable spawning main process" to "[https://nvbugs/5552836][fix] Add flag TLLM_SPAWN_EXTRA_MAIN_PROCESS to disable spawning main process" on Oct 24, 2025
@tensorrt-cicd (Collaborator)

PR_Github #22436 [ run ] triggered by Bot. Commit: c82731d

@tensorrt-cicd (Collaborator)

PR_Github #22431 [ run ] completed with state ABORTED. Commit: 1bad84f
LLM/main/L0_MergeRequest_PR #16905 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #22436 [ run ] completed with state SUCCESS. Commit: c82731d
/LLM/main/L0_MergeRequest_PR pipeline #16909 completed with status: 'FAILURE'

@Superjomn (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22499 [ run ] triggered by Bot. Commit: c82731d

@tensorrt-cicd (Collaborator)

PR_Github #22499 [ run ] completed with state SUCCESS. Commit: c82731d
/LLM/main/L0_MergeRequest_PR pipeline #16956 completed with status: 'FAILURE'

@Superjomn (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22517 [ run ] triggered by Bot. Commit: c82731d

@tensorrt-cicd (Collaborator)

PR_Github #22517 [ run ] completed with state SUCCESS. Commit: c82731d
/LLM/main/L0_MergeRequest_PR pipeline #16973 completed with status: 'FAILURE'


export_free_tcp_addr_for_spawn_proxy_process

if [ -z "$mpi_rank" ] || [ "$mpi_rank" -eq 0 ]; then
A Member commented:

I assume the following parts are directly moved to run_with_spawn_extra_main_process without changes?

