
Run LLMxCPG-Q by vllm #22

Open
SmallBookworm wants to merge 3 commits into qcri:main from SmallBookworm:vllm-branch

Conversation

@SmallBookworm
Contributor

This pull request adds support for running the LLMxCPG-Q model locally using a vLLM server, making it easier to generate and run queries with a local OpenAI-compatible endpoint. The main changes include documentation updates and the addition of a helper script to launch the vLLM server.

  • New vLLM server integration:
    Added a new script run_vllm_server.py to launch a local OpenAI-compatible vLLM server for LLMxCPG-Q, with support for both merged/full models and LoRA adapters, and various configuration options such as host, port, tensor parallelism, and more.

  • Documentation updates:
    Updated queries/README.md to provide clear instructions for launching the vLLM server and running the query generation script with the local endpoint, including installation requirements and example commands.
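The server-launch flow described above can be sketched in Python. The vLLM flags below (`--model`, `--host`, `--port`, `--tensor-parallel-size`, `--enable-lora`, `--lora-modules`) are real vLLM CLI options, but the helper's signature and the adapter name `llmxcpg-q` are illustrative assumptions, not the actual interface of `run_vllm_server.py`:

```python
from typing import Optional

# Hedged sketch of how a helper like run_vllm_server.py might assemble
# the command line for vLLM's OpenAI-compatible API server. The helper
# signature and adapter name are assumptions for illustration.
def build_vllm_command(model: str, host: str = "0.0.0.0", port: int = 8000,
                       tensor_parallel: int = 1,
                       lora_path: Optional[str] = None) -> list:
    cmd = [
        "python", "-m", "vllm.entrypoints.openai.api_server",
        "--model", model,
        "--host", host,
        "--port", str(port),
        "--tensor-parallel-size", str(tensor_parallel),
    ]
    if lora_path:
        # Serve a LoRA adapter on top of the base (merged/full) model.
        cmd += ["--enable-lora", "--lora-modules", f"llmxcpg-q={lora_path}"]
    return cmd

if __name__ == "__main__":
    print(" ".join(build_vllm_command("QCRI/LLMxCPG-Q")))
```

Building the argument list in one place (rather than string concatenation) keeps the full-model and LoRA paths from diverging.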

Contributor

Copilot AI left a comment


Pull request overview

Adds a helper script and documentation to run the LLMxCPG-Q model locally via a vLLM OpenAI-compatible server, enabling query generation against a local endpoint.

Changes:

  • Added queries/run_vllm_server.py to launch a vLLM OpenAI-compatible API server (supports full/merged models and LoRA adapters).
  • Updated queries/README.md with instructions to start the vLLM server and run generate_and_run_queries.py against it.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| queries/run_vllm_server.py | New CLI helper that builds and runs the vLLM API server command and prints usage hints. |
| queries/README.md | Documents the new local vLLM workflow and example commands. |


Lekssays and others added 2 commits March 26, 2026 12:05
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@Lekssays
Collaborator

Thanks @SmallBookworm for the PR! Please take a look at the comments.

Contributor Author

@SmallBookworm left a comment


Copilot's suggested code is no longer needed: I have changed --llm-endpoint to --llm-model-name.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.



Comment on lines +130 to +140
```python
print("Model name for --llm-endpoint:", args.served_model_name)
print()
print("Run generate_and_run_queries.py with:")
print(
    "python generate_and_run_queries.py "
    "-d /path/to/dataset.json "
    "-o /path/to/output_dir "
    "--llm-model-type vLLM "
    f"--llm-endpoint {args.served_model_name} "
    f"--llm-port {args.port}"
)
```

Copilot AI Mar 27, 2026


This printed helper command uses --llm-endpoint and labels it as a “Model name”, while the README uses --llm-model-name. Please align the flag name across run_vllm_server.py output and the README (and with generate_and_run_queries.py’s actual argparse flags). Otherwise users will copy/paste a command that fails or pass the wrong value type for the option.
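One way to keep the printed hint, the README, and the argparse flags from drifting, sketched below. The flag name `--llm-model-name` comes from the discussion above; the shared-constant pattern and the `usage_hint` helper are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch: derive the printed hint from one shared constant
# so run_vllm_server.py's output cannot drift from the flag that
# generate_and_run_queries.py actually parses.
MODEL_FLAG = "--llm-model-name"  # must match the argparse definition

def usage_hint(served_model_name: str, port: int) -> str:
    # Build the copy/paste-ready command printed after server startup.
    return (
        "python generate_and_run_queries.py "
        "-d /path/to/dataset.json "
        "-o /path/to/output_dir "
        "--llm-model-type vLLM "
        f"{MODEL_FLAG} {served_model_name} "
        f"--llm-port {port}"
    )

print(usage_hint("LLMxCPG-Q", 8000))
```

Importing the same constant in both scripts (and quoting it in the README) would make a mismatch like the one flagged here a single-line fix.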

Collaborator


@copilot apply changes based on this feedback


3 participants