Feat: browser extension #1436

Draft
nitpicker55555 wants to merge 26 commits into main from feat/browser_extension

Conversation

@nitpicker55555
Collaborator

Related Issue

Closes #

Description

Testing Evidence (REQUIRED)

  • I have included human-verified testing evidence in this PR.
  • This PR includes frontend/UI changes, and I attached screenshot(s) or screen recording(s).
  • No frontend/UI changes in this PR.

What is the purpose of this pull request?

  • Bug fix
  • New Feature
  • Documentation update
  • Other

Contribution Guidelines Acknowledgement

nitpicker55555 and others added 20 commits February 27, 2026 01:03
- Add @mention dropdown UI in input box (keyboard navigable)
- Parse {{@agentid}} tags in message content for mention badge rendering
- Backend: route @agent messages directly to persistent agents, skip workforce
- Backend: reuse persistent agents across turns (preserve toolkit state)
- Frontend: persist mention target across turns, render in input and chat bubbles
- Fix keyboard ArrowUp/Down selection in mention dropdown
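As a rough illustration of the {{@agentid}} tag parsing described above — the tag syntax comes from the commit message, but the function name and regex are assumptions, not the PR's actual code:

```python
import re

# Matches {{@agentid}} mention tags; the tag syntax is taken from the
# commit message above, everything else here is an illustrative guess.
MENTION_RE = re.compile(r"\{\{@([A-Za-z0-9_-]+)\}\}")

def extract_mentions(text: str) -> tuple[str, list[str]]:
    """Return the message with mention tags stripped, plus the agent ids."""
    agent_ids = MENTION_RE.findall(text)
    cleaned = MENTION_RE.sub("", text).strip()
    return cleaned, agent_ids
```

Extracted ids would then drive both the badge rendering on the frontend and the direct-to-agent routing on the backend.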
Persistent agents reused across multi-turn @mention conversations were
losing their tool call history to prune_tool_calls_from_memory, causing
the LLM to repeat operations (e.g. get_page_snapshot) it had already performed.
Move the prune_tool_calls_from_memory=False override from the chat_service
persistent-agent path into the browser agent factory, so all browser agents
(workforce and @mention) retain tool call history.
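A minimal sketch of the fix described in this commit, assuming a factory function and a BrowserAgent constructor — both names are hypothetical stand-ins for the real camel/eigent classes:

```python
from dataclasses import dataclass

@dataclass
class BrowserAgent:
    # Hypothetical stand-in for the real browser agent class.
    agent_id: str
    prune_tool_calls_from_memory: bool = True  # assumed default: prune

def make_browser_agent(agent_id: str) -> BrowserAgent:
    # The override now lives in the factory rather than in a
    # chat_service persistent-agent special case, so workforce and
    # @mention agents alike keep their tool call history across turns.
    return BrowserAgent(agent_id, prune_tool_calls_from_memory=False)
```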
Resolve conflicts in:
- server/uv.lock (revision number, greenlet wheel format)
- src/store/chatStore.ts (merge target/displayContent and executionId/projectId params)
- src/components/ChatBox/BottomBox/InputBox.tsx (merge mention and trigger/expanded features)
- src/components/ChatBox/index.tsx (merge mention/direct-agent features with new unified layout)
Resolve conflicts keeping feature branch mention/direct-agent additions
merged with main's trigger, execution, and layout changes.
- Add ExtensionProxyWrapper support for browser_plug Chrome extension
- Extension proxy managed via dedicated Extension settings page
- Backend starts WebSocket server on connect, polls for extension status
- Multi-tab parallelism: each sub-agent gets its own browser tab
- Extension proxy mode is exclusive (no CDP fallback when connected)
- Add extension_proxy_service singleton and controller endpoints
- Use local camel package instead of PyPI version
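A sketch of how the extension_proxy_service singleton with per-sub-agent tab allocation might look; only the singleton and multi-tab ideas come from the bullet points above, the class shape and method names are assumptions:

```python
import threading

class ExtensionProxyService:
    """Hypothetical singleton tracking extension connectivity and tabs."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                inst = super().__new__(cls)
                inst.connected = False       # set once the extension dials in
                inst._next_tab_id = 0
                inst._tabs = {}              # sub-agent id -> dedicated tab id
                cls._instance = inst
            return cls._instance

    def allocate_tab(self, agent_id: str) -> int:
        # Multi-tab parallelism: each sub-agent gets its own browser tab.
        with self._lock:
            if agent_id not in self._tabs:
                self._next_tab_id += 1
                self._tabs[agent_id] = self._next_tab_id
            return self._tabs[agent_id]
```

In exclusive proxy mode, a service like this would be the single source of truth for whether CDP fallback is allowed.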
Enable real-time token streaming from ChatAgent to Chrome extension UI
by setting stream=True in model config and async-iterating over response
chunks, sending each text delta immediately via STREAM_TEXT.
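The streaming change described in this commit could be sketched as follows; the `send` callback standing in for the STREAM_TEXT WebSocket message is an assumption:

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable

async def relay_stream(
    chunks: AsyncIterator[str],
    send: Callable[[str], Awaitable[None]],
) -> str:
    """Forward each text delta to the extension UI as it arrives."""
    parts: list[str] = []
    async for delta in chunks:
        parts.append(delta)
        await send(delta)  # stand-in for a STREAM_TEXT WebSocket frame
    return "".join(parts)
```

The caller still gets the full response back, but the UI sees each delta immediately instead of waiting for the complete message.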
@nitpicker55555 nitpicker55555 marked this pull request as draft March 4, 2026 14:25
Includes sidepanel UI, background service worker with WebSocket
connection to eigent backend, CDP debugger management, and
auto-reconnect/tab-lock/settings-persistence robustness features.
@nitpicker55555
Collaborator Author

TODO: GIF recording of the actions

Resolve conflicts in pyproject.toml, uv.lock, UserQueryGroup.tsx,
and ChatBox/index.tsx - keep branch's privacy/mention features and
main's package versions.
Comment on lines +56 to +57
f"Extension chat model configured: "
f"{config.get('model_platform')}/{config.get('model_type')}"

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (password) as clear text.

Copilot Autofix


In general, to fix clear-text logging of sensitive information you should ensure that no secret-bearing data (API keys, passwords, tokens, full config dicts that contain them, etc.) is ever passed to logging calls. Instead, log only non-sensitive metadata (e.g., model names, platform identifiers, status flags) or sanitized versions of the data (e.g., masked values).

For this specific case, the best fix without changing functionality is to:

  • Keep logging that the extension chat model has been configured and which platform/type are used (these are non-sensitive).
  • Make the log string construction clearly independent of the tainted config object in a way that the static analyzer recognizes as safe. The simplest way is to extract just the non-sensitive values into local variables and log only those, ensuring we never log config itself or any field that might be sensitive.

Concretely, in backend/app/service/extension_chat_service.py, in configure_model:

  • Introduce local variables, e.g. platform = config.get('model_platform') and model_type = config.get('model_type').
  • Use these variables in the logger.info message, rather than referring to config inline in the formatted string.
  • Make sure not to add any logging that includes config, api_key, or similar sensitive values.

No other files need content changes for this specific issue, because they don’t log the config or API key.
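The recommendation boils down to something like the following sketch (the standalone function and logger names are illustrative; in the actual PR this lives inside configure_model):

```python
import logging

logger = logging.getLogger("extension_chat_service")

def log_model_configured(config: dict) -> None:
    # Extract only the non-sensitive fields; never pass `config` itself
    # (which may carry an API key or password) to the logger.
    platform = config.get("model_platform")
    model_type = config.get("model_type")
    logger.info("Extension chat model configured: %s/%s", platform, model_type)
```

Using %-style lazy arguments instead of an f-string also makes it obvious to static analysis exactly which values reach the log record.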


Suggested changeset 1
backend/app/service/extension_chat_service.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/backend/app/service/extension_chat_service.py b/backend/app/service/extension_chat_service.py
--- a/backend/app/service/extension_chat_service.py
+++ b/backend/app/service/extension_chat_service.py
@@ -52,9 +52,12 @@
     """
     global _model_config
     _model_config = config
+    platform = config.get("model_platform")
+    model_type = config.get("model_type")
     logger.info(
-        f"Extension chat model configured: "
-        f"{config.get('model_platform')}/{config.get('model_type')}"
+        "Extension chat model configured: %s/%s",
+        platform,
+        model_type,
     )
 
 
EOF




3 participants