Conversation
- Add @mention dropdown UI in input box (keyboard navigable)
- Parse {{@agentid}} tags in message content for mention badge rendering
- Backend: route @agent messages directly to persistent agents, skip workforce
- Backend: reuse persistent agents across turns (preserve toolkit state)
- Frontend: persist mention target across turns, render in input and chat bubbles
- Fix keyboard ArrowUp/Down selection in mention dropdown
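The `{{@agentid}}` tag parsing described above can be sketched as a small helper (a hedged illustration — `parse_mentions` and the exact tag grammar are assumptions, not the real eigent code):

```python
import re

# Hypothetical tag format from the PR description: {{@agentid}} embedded
# in the raw message content.
MENTION_RE = re.compile(r"\{\{@([A-Za-z0-9_-]+)\}\}")

def parse_mentions(content: str) -> tuple[str, list[str]]:
    """Return display text with tags rewritten plus the mentioned agent ids.

    The ids drive backend routing (direct-to-agent, skipping the
    workforce); the rewritten text feeds mention-badge rendering.
    """
    agent_ids = MENTION_RE.findall(content)
    display = MENTION_RE.sub(lambda m: "@" + m.group(1), content)
    return display, agent_ids
```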
Persistent agents reused across multi-turn @mention conversations were losing tool call history due to prune_tool_calls_from_memory, causing the LLM to repeat operations (e.g. get_page_snapshot) it already performed.
Move prune_tool_calls_from_memory=False from chat_service persistent agent override to browser factory directly, so all browser agents (workforce and @mention) retain tool call history.
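A minimal sketch of the factory-level fix (names like `make_browser_agent` are illustrative, not the real eigent API): setting the flag once where every browser agent is built means both the workforce path and the @mention path inherit it, and no chat_service override can be forgotten.

```python
def make_browser_agent(agent_id: str, toolkit=None) -> dict:
    """Build a browser agent config that always retains tool call history.

    Placing prune_tool_calls_from_memory=False here (instead of in a
    per-call-site override) guarantees persistent agents reused across
    multi-turn @mention conversations keep results of earlier tool calls
    such as get_page_snapshot, so the LLM does not repeat them.
    """
    return {
        "id": agent_id,
        "toolkit": toolkit,
        "prune_tool_calls_from_memory": False,
    }
```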
Resolve conflicts in:
- server/uv.lock (revision number, greenlet wheel format)
- src/store/chatStore.ts (merge target/displayContent and executionId/projectId params)
- src/components/ChatBox/BottomBox/InputBox.tsx (merge mention and trigger/expanded features)
- src/components/ChatBox/index.tsx (merge mention/direct-agent features with new unified layout)
Resolve conflicts, keeping the feature branch's mention/direct-agent additions merged with main's trigger, execution, and layout changes.
- Add ExtensionProxyWrapper support for browser_plug Chrome extension
- Extension proxy managed via dedicated Extension settings page
- Backend starts WebSocket server on connect, polls for extension status
- Multi-tab parallelism: each sub-agent gets its own browser tab
- Extension proxy mode is exclusive (no CDP fallback when connected)
- Add extension_proxy_service singleton and controller endpoints
- Use local camel package instead of PyPI version
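The "exclusive" proxy mode above can be sketched as a tiny transport selector (hypothetical names; the real decision lives in extension_proxy_service):

```python
def choose_transport(extension_connected: bool) -> str:
    """Pick the browser transport for a sub-agent.

    Per the PR: when the Chrome extension proxy is connected it is used
    exclusively, with no CDP fallback; otherwise the plain CDP path runs.
    """
    return "extension_proxy" if extension_connected else "cdp"
```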
Enable real-time token streaming from ChatAgent to Chrome extension UI by setting stream=True in model config and async-iterating over response chunks, sending each text delta immediately via STREAM_TEXT.
Includes sidepanel UI, background service worker with WebSocket connection to eigent backend, CDP debugger management, and auto-reconnect/tab-lock/settings-persistence robustness features.
TODO: record a GIF of the actions
Resolve conflicts in pyproject.toml, uv.lock, UserQueryGroup.tsx, and ChatBox/index.tsx, keeping the branch's privacy/mention features and main's package versions.
| f"Extension chat model configured: " | ||
| f"{config.get('model_platform')}/{config.get('model_type')}" |
Check failure (Code scanning / CodeQL): Clear-text logging of sensitive information (High severity)

Copilot Autofix (AI, 8 days ago):
In general, to fix clear-text logging of sensitive information you should ensure that no secret-bearing data (API keys, passwords, tokens, full config dicts that contain them, etc.) is ever passed to logging calls. Instead, log only non-sensitive metadata (e.g., model names, platform identifiers, status flags) or sanitized versions of the data (e.g., masked values).
For this specific case, the best fix without changing functionality is to:
- Keep logging that the extension chat model has been configured and which platform/type are used (these are non-sensitive).
- Make the log string construction clearly independent of the tainted `config` object in a way that the static analyzer recognizes as safe. The simplest way is to extract just the non-sensitive values into local variables and log only those, ensuring we never log `config` itself or any field that might be sensitive.

Concretely, in backend/app/service/extension_chat_service.py, in configure_model:
- Introduce local variables, e.g. `platform = config.get('model_platform')` and `model_type = config.get('model_type')`.
- Use these variables in the `logger.info` message, rather than referring to `config` inline in the formatted string.
- Make sure not to add any logging that includes `config`, `api_key`, or similar sensitive values.

No other files need content changes for this specific issue, because they don't log the config or API key.
@@ -52,9 +52,12 @@
     """
     global _model_config
     _model_config = config
+    platform = config.get("model_platform")
+    model_type = config.get("model_type")
     logger.info(
-        f"Extension chat model configured: "
-        f"{config.get('model_platform')}/{config.get('model_type')}"
+        "Extension chat model configured: %s/%s",
+        platform,
+        model_type,
     )
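The pattern the autofix applies can be shown as a self-contained sketch. `configure_model` here is a simplified stand-in for the real function in backend/app/service/extension_chat_service.py (the return value is added only so the behavior is observable):

```python
import logging

logger = logging.getLogger("extension_chat")

def configure_model(config: dict) -> str:
    """Log only non-sensitive model metadata, never the raw config dict."""
    # Extract the safe scalar fields first; the tainted config dict
    # (which may carry an api_key) never reaches the logging call.
    platform = config.get("model_platform")
    model_type = config.get("model_type")
    # Lazy %s formatting with pre-extracted values, per the CodeQL autofix.
    logger.info("Extension chat model configured: %s/%s", platform, model_type)
    return f"{platform}/{model_type}"
```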
Related Issue
Closes #
Description
Testing Evidence (REQUIRED)
What is the purpose of this pull request?
Contribution Guidelines Acknowledgement