diff --git a/.github/workflows/gh-pages.yml b/.github/workflows/gh-pages.yml
index 2a97016897..ab94b0156d 100644
--- a/.github/workflows/gh-pages.yml
+++ b/.github/workflows/gh-pages.yml
@@ -16,6 +16,7 @@ permissions:
id-token: write
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
+
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
diff --git a/docs/contributing.mdx b/docs/contributing.mdx
index 7642a60a7b..179a31efec 100644
--- a/docs/contributing.mdx
+++ b/docs/contributing.mdx
@@ -61,7 +61,7 @@ Here’s how you can help improve accessibility when you contribute:
- **Color Contrast & Visual Indicators**: Use high-contrast colors and ensure visual indicators, like focus states, are clear. You can use tools like [WebAIM Contrast Checker](https://webaim.org/resources/contrastchecker/) to verify your changes.
- **Alt Text for Images**: Provide descriptive `alt` text for images and icons when they convey meaning, but equally important, if the image conveys no information and is purely decorative, use an empty alt text (`alt=""`).
-To test you changes, you can use tools like lighthouse or the accessibility tools in your browser.
+To test your changes, you can use tools like Lighthouse or the accessibility tools in your browser.
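+
+For a quick automated check, you can also run Lighthouse from the command line (a sketch, assuming Node.js is installed and your dev server runs on the URL shown):
+
+```bash
+# Audit only the accessibility category and open the report when done
+npx lighthouse http://localhost:3000 --only-categories=accessibility --view
+```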
Let's make Open WebUI usable for *everyone*.
diff --git a/docs/enterprise/index.mdx b/docs/enterprise/index.mdx
index 11b40166d6..5145fd411f 100644
--- a/docs/enterprise/index.mdx
+++ b/docs/enterprise/index.mdx
@@ -7,44 +7,42 @@ import { Testimonals } from "@site/src/components/Testimonals";
:::tip
-## Built for Everyone, Backed by the Community
+## Built for Everyone, Backed by the Community
-Open WebUI is completely free to use as-is, with no restrictions or hidden limits.
+Open WebUI is completely free to use as-is, with no restrictions or hidden limits.
It is **independently developed** and **sustained** by its users. **Optional** licenses are available to **support** ongoing development while providing **additional benefits** for businesses.
:::
+## The AI Platform Powering the World’s Leading Organizations
-## The AI Platform Powering the World’s Leading Organizations
+In the rapidly advancing AI landscape, staying ahead isn't just a competitive advantage—it’s a necessity. Open WebUI is the **fastest-growing AI platform** designed for **seamless enterprise deployment**, helping organizations leverage cutting-edge AI capabilities with **unmatched efficiency**.
-In the rapidly advancing AI landscape, staying ahead isn't just a competitive advantage—it’s a necessity. Open WebUI is the **fastest-growing AI platform** designed for **seamless enterprise deployment**, helping organizations leverage cutting-edge AI capabilities with **unmatched efficiency**.
-
-
-## **Let’s Talk**
+## **Let’s Talk**
:::info
Enterprise licenses and partnership opportunities are available exclusively to registered entities and organizations. At this time, we are unable to accommodate individual users. We appreciate your understanding and interest.
-To help us respond quickly and efficiently to your inquiry, **please use your official work email address**—**Personal email accounts (e.g. gmail.com, hotmail.com, icloud.com, etc.) are often flagged by our system** and will not be answered.
+To help us respond quickly and efficiently to your inquiry, **please use your official work email address**—**Personal email accounts (e.g., gmail.com, hotmail.com, icloud.com, etc.) are often flagged by our system** and will not be answered.
:::
-📧 **sales@openwebui.com** — Send us your deployment **end user count (seats)**, and let’s explore how we can work together! Support available in **English & Korean (한국어), with more languages coming soon!**
+📧 **sales@openwebui.com** — Send us your deployment **end user count (seats)**, and let’s explore how we can work together! Support available in **English & Korean (한국어), with more languages coming soon!**
-Take your AI strategy to the next level with our **premium enterprise solutions**, crafted for organizations that demand **expert consulting, tailored deployment, and dedicated support.**
+Take your AI strategy to the next level with our **premium enterprise solutions**, crafted for organizations that demand **expert consulting, tailored deployment, and dedicated support.**
:::warning
### Partnership Guidelines for Agencies
-We **carefully select** our partners to maintain the **highest standards** and provide **the best experience** to our community.
+We **carefully select** our partners to maintain the **highest standards** and provide **the best experience** to our community.
-If you are a **consulting agency**, **AI services provider**, or **reseller**, please **do not** contact our enterprise sales directly. You will **not** get a response from our sales team. Instead, **fill out our partnership interest form**:
+If you are a **consulting agency**, **AI services provider**, or **reseller**, please **do not** contact our enterprise sales directly. You will **not** get a response from our sales team. Instead, **fill out our partnership interest form**:
-🔗 **[Apply Here](https://forms.gle/SemdgxjFXpHmdCby6)**
+🔗 **[Apply Here](https://forms.gle/SemdgxjFXpHmdCby6)**
Please understand:
@@ -54,14 +52,13 @@ Please understand:
- We prioritize mature organizations; companies less than 5 years old are not eligible, except in truly exceptional cases.
- Our program is currently at full capacity. We will reach out at our discretion should an opportunity arise.
-
**If you have an end-client** who is ready to move forward with us and is committed to purchasing a rebrand (enterprise) license immediately...
-Contact us directly with:
+Contact us directly with:
-- Your agency details
-- Client’s company name and official work email domain
+- Your agency details
+- Client’s company name and official work email domain
- The expected number of end users (seats) to be deployed
This will help us expedite the process and ensure we can support your client’s needs effectively.
@@ -73,80 +70,77 @@ Thank you for understanding and respecting our partnership process.
:::
-
-
-
---
-
---
-## Why Enterprises Choose Open WebUI
+## Why Enterprises Choose Open WebUI
-### 🚀 **Faster AI Innovation, No Vendor Lock-In**
-Unlike proprietary AI platforms that dictate your roadmap, **Open WebUI puts you in control**. Deploy **on-premise, in a private cloud, or hybrid environments**—without restrictive contracts.
+### 🚀 **Faster AI Innovation, No Vendor Lock-In**
+Unlike proprietary AI platforms that dictate your roadmap, **Open WebUI puts you in control**. Deploy **on-premise, in a private cloud, or hybrid environments**—without restrictive contracts.
-### 🔒 **Enterprise-Grade Security & Compliance**
-Security is a business-critical requirement. Open WebUI is built to support **SOC 2, HIPAA, GDPR, FedRAMP, and ISO 27001 compliance**, ensuring enterprise security best practices with **on-premise and air-gapped deployments**.
+### 🔒 **Enterprise-Grade Security & Compliance**
+Security is a business-critical requirement. Open WebUI is built to support **SOC 2, HIPAA, GDPR, FedRAMP, and ISO 27001 compliance**, ensuring enterprise security best practices with **on-premise and air-gapped deployments**.
-### ⚡ **Reliable, Scalable, and Performance-Optimized**
-Built for large-scale enterprise deployments with **multi-node high availability**, Open WebUI can be configured to ensure **99.99% uptime**, optimized workloads, and **scalability across regions and business units**.
+### ⚡ **Reliable, Scalable, and Performance-Optimized**
+Built for large-scale enterprise deployments with **multi-node high availability**, Open WebUI can be configured to ensure **99.99% uptime**, optimized workloads, and **scalability across regions and business units**.
-### 💡 **Fully Customizable & Modular**
-Customize every aspect of Open WebUI to fit your enterprise’s needs. **White-label, extend, and integrate** seamlessly with **your existing systems**, including **LDAP, Active Directory, and custom AI models**.
+### 💡 **Fully Customizable & Modular**
+Customize every aspect of Open WebUI to fit your enterprise’s needs. **White-label, extend, and integrate** seamlessly with **your existing systems**, including **LDAP, Active Directory, and custom AI models**.
-### 🌍 **Thriving Ecosystem with Continuous Innovation**
-With one of the **fastest iteration cycles in AI**, Open WebUI ensures your organization stays ahead with **cutting-edge features** and **continuous updates**—no waiting for long release cycles.
+### 🌍 **Thriving Ecosystem with Continuous Innovation**
+With one of the **fastest iteration cycles in AI**, Open WebUI ensures your organization stays ahead with **cutting-edge features** and **continuous updates**—no waiting for long release cycles.
---
-## **Exclusive Enterprise Features & Services**
+## **Exclusive Enterprise Features & Services**
-Open WebUI’s enterprise solutions provide mission-critical businesses with **a suite of advanced capabilities and dedicated support**, including:
+Open WebUI’s enterprise solutions provide mission-critical businesses with **a suite of advanced capabilities and dedicated support**, including:
-### 🔧 **Enterprise-Grade Support & SLAs**
-✅ **Priority SLA Support** – **24/7 support — Available in English and Korean (한국어)** with dedicated response times for mission-critical issues.
-✅ **Dedicated Account Manager** – A **single point of contact** for guidance, onboarding, and strategy.
-✅ **Exclusive Office Hours with Core Engineers** – Directly work with the engineers evolving Open WebUI.
+### 🔧 **Enterprise-Grade Support & SLAs**
+✅ **Priority SLA Support** – **24/7 support — Available in English and Korean (한국어)** with dedicated response times for mission-critical issues.
+✅ **Dedicated Account Manager** – A **single point of contact** for guidance, onboarding, and strategy.
+✅ **Exclusive Office Hours with Core Engineers** – Directly work with the engineers evolving Open WebUI.
-### ⚙ **Customization & AI Model Optimization**
-✅ **Custom Theming & Branding** – White-label Open WebUI to **reflect your enterprise identity**.
-✅ **Custom AI Model Integration & Fine-Tuning** – Integrate **proprietary** or **third-party** AI models tailored for your workflows.
-✅ **Private Feature Development** – Work directly with our team to **build custom features** specific to your organization’s needs.
+### ⚙ **Customization & AI Model Optimization**
+✅ **Custom Theming & Branding** – White-label Open WebUI to **reflect your enterprise identity**.
+✅ **Custom AI Model Integration & Fine-Tuning** – Integrate **proprietary** or **third-party** AI models tailored for your workflows.
+✅ **Private Feature Development** – Work directly with our team to **build custom features** specific to your organization’s needs.
-### 🛡️ **Advanced Security & Compliance**
-✅ **On-Premise & Air-Gapped Deployments** – Full control over data, hosted in **your infrastructure**.
-✅ **Security Hardening & Compliance Audits** – Receive **customized compliance assessments** and configurations.
-✅ **Role-Based Access Control (RBAC)** – Enterprise-ready **SSO, LDAP, and IAM** integration.
+### 🛡️ **Advanced Security & Compliance**
+✅ **On-Premise & Air-Gapped Deployments** – Full control over data, hosted in **your infrastructure**.
+✅ **Security Hardening & Compliance Audits** – Receive **customized compliance assessments** and configurations.
+✅ **Role-Based Access Control (RBAC)** – Enterprise-ready **SSO, LDAP, and IAM** integration.
-### 🏗️ **Operational Reliability & Deployment Services**
-✅ **Managed Deployments** – Our team helps you **deploy Open WebUI effortlessly**, whether **on-premise, hybrid, or cloud**.
-✅ **Version Stability & Long-Term Maintenance** – Enterprise customers receive **LTS (Long-Term Support) versions** for managed **stability and security** over time.
-✅ **Enterprise Backups & Disaster Recovery** – High availability with structured backup plans and rapid recovery strategies.
+### 🏗️ **Operational Reliability & Deployment Services**
+✅ **Managed Deployments** – Our team helps you **deploy Open WebUI effortlessly**, whether **on-premise, hybrid, or cloud**.
+✅ **Version Stability & Long-Term Maintenance** – Enterprise customers receive **LTS (Long-Term Support) versions** for managed **stability and security** over time.
+✅ **Enterprise Backups & Disaster Recovery** – High availability with structured backup plans and rapid recovery strategies.
-### 📚 **Enterprise Training, Workshops & Consulting**
-✅ **AI Training & Enablement** – Expert-led **workshops for your engineering and data science teams**.
-✅ **Operational AI Consulting** – On-demand **architecture, optimization, and deployment consulting**.
-✅ **Strategic AI Roadmap Planning** – Work with our experts to **define your AI transformation strategy**.
+### 📚 **Enterprise Training, Workshops & Consulting**
+✅ **AI Training & Enablement** – Expert-led **workshops for your engineering and data science teams**.
+✅ **Operational AI Consulting** – On-demand **architecture, optimization, and deployment consulting**.
+✅ **Strategic AI Roadmap Planning** – Work with our experts to **define your AI transformation strategy**.
---
-## **Keep Open WebUI Thriving: Support Continuous Innovation**
+## **Keep Open WebUI Thriving: Support Continuous Innovation**
:::tip
-Even if you **don’t need an enterprise license**, consider becoming a **sponsor** to help fund continuous development.
-It’s an **investment in stability, longevity, and ongoing improvements**. A well-funded Open WebUI means **fewer bugs, fewer security concerns, and a more feature-rich platform** that stays ahead of industry trends. The cost of sponsoring is **a fraction of what it would take to build, maintain, and support an equivalent AI system internally.**
+Even if you **don’t need an enterprise license**, consider becoming a **sponsor** to help fund continuous development.
+
+It’s an **investment in stability, longevity, and ongoing improvements**. A well-funded Open WebUI means **fewer bugs, fewer security concerns, and a more feature-rich platform** that stays ahead of industry trends. The cost of sponsoring is **a fraction of what it would take to build, maintain, and support an equivalent AI system internally.**
:::
-You can use Open WebUI for free, no strings attached. However, building, maintaining, supporting, and evolving such a powerful AI platform requires **significant effort, time, and resources**. Infrastructure costs, security updates, continuous improvements, and keeping up with the latest AI advancements all demand **dedicated engineering, operational, and research efforts**.
+You can use Open WebUI for free, no strings attached. However, building, maintaining, supporting, and evolving such a powerful AI platform requires **significant effort, time, and resources**. Infrastructure costs, security updates, continuous improvements, and keeping up with the latest AI advancements all demand **dedicated engineering, operational, and research efforts**.
If Open WebUI helps your business save time, money, or resources, we **encourage** you to consider supporting its development. As an **independently funded** project, sponsorship enables us to maintain **a fast iteration cycle to keep up with the rapid advancements in AI**. Your support directly contributes to critical features, security enhancements, performance improvements, and integrations that benefit everyone—including **you**. Open WebUI will continue to offer the same feature set without requiring an enterprise license, ensuring **accessibility for all users**.
-💙 **[Sponsor Open WebUI](https://github.com/sponsors/tjbck)** – Join our existing backers in keeping Open WebUI thriving.
+💙 **[Sponsor Open WebUI](https://github.com/sponsors/tjbck)** – Join our existing backers in keeping Open WebUI thriving.
-Whether through **enterprise partnerships, contributions, or financial backing**, your support plays a crucial role in sustaining this powerful AI platform for businesses **worldwide**.
+Whether through **enterprise partnerships, contributions, or financial backing**, your support plays a crucial role in sustaining this powerful AI platform for businesses **worldwide**.
diff --git a/docs/faq.mdx b/docs/faq.mdx
index c560544e31..c4ea4f2951 100644
--- a/docs/faq.mdx
+++ b/docs/faq.mdx
@@ -117,7 +117,7 @@ In summary: MCP is supported — as long as the MCP Tool Server is fronted by an
To stay informed, you can follow release notes and announcements on our [GitHub Releases page](https://github.com/open-webui/open-webui/releases).
-### **Q: Is Open WebUI scalable for large organizations or enterprise deployments?**
+### **Q: Is Open WebUI scalable for large organizations or enterprise deployments?**
**A:** Yes—**Open WebUI is architected for massive scalability and production readiness.** It’s already trusted in deployments supporting extremely high user counts—**think tens or even hundreds of thousands of seats**—used by universities, multinational enterprises, and major organizations worldwide.
@@ -125,7 +125,7 @@ Open WebUI’s stateless, container-first architecture means you’re never bott
With the right infrastructure configuration, Open WebUI will effortlessly scale from pilot projects to mission-critical worldwide rollouts.
-### **Q: How can I deploy Open WebUI in a highly available, large-scale production environment?**
+### **Q: How can I deploy Open WebUI in a highly available, large-scale production environment?**
**A:** For organizations with demanding uptime and scale requirements, Open WebUI is designed to plug into modern production environments:
@@ -134,9 +134,9 @@ With the right infrastructure configuration, Open WebUI will effortlessly scale
- **Integration with enterprise authentication** (like SSO/OIDC/LDAP) for seamless and secure login
- **Observability and monitoring** via modern log/metrics tools
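+
+As a minimal sketch of what this looks like in practice, every replica is stateless and points at the same external database and Redis instance (the environment variable names follow Open WebUI's environment configuration; image tag, credentials, hostnames, and ports are illustrative assumptions):
+
+```bash
+# Two interchangeable Open WebUI replicas sharing one Postgres and one Redis;
+# front them with any load balancer for high availability
+for i in 1 2; do
+  docker run -d --name "open-webui-$i" \
+    -e DATABASE_URL="postgresql://user:pass@db:5432/openwebui" \
+    -e ENABLE_WEBSOCKET_SUPPORT="true" \
+    -e WEBSOCKET_MANAGER="redis" \
+    -e WEBSOCKET_REDIS_URL="redis://redis:6379/0" \
+    -p "300$i:8080" \
+    ghcr.io/open-webui/open-webui:main
+done
+```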
-If you’re planning a high-availability, enterprise-grade deployment, we recommend reviewing this excellent community resource:
+If you’re planning a high-availability, enterprise-grade deployment, we recommend reviewing this excellent community resource:
-👉 [The SRE's Guide to High Availability Open WebUI Deployment Architecture](http://taylorwilsdon.medium.com/the-sres-guide-to-high-availability-open-webui-deployment-architecture-2ee42654eced)
+👉 [The SRE's Guide to High Availability Open WebUI Deployment Architecture](http://taylorwilsdon.medium.com/the-sres-guide-to-high-availability-open-webui-deployment-architecture-2ee42654eced)
*(This provides a strong technical overview and best practices for large-scale Open WebUI architecture.)*
Open WebUI is designed from day one to not just handle, but thrive at scale—serving large organizations, universities, and enterprises worldwide.
diff --git a/docs/features/banners.md b/docs/features/banners.md
index a5f56a54e7..c7576210a5 100644
--- a/docs/features/banners.md
+++ b/docs/features/banners.md
@@ -4,12 +4,12 @@ title: "🔰 Customizable Banners"
---
Overview
---------
+---
Open WebUI provides a feature that allows administrators to create customizable banners with persistence in the `config.json` file. Each banner supports options for content, background color (info, warning, error, or success), and dismissibility. Banners are accessible only to logged-in users, ensuring the confidentiality of sensitive information.
Configuring Banners through the Admin Panel
----------------------------------------------
+---
To configure banners through the admin panel, follow these steps:
@@ -23,7 +23,7 @@ To configure banners through the admin panel, follow these steps:
8. Press `Save` at the bottom of the page to save the banner.
Configuring Banners through Environment Variables
-------------------------------------------------
+---
Alternatively, you can configure banners through environment variables. To do this, you will need to set the `WEBUI_BANNERS` environment variable with a list of dictionaries in the following format:
@@ -34,31 +34,31 @@ Alternatively, you can configure banners through environment variables. To do th
For more information on configuring environment variables in Open WebUI, see [Environment Variable Configuration](https://docs.openwebui.com/getting-started/env-configuration#webui_banners).
Environment Variable Description
----------------------------------
+---
-* `WEBUI_BANNERS`:
- * Type: list of dict
- * Default: `[]`
- * Description: List of banners to show to users.
+- `WEBUI_BANNERS`:
+ - Type: list of dict
+ - Default: `[]`
+ - Description: List of banners to show to users.
Banner Options
-----------------
+---
-* `id`: Unique identifier for the banner.
-* `type`: Background color of the banner (info, success, warning, error).
-* `title`: Title of the banner.
-* `content`: Content of the banner.
-* `dismissible`: Whether the banner is dismissible or not.
-* `timestamp`: Timestamp for the banner (optional).
+- `id`: Unique identifier for the banner.
+- `type`: Background color of the banner (info, success, warning, error).
+- `title`: Title of the banner.
+- `content`: Content of the banner.
+- `dismissible`: Whether the banner is dismissible or not.
+- `timestamp`: Timestamp for the banner (optional).
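+
+For example, a single dismissible info banner could be configured as follows (shown as a shell export; the values are illustrative):
+
+```bash
+export WEBUI_BANNERS='[{"id": "maintenance-notice", "type": "info", "title": "Maintenance", "content": "Scheduled maintenance on Saturday at 02:00 UTC.", "dismissible": true, "timestamp": 1727000000}]'
+```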
FAQ
-----
+---
-* Q: Can I configure banners through the admin panel?
+- Q: Can I configure banners through the admin panel?
A: Yes, you can configure banners through the admin panel by navigating to `Admin Panel` -> `Settings` -> `Interface` and clicking on the `+` icon to add a new banner.
-* Q: Can I configure banners through environment variables?
+- Q: Can I configure banners through environment variables?
A: Yes, you can configure banners through environment variables by setting the `WEBUI_BANNERS` environment variable with a list of dictionaries.
-* Q: What is the format for the `WEBUI_BANNERS` environment variable?
+- Q: What is the format for the `WEBUI_BANNERS` environment variable?
A: The format for the `WEBUI_BANNERS` environment variable is a list of dictionaries with the following keys: `id`, `type`, `title`, `content`, `dismissible`, and `timestamp`.
-* Q: Can I make banners dismissible?
+- Q: Can I make banners dismissible?
A: Yes, you can make banners dismissible by setting the `dismissible` key to `True` in the banner configuration or by toggling dismissibility for a banner within the UI.
diff --git a/docs/features/chat-features/chat-params.md b/docs/features/chat-features/chat-params.md
index f5605eb965..d4d782bf12 100644
--- a/docs/features/chat-features/chat-params.md
+++ b/docs/features/chat-features/chat-params.md
@@ -21,8 +21,12 @@ Within Open WebUI, there are three levels to setting a **System Prompt** and **A
Example Use Case
-:::tip **Per-chat basis**:
+
+:::tip
+
+**Per-chat basis**:
Suppose a user wants to set a custom system prompt for a specific conversation. They can do so by accessing the **Chat Controls** section and modifying the **System Prompt** field. These changes will only apply to the current chat session.
+
:::
@@ -34,8 +38,12 @@ Suppose a user wants to set a custom system prompt for a specific conversation.
Example Use Case
-:::tip **Per-account basis**:
+
+:::tip
+
+**Per-account basis**:
Suppose a user wants to set their own system prompt for their account. They can do so by accessing the **Settings** menu and modifying the **System Prompt** field.
+
:::
@@ -49,15 +57,20 @@ Suppose a user wants to set their own system prompt for their account. They can
Example Use Case
-:::tip **Per-model basis**:
+
+:::tip
+
+**Per-model basis**:
Suppose an administrator wants to set a default system prompt for a specific model. They can do so by accessing the **Models** section and modifying the **System Prompt** field for the corresponding model. Any chat instances using this model will automatically use the model's system prompt and advanced parameters.
+
:::
-
## **Optimize System Prompt Settings for Maximum Flexibility**
-:::tip **Bonus Tips**
+:::tip
+
+**Bonus Tips**
**This tip applies to both administrator and user accounts. To achieve maximum flexibility with your system prompts, we recommend considering the following setup:**

- Assign the primary System Prompt you want to use (**i.e., one that gives an LLM a defining character**) in your **General** settings **System Prompt** field. This sets it on a per-account level and allows it to work as the system prompt across all your LLMs without requiring adjustments within a model from the **Workspace** section.
@@ -65,4 +78,5 @@ Suppose an administrator wants to set a default system prompt for a specific mod
- For your secondary System Prompt (**i.e., one that gives an LLM a task to perform**), choose whether to place it in the **System Prompt** field within the **Chat Controls** sidebar (on a per-chat basis) or, for Admins, in the **Models** section of the **Workspace** (on a per-model basis). This allows your account-level system prompt to work in conjunction with either the per-chat system prompt provided by **Chat Controls** or the per-model system prompt provided by **Models**.
- As an administrator, you should assign your LLM parameters on a per-model basis using the **Models** section for optimal flexibility. For both of these secondary System Prompts, be sure to set them in a manner that maximizes flexibility and minimizes required adjustments across different per-account or per-chat instances. It is essential for both your Admin account and all User accounts to understand the priority order in which the system prompts from **Chat Controls** and the **Models** section are applied to the **LLM**.
+
:::
diff --git a/docs/features/chat-features/chatshare.md b/docs/features/chat-features/chatshare.md
index ef3405d3be..a99d77567d 100644
--- a/docs/features/chat-features/chatshare.md
+++ b/docs/features/chat-features/chatshare.md
@@ -12,7 +12,9 @@ To enable community sharing, follow these steps:
3. Toggle on **Enable Community Sharing** within the **General** settings tab.
:::note
+
**Note:** Only Admins can toggle the **Enable Community Sharing** option. If this option is toggled off, users and Admins will not see the **Share to Open WebUI Community** option for sharing their chats. Community sharing must be enabled by an Admin for users to share chats to the Open WebUI community.
+
:::
This will enable community sharing for all users on your Open WebUI instance.
@@ -30,16 +32,18 @@ To share a chat:
When you select `Share to Open WebUI Community`:
-* A new tab will open, allowing you to upload your chat as a snapshot to the Open WebUI community website (https://openwebui.com/chats/upload).
-* You can control who can view your uploaded chat by choosing from the following access settings:
- * **Private**: Only you can access this chat.
- * **Public**: Anyone on the internet can view the messages displayed in the chat snapshot.
- * **Public, Full History**: Anyone on the internet can view the full regeneration history of your chat.
+- A new tab will open, allowing you to upload your chat as a snapshot to the Open WebUI community website (https://openwebui.com/chats/upload).
+- You can control who can view your uploaded chat by choosing from the following access settings:
+ - **Private**: Only you can access this chat.
+ - **Public**: Anyone on the internet can view the messages displayed in the chat snapshot.
+ - **Public, Full History**: Anyone on the internet can view the full regeneration history of your chat.
:::note
+
Note: You can change the permission level of your shared chats on the community platform at any time from your openwebui.com account.
**Currently, shared chats on the community website cannot be found through search. However, future updates are planned to allow public chats to be discoverable by the public if their permission is set to `Public` or `Public, Full History`.**
+
:::
Example of a chat shared to the community platform website: https://openwebui.com/c/iamg30/5e3c569f-905e-4d68-a96d-8a99cc65c90f
@@ -50,12 +54,12 @@ When you select `Copy Link`, a unique share link is generated that can be shared
**Important Considerations:**
-* The shared chat will only include messages that existed at the time the link was created. Any new messages sent within the chat after the link is generated will not be included, unless the link is deleted and updated with a new link.
-* The generated share link acts as a static snapshot of the chat at the time the link was generated.
-* To view the shared chat, users must:
+- The shared chat will only include messages that existed at the time the link was created. Any new messages sent within the chat after the link is generated will not be included, unless the link is deleted and updated with a new link.
+- The generated share link acts as a static snapshot of the chat at the time the link was generated.
+- To view the shared chat, users must:
1. Have an account on the Open WebUI instance where the link was generated.
2. Be signed in to their account on that instance.
-* If a user tries to access the shared link without being signed in, they will be redirected to the login page to log in before they can view the shared chat.
+- If a user tries to access the shared link without being signed in, they will be redirected to the login page to log in before they can view the shared chat.
### Viewing Shared Chats
@@ -77,10 +81,10 @@ To update a shared chat:
The **Share Chat** Modal includes the following options:
-* **before**: Opens a new tab to view the previously shared chat from the share link.
-* **delete this link**: Deletes the shared link of the chat and presents the initial share chat modal.
-* **Share to Open WebUI Community**: Opens a new tab for https://openwebui.com/chats/upload with the chat ready to be shared as a snapshot.
-* **Update and Copy Link**: Updates the snapshot of the chat of the previously shared chat link and copies it to your device's clipboard.
+- **before**: Opens a new tab to view the previously shared chat from the share link.
+- **delete this link**: Deletes the shared link of the chat and presents the initial share chat modal.
+- **Share to Open WebUI Community**: Opens a new tab for https://openwebui.com/chats/upload with the chat ready to be shared as a snapshot.
+- **Update and Copy Link**: Updates the snapshot of the chat of the previously shared chat link and copies it to your device's clipboard.
### Deleting Shared Chats
@@ -95,7 +99,9 @@ To delete a shared chat link:
Once deleted, the shared link will no longer be valid, and users will not be able to view the shared chat.
:::note
+
**Note:** Chats shared to the community platform cannot be deleted. To change the access level of a chat shared to the community platform:
+
:::
1. Log in to your Open WebUI account on openwebui.com.
diff --git a/docs/features/chat-features/conversation-organization.md b/docs/features/chat-features/conversation-organization.md
index 3b2d88d06d..54b26288ee 100644
--- a/docs/features/chat-features/conversation-organization.md
+++ b/docs/features/chat-features/conversation-organization.md
@@ -32,13 +32,16 @@ You can give each folder a unique personality and context. By hovering over a fo
### Example Use Case
-:::tip **Creating a 'Python Expert' Project**
+:::tip
+
+**Creating a 'Python Expert' Project**
Imagine you are working on a Python project. You can create a folder called "Python Expert".
-1. **Edit the folder** and set the System Prompt to something like: `You are an expert Python developer. You provide clean, efficient, and well-documented code. When asked for code, you prioritize clarity and adherence to PEP 8 standards.`
-2. **Attach Knowledge** by linking a knowledge base which contains a PDF of your project's technical specification, or a specific library's documentation.
-3. **Activate/Select the folder** by clicking on it.
-4. Now, any new chat you start will automatically have this expert persona, the context of your documents and is saved within the folder, ensuring you get highly relevant and specialized assistance for your project.
+1. **Edit the folder** and set the System Prompt to something like: `You are an expert Python developer. You provide clean, efficient, and well-documented code. When asked for code, you prioritize clarity and adherence to PEP 8 standards.`
+2. **Attach Knowledge** by linking a knowledge base which contains a PDF of your project's technical specification, or a specific library's documentation.
+3. **Activate/Select the folder** by clicking on it.
+4. Now, any new chat you start will automatically have this expert persona and the context of your documents, and will be saved within the folder, ensuring you get highly relevant and specialized assistance for your project.
+
:::
## Tagging Conversations
@@ -51,6 +54,9 @@ Tags provide an additional layer of organization by allowing you to label conver
### Example Use Case
-:::tip **Tagging by Topic**
+:::tip
+
+**Tagging by Topic**
If you frequently discuss certain topics, such as "marketing" or "development," you can tag conversations with these terms. Later, when you search for a specific tag, all relevant conversations will be quickly accessible.
+
:::
diff --git a/docs/features/chat-features/url-params.md b/docs/features/chat-features/url-params.md
index b9af303eea..de1f41bdda 100644
--- a/docs/features/chat-features/url-params.md
+++ b/docs/features/chat-features/url-params.md
@@ -11,17 +11,17 @@ The following table lists the available URL parameters, their function, and exam
| **Parameter** | **Description** | **Example** |
|-----------------------|-----------------------------------------------------------------------------------|----------------------------------|
-| `models` | Specifies the models to be used, as a comma-separated list. | `/?models=model1,model2` |
-| `model` | Specifies a single model to be used for the chat session. | `/?model=model1` |
-| `youtube` | Specifies a YouTube video ID to be transcribed within the chat. | `/?youtube=VIDEO_ID` |
+| `models` | Specifies the models to be used, as a comma-separated list. | `/?models=model1,model2` |
+| `model` | Specifies a single model to be used for the chat session. | `/?model=model1` |
+| `youtube` | Specifies a YouTube video ID to be transcribed within the chat. | `/?youtube=VIDEO_ID` |
| `load-url` | Specifies a Website URL to be fetched and uploaded as a document within the chat. | `/?load-url=https://google.com` |
-| `web-search` | Enables web search functionality if set to `true`. | `/?web-search=true` |
-| `tools` or `tool-ids` | Specifies a comma-separated list of tool IDs to activate in the chat. | `/?tools=tool1,tool2` |
-| `call` | Enables a call overlay if set to `true`. | `/?call=true` |
-| `q` | Sets an initial query or prompt for the chat. | `/?q=Hello%20there` |
-| `temporary-chat` | Marks the chat as temporary if set to `true`, for one-time sessions. | `/?temporary-chat=true` |
-| `code-interpreter` | Enables the code interpreter feature if set to `true`. | `/?code-interpreter=true` |
-| `image-generation` | Enables the image generation feature if set to `true`. | `/?image-generation=true` |
+| `web-search` | Enables web search functionality if set to `true`. | `/?web-search=true` |
+| `tools` or `tool-ids` | Specifies a comma-separated list of tool IDs to activate in the chat. | `/?tools=tool1,tool2` |
+| `call` | Enables a call overlay if set to `true`. | `/?call=true` |
+| `q` | Sets an initial query or prompt for the chat. | `/?q=Hello%20there` |
+| `temporary-chat` | Marks the chat as temporary if set to `true`, for one-time sessions. | `/?temporary-chat=true` |
+| `code-interpreter` | Enables the code interpreter feature if set to `true`. | `/?code-interpreter=true` |
+| `image-generation` | Enables the image generation feature if set to `true`. | `/?image-generation=true` |
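+
+Multiple parameters can be combined in a single URL with `&`, as in this sketch (the host and model ID are illustrative):
+
+```bash
+# Open a chat with a preselected model, web search enabled, and a prefilled prompt
+xdg-open "http://localhost:3000/?model=llama3.1&web-search=true&q=Summarize%20this%20week%27s%20AI%20news"
+```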
### 1. **Models and Model Selection**
@@ -96,8 +96,12 @@ The following table lists the available URL parameters, their function, and exam
Example Use Case
-:::tip **Temporary Chat Session**
+
+:::tip
+
+**Temporary Chat Session**
Suppose a user wants to initiate a quick chat session without saving the history. They can do so by setting `temporary-chat=true` in the URL. This provides a disposable chat environment ideal for one-time interactions.
+
:::
diff --git a/docs/features/code-execution/artifacts.md b/docs/features/code-execution/artifacts.md
index 917d0797fb..d2e5e65243 100644
--- a/docs/features/code-execution/artifacts.md
+++ b/docs/features/code-execution/artifacts.md
@@ -3,7 +3,6 @@ sidebar_position: 1
title: "🏺 Artifacts"
---
-
# What are Artifacts and how do I use them in Open WebUI?
Artifacts in Open WebUI are an innovative feature inspired by Claude.AI's Artifacts, allowing you to interact with significant and standalone content generated by an LLM within a chat. They enable you to view, modify, build upon, or reference substantial pieces of content separately from the main conversation, making it easier to work with complex outputs and ensuring that you can revisit important information later.
@@ -14,10 +13,10 @@ Open WebUI creates an Artifact when the generated content meets specific criteri
1. **Renderable**: To be displayed as an Artifact, the content must be in a format that Open WebUI supports for rendering. This includes:
-* Single-page HTML websites
-* Scalable Vector Graphics (SVG) images
-* Complete webpages, which include HTML, Javascript, and CSS all in the same Artifact. Do note that HTML is required if generating a complete webpage.
-* ThreeJS Visualizations and other JavaScript visualization libraries such as D3.js.
+- Single-page HTML websites
+- Scalable Vector Graphics (SVG) images
+- Complete webpages, which include HTML, JavaScript, and CSS all in the same Artifact. Note that HTML is required when generating a complete webpage.
+- ThreeJS Visualizations and other JavaScript visualization libraries such as D3.js.
Other content types like Documents (Markdown or Plain Text), Code snippets, and React components are not rendered as Artifacts by Open WebUI.
@@ -29,39 +28,39 @@ To use artifacts in Open WebUI, a model must provide content that triggers the r
When Open WebUI creates an Artifact, you'll see the content displayed in a dedicated Artifacts window to the right side of the main chat. Here's how to interact with Artifacts:
-* **Editing and iterating**: Ask an LLM within the chat to edit or iterate on the content, and these updates will be displayed directly in the Artifact window. You can switch between versions using the version selector at the bottom left of the Artifact. Each edit creates a new version, allowing you to track changes using the version selector.
-* **Updates**: Open WebUI may update an existing Artifact based on your messages. The Artifact window will display the latest content.
-* **Actions**: Access additional actions for the Artifact, such as copying the content or opening the artifact in full screen, located in the lower right corner of the Artifact.
+- **Editing and iterating**: Ask an LLM within the chat to edit or iterate on the content, and these updates will be displayed directly in the Artifact window. You can switch between versions using the version selector at the bottom left of the Artifact. Each edit creates a new version, allowing you to track changes using the version selector.
+- **Updates**: Open WebUI may update an existing Artifact based on your messages. The Artifact window will display the latest content.
+- **Actions**: Access additional actions for the Artifact, such as copying the content or opening the artifact in full screen, located in the lower right corner of the Artifact.
## Editing Artifacts
1. **Targeted Updates**: Describe what you want changed and where. For example:
-* "Change the color of the bar in the chart from blue to red."
-* "Update the title of the SVG image to 'New Title'."
+- "Change the color of the bar in the chart from blue to red."
+- "Update the title of the SVG image to 'New Title'."
2. **Full Rewrites**: Request major changes affecting most of the content for substantial restructuring or multiple section updates. For example:
-* "Rewrite this single-page HTML website to be a landing page instead."
-* "Redesign this SVG so that it's animated using ThreeJS."
+- "Rewrite this single-page HTML website to be a landing page instead."
+- "Redesign this SVG so that it's animated using ThreeJS."
**Best Practices**:
-* Be specific about which part of the Artifact you want to change.
-* Reference unique identifying text around your desired change for targeted updates.
-* Consider whether a small update or full rewrite is more appropriate for your needs.
+- Be specific about which part of the Artifact you want to change.
+- Reference unique identifying text around your desired change for targeted updates.
+- Consider whether a small update or full rewrite is more appropriate for your needs.
## Use Cases
Artifacts in Open WebUI enable various teams to create high-quality work products quickly and efficiently. Here are some examples tailored to our platform:
-* **Designers**:
- * Create interactive SVG graphics for UI/UX design.
- * Design single-page HTML websites or landing pages.
-* **Developers**: Create simple HTML prototypes or generate SVG icons for projects.
-* **Marketers**:
- * Design campaign landing pages with performance metrics.
- * Create SVG graphics for ad creatives or social media posts.
+- **Designers**:
+ - Create interactive SVG graphics for UI/UX design.
+ - Design single-page HTML websites or landing pages.
+- **Developers**: Create simple HTML prototypes or generate SVG icons for projects.
+- **Marketers**:
+ - Design campaign landing pages with performance metrics.
+ - Create SVG graphics for ad creatives or social media posts.
## Examples of Projects you can create with Open WebUI's Artifacts
@@ -69,35 +68,35 @@ Artifacts in Open WebUI enable various teams and individuals to create high-qual
1. **Interactive Visualizations**
-* Components used: ThreeJS, D3.js, and SVG
-* Benefits: Create immersive data stories with interactive visualizations. Open WebUI's Artifacts enable you to switch between versions, making it easier to test different visualization approaches and track changes.
+- Components used: ThreeJS, D3.js, and SVG
+- Benefits: Create immersive data stories with interactive visualizations. Open WebUI's Artifacts enable you to switch between versions, making it easier to test different visualization approaches and track changes.
Example Project: Build an interactive line chart using ThreeJS to visualize stock prices over time. Update the chart's colors and scales in separate versions to compare different visualization strategies.
2. **Single-Page Web Applications**
-* Components used: HTML, CSS, and JavaScript
-* Benefits: Develop single-page web applications directly within Open WebUI. Edit and iterate on the content using targeted updates and full rewrites.
+- Components used: HTML, CSS, and JavaScript
+- Benefits: Develop single-page web applications directly within Open WebUI. Edit and iterate on the content using targeted updates and full rewrites.
Example Project: Design a to-do list app with a user interface built using HTML and CSS. Use JavaScript to add interactive functionality. Update the app's layout and UI/UX using targeted updates and full rewrites.
3. **Animated SVG Graphics**
-* Components used: SVG and ThreeJS
-* Benefits: Create engaging animated SVG graphics for marketing campaigns, social media, or web design. Open WebUI's Artifacts enable you to edit and iterate on the graphics in a single window.
+- Components used: SVG and ThreeJS
+- Benefits: Create engaging animated SVG graphics for marketing campaigns, social media, or web design. Open WebUI's Artifacts enable you to edit and iterate on the graphics in a single window.
Example Project: Design an animated SVG logo for a company brand. Use ThreeJS to add animation effects and Open WebUI's targeted updates to refine the logo's colors and design.
4. **Webpage Prototypes**
-* Components used: HTML, CSS, and JavaScript
-* Benefits: Build and test webpage prototypes directly within Open WebUI. Switch between versions to compare different design approaches and refine the prototype.
+- Components used: HTML, CSS, and JavaScript
+- Benefits: Build and test webpage prototypes directly within Open WebUI. Switch between versions to compare different design approaches and refine the prototype.
Example Project: Develop a prototype for a new e-commerce website using HTML, CSS, and JavaScript. Use Open WebUI's targeted updates to refine the navigation, layout, and UI/UX.
5. **Interactive Storytelling**
-* Components used: HTML, CSS, and JavaScript
-* Benefits: Create interactive stories with scrolling effects, animations, and other interactive elements. Open WebUI's Artifacts enable you to refine the story and test different versions.
+- Components used: HTML, CSS, and JavaScript
+- Benefits: Create interactive stories with scrolling effects, animations, and other interactive elements. Open WebUI's Artifacts enable you to refine the story and test different versions.
Example Project: Build an interactive story about a company's history, using scrolling effects and animations to engage the reader. Use targeted updates to refine the story's narrative and Open WebUI's version selector to test different versions.
diff --git a/docs/features/code-execution/mermaid.md b/docs/features/code-execution/mermaid.md
index b54410856c..c57ef5b1c3 100644
--- a/docs/features/code-execution/mermaid.md
+++ b/docs/features/code-execution/mermaid.md
@@ -13,8 +13,8 @@ Open WebUI supports rendering of visually appealing MermaidJS diagrams, flowchar
To generate a MermaidJS diagram, simply ask an LLM within any chat to create a diagram or chart using MermaidJS. For example, you can ask the LLM to:
-* "Create a flowchart for a simple decision-making process for me using Mermaid. Explain how the flowchart works."
-* "Use Mermaid to visualize a decision tree to determine whether it's suitable to go for a walk outside."
+- "Create a flowchart for a simple decision-making process for me using Mermaid. Explain how the flowchart works."
+- "Use Mermaid to visualize a decision tree to determine whether it's suitable to go for a walk outside."
Note that for the LLM's response to be rendered correctly, it must begin with the word `mermaid` followed by the MermaidJS code. You can reference the [MermaidJS documentation](https://mermaid.js.org/intro/) to ensure the syntax is correct and provide structured prompts to the LLM to guide it towards generating better MermaidJS syntax.
@@ -28,8 +28,8 @@ If the model generates MermaidJS syntax, but the visualization does not render,
Once your visualization is displayed, you can:
-* Zoom in and out to examine it more closely.
-* Copy the original MermaidJS code used to generate the visualization by clicking the copy button at the top-right corner of the display area.
+- Zoom in and out to examine it more closely.
+- Copy the original MermaidJS code used to generate the visualization by clicking the copy button at the top-right corner of the display area.
### Example
diff --git a/docs/features/code-execution/python.md b/docs/features/code-execution/python.md
index 12d8574b64..ddedd1f97d 100644
--- a/docs/features/code-execution/python.md
+++ b/docs/features/code-execution/python.md
@@ -17,16 +17,16 @@ The Open WebUI frontend includes a self-contained WASM (WebAssembly) Python envi
Pyodide code execution is configured to load only the packages listed in scripts/prepare-pyodide.js, which are then registered in "CodeBlock.svelte". The following Pyodide packages are currently supported in Open WebUI:
-* micropip
-* packaging
-* requests
-* beautifulsoup4
-* numpy
-* pandas
-* matplotlib
-* scikit-learn
-* scipy
-* regex
+- micropip
+- packaging
+- requests
+- beautifulsoup4
+- numpy
+- pandas
+- matplotlib
+- scikit-learn
+- scipy
+- regex
These libraries can be used to perform various tasks, such as data manipulation, machine learning, and web scraping. If a package is not compiled into the Pyodide distribution that ships with Open WebUI, it cannot be used.
@@ -36,13 +36,13 @@ To execute Python code, ask an LLM within a chat to write a Python script for yo
## Tips for Using Python Code Execution
-* When writing Python code, keep in mind that the code would be running in a Pyodide environment when executed. You can inform the LLM of this by mentioning "Pyodide environment" when asking for code.
-* Research the Pyodide documentation to understand the capabilities and limitations of the environment.
-* Experiment with different libraries and scripts to explore the possibilities of Python code execution in Open WebUI.
+- When writing Python code, keep in mind that the code would be running in a Pyodide environment when executed. You can inform the LLM of this by mentioning "Pyodide environment" when asking for code.
+- Research the Pyodide documentation to understand the capabilities and limitations of the environment.
+- Experiment with different libraries and scripts to explore the possibilities of Python code execution in Open WebUI.
## Pyodide Documentation
-* [Pyodide Documentation](https://pyodide.org/en/stable/)
+- [Pyodide Documentation](https://pyodide.org/en/stable/)
## Code Example
@@ -52,7 +52,7 @@ Here is an example of a simple Python script that can be executed using Pyodide:
import pandas as pd
# Create a sample DataFrame
-data = {'Name': ['John', 'Anna', 'Peter'],
+data = {'Name': ['John', 'Anna', 'Peter'],
'Age': [28, 24, 35]}
df = pd.DataFrame(data)
diff --git a/docs/features/document-extraction/apachetika.md b/docs/features/document-extraction/apachetika.md
index 59ccdd6e31..8830564572 100644
--- a/docs/features/document-extraction/apachetika.md
+++ b/docs/features/document-extraction/apachetika.md
@@ -4,7 +4,9 @@ title: "🪶 Apache Tika Extraction"
---
:::warning
+
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+
:::
## 🪶 Apache Tika Extraction
@@ -12,14 +14,14 @@ This tutorial is a community contribution and is not supported by the Open WebUI
This documentation provides a step-by-step guide to integrating Apache Tika with Open WebUI. Apache Tika is a content analysis toolkit that can be used to detect and extract metadata and text content from over a thousand different file types. All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more.
Prerequisites
-------------
+---
-* Open WebUI instance
-* Docker installed on your system
-* Docker network set up for Open WebUI
+- Open WebUI instance
+- Docker installed on your system
+- Docker network set up for Open WebUI
Integration Steps
-----------------
+---
### Step 1: Create a Docker Compose File or Run the Docker Command for Apache Tika
@@ -62,15 +64,15 @@ Note that if you choose to use the Docker run command, you'll need to specify th
To use Apache Tika as the context extraction engine in Open WebUI, follow these steps:
-* Log in to your Open WebUI instance.
-* Navigate to the `Admin Panel` settings menu.
-* Click on `Settings`.
-* Click on the `Documents` tab.
-* Change the `Default` content extraction engine dropdown to `Tika`.
-* Update the context extraction engine URL to `http://tika:9998`.
-* Save the changes.
+- Log in to your Open WebUI instance.
+- Navigate to the `Admin Panel` settings menu.
+- Click on `Settings`.
+- Click on the `Documents` tab.
+- Change the `Default` content extraction engine dropdown to `Tika`.
+- Update the context extraction engine URL to `http://tika:9998`.
+- Save the changes.
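+
+Before moving on, you can sanity-check that Tika is reachable under the configured hostname from inside the Docker network (a sketch; the network name is an assumption from your setup):
+
+```bash
+# Run a throwaway curl container on the same network as Open WebUI and Tika;
+# a healthy server replies with a short "This is Tika Server ..." greeting
+docker run --rm --network openwebui-net curlimages/curl -s http://tika:9998/tika
+```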
- Verifying Apache Tika in Docker
+Verifying Apache Tika in Docker
=====================================
To verify that Apache Tika is working correctly in a Docker environment, you can follow these steps:
@@ -145,10 +147,10 @@ Instructions to run the script:
### Prerequisites
-* Python 3.x must be installed on your system
-* `requests` library must be installed (you can install it using pip: `pip install requests`)
-* Apache Tika Docker container must be running (use `docker run -p 9998:9998 apache/tika` command)
-* Replace `"test.txt"` with the path to the file you want to send to Apache Tika
+- Python 3.x must be installed on your system
+- `requests` library must be installed (you can install it using pip: `pip install requests`)
+- Apache Tika Docker container must be running (use `docker run -p 9998:9998 apache/tika` command)
+- Replace `"test.txt"` with the path to the file you want to send to Apache Tika
### Running the Script
@@ -158,29 +160,33 @@ Instructions to run the script:
4. Run the script using the following command: `python verify_tika.py`
5. The script will output a message indicating whether Apache Tika is working correctly
+:::note
+
Note: If you encounter any issues, ensure that the Apache Tika container is running correctly and that the file is being sent to the correct URL.
+:::
+
### Conclusion
By following these steps, you can verify that Apache Tika is working correctly in a Docker environment. You can test the setup by sending a file for analysis, verifying the server is running with a GET request, or using a script to automate the process. If you encounter any issues, ensure that the Apache Tika container is running correctly and that the file is being sent to the correct URL.
Troubleshooting
---------------
+---
-* Make sure the Apache Tika service is running and accessible from the Open WebUI instance.
-* Check the Docker logs for any errors or issues related to the Apache Tika service.
-* Verify that the context extraction engine URL is correctly configured in Open WebUI.
+- Make sure the Apache Tika service is running and accessible from the Open WebUI instance.
+- Check the Docker logs for any errors or issues related to the Apache Tika service.
+- Verify that the context extraction engine URL is correctly configured in Open WebUI.
Benefits of Integration
-----------------------
+---
Integrating Apache Tika with Open WebUI provides several benefits, including:
-* **Improved Metadata Extraction**: Apache Tika's advanced metadata extraction capabilities can help you extract accurate and relevant data from your files.
-* **Support for Multiple File Formats**: Apache Tika supports a wide range of file formats, making it an ideal solution for organizations that work with diverse file types.
-* **Enhanced Content Analysis**: Apache Tika's advanced content analysis capabilities can help you extract valuable insights from your files.
+- **Improved Metadata Extraction**: Apache Tika's advanced metadata extraction capabilities can help you extract accurate and relevant data from your files.
+- **Support for Multiple File Formats**: Apache Tika supports a wide range of file formats, making it an ideal solution for organizations that work with diverse file types.
+- **Enhanced Content Analysis**: Apache Tika's advanced content analysis capabilities can help you extract valuable insights from your files.
Conclusion
-----------
+---
Integrating Apache Tika with Open WebUI is a straightforward process that can improve the metadata extraction capabilities of your Open WebUI instance. By following the steps outlined in this documentation, you can easily set up Apache Tika as a context extraction engine for Open WebUI.
diff --git a/docs/features/document-extraction/docling.md b/docs/features/document-extraction/docling.md
index 01f06f6618..240b6e2381 100644
--- a/docs/features/document-extraction/docling.md
+++ b/docs/features/document-extraction/docling.md
@@ -4,7 +4,9 @@ title: "🐤 Docling Document Extraction"
---
:::warning
+
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+
:::
## 🐤 Docling Document Extraction
@@ -12,14 +14,14 @@ This tutorial is a community contribution and is not supported by the Open WebUI
This documentation provides a step-by-step guide to integrating Docling with Open WebUI. Docling is a document processing library designed to transform a wide range of file formats—including PDFs, Word documents, spreadsheets, HTML, and images—into structured data such as JSON or Markdown. With built-in support for layout detection, table parsing, and language-aware processing, Docling streamlines document preparation for AI applications like search, summarization, and retrieval-augmented generation, all through a unified and extensible interface.
Prerequisites
-------------
+---
-* Open WebUI instance
-* Docker installed on your system
-* Docker network set up for Open WebUI
+- Open WebUI instance
+- Docker installed on your system
+- Docker network set up for Open WebUI
Integration Steps
-----------------
+---
### Step 1: Run the Docker Command for Docling-Serve
@@ -28,31 +30,32 @@ docker run -p 5001:5001 -e DOCLING_SERVE_ENABLE_UI=true quay.io/docling-project/
```
With GPU support:
+
```bash
docker run --gpus all -p 5001:5001 -e DOCLING_SERVE_ENABLE_UI=true quay.io/docling-project/docling-serve-cu124
```
### Step 2: Configure Open WebUI to use Docling
-* Log in to your Open WebUI instance.
-* Navigate to the `Admin Panel` settings menu.
-* Click on `Settings`.
-* Click on the `Documents` tab.
-* Change the `Default` content extraction engine dropdown to `Docling`.
-* Update the context extraction engine URL to `http://host.docker.internal:5001`.
-* Save the changes.
+- Log in to your Open WebUI instance.
+- Navigate to the `Admin Panel` settings menu.
+- Click on `Settings`.
+- Click on the `Documents` tab.
+- Change the `Default` content extraction engine dropdown to `Docling`.
+- Update the context extraction engine URL to `http://host.docker.internal:5001`.
+- Save the changes.
### (Optional) Step 3: Configure Docling's picture description features
-* on the `Documents` tab:
-* Activate `Describe Pictures in Documents` button.
-* Below, choose a description mode: `local` or `API`
- * `local`: vision model will run in the same context as Docling itself
- * `API`: Docling will make a call to an external service/container (i.e. Ollama)
-* fill in an **object value** as described at https://github.com/docling-project/docling-serve/blob/main/docs/usage.md#picture-description-options
-* Save the changes.
+- On the `Documents` tab:
+- Activate the `Describe Pictures in Documents` button.
+- Below, choose a description mode: `local` or `API`
+ - `local`: the vision model runs in the same context as Docling itself
+ - `API`: Docling makes a call to an external service/container (e.g., Ollama)
+- Fill in an **object value** as described at https://github.com/docling-project/docling-serve/blob/main/docs/usage.md#picture-description-options
+- Save the changes.
- #### Make sure the object value is a valid JSON! Working examples below:
+ #### Make sure the object value is valid JSON! Working examples below

@@ -97,12 +100,12 @@ This command starts the Docling container and maps port 5001 from the container
### 2. Verify the Server is Running
-* Go to `http://127.0.0.1:5001/ui/`
-* The URL should lead to a UI to use Docling
+- Go to `http://127.0.0.1:5001/ui/`
+- The URL should lead to a UI to use Docling
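If you prefer the command line, the same check can be scripted:

```bash
# Expect HTTP 200 if docling-serve is up and the UI is enabled
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5001/ui/
```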
### 3. Verify the Integration
-* You can try uploading some files via the UI and it should return output in MD format or your desired format
+- Try uploading some files via the UI; the output should be returned in Markdown or your desired format
### Conclusion
diff --git a/docs/features/document-extraction/index.md b/docs/features/document-extraction/index.md
index 9d0184c873..6661d16cf5 100644
--- a/docs/features/document-extraction/index.md
+++ b/docs/features/document-extraction/index.md
@@ -10,12 +10,14 @@ Open WebUI provides powerful document extraction capabilities that allow you to
## What is Document Extraction?
Document extraction refers to the process of automatically identifying and extracting text and data from various file formats, including:
+
- PDFs (both text-based and scanned)
- Images containing text
- Handwritten documents
- And more
With proper document extraction, Open WebUI can help you:
+
- Convert image-based documents to searchable text
- Preserve document structure and layout information
- Extract data in structured formats for further processing
@@ -26,4 +28,3 @@ With proper document extraction, Open WebUI can help you:
Open WebUI supports multiple document extraction engines to accommodate different needs and document types. Each extraction method has its own strengths and is suitable for different scenarios.
Explore the documentation for each available extraction method to learn how to set it up and use it effectively with your Open WebUI instance.
-
diff --git a/docs/features/document-extraction/mistral-ocr.md b/docs/features/document-extraction/mistral-ocr.md
index 237c6196e7..68475f7260 100644
--- a/docs/features/document-extraction/mistral-ocr.md
+++ b/docs/features/document-extraction/mistral-ocr.md
@@ -4,7 +4,9 @@ title: "👁️ Mistral OCR"
---
:::warning
+
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+
:::
## 👁️ Mistral OCR
@@ -12,41 +14,40 @@ This tutorial is a community contribution and is not supported by the Open WebUI
This documentation provides a step-by-step guide to integrating Mistral OCR with Open WebUI. Mistral OCR is an optical character recognition library designed to extract text from a variety of image-based file formats—including scanned PDFs, images, and handwritten documents—into structured data such as JSON or plain text. With advanced support for multilingual text recognition, layout analysis, and handwriting interpretation, Mistral OCR simplifies the process of digitizing and processing documents for AI applications like search, summarization, and data extraction, all through a robust and customizable interface.
Prerequisites
-------------
+---
-* Open WebUI instance
-* Mistral AI account
+- Open WebUI instance
+- Mistral AI account
Integration Steps
-----------------
+---
### Step 1: Sign Up or Login to Mistral AI console
-* Go to `https://console.mistral.ai`
-* Follow the instructions as instructed on the process
-* After successful authorization, you should be welcomed to the Console Home
+- Go to `https://console.mistral.ai`
+- Follow the sign-up or login instructions
+- After signing in, you should land on the Console Home
### Step 2: Generate an API key
-* Go to `API Keys` or `https://console.mistral.ai/api-keys`
-* Create a new key and make sure to copy it
+- Go to `API Keys` or `https://console.mistral.ai/api-keys`
+- Create a new key and make sure to copy it
### Step 3: Configure Open WebUI to use Mistral OCR
-* Log in to your Open WebUI instance.
-* Navigate to the `Admin Panel` settings menu.
-* Click on `Settings`.
-* Click on the `Documents` tab.
-* Change the `Default` content extraction engine dropdown to `Mistral OCR`.
-* Paste the API Key on the field
-* Save the Admin Panel.
+- Log in to your Open WebUI instance.
+- Navigate to the `Admin Panel` settings menu.
+- Click on `Settings`.
+- Click on the `Documents` tab.
+- Change the `Default` content extraction engine dropdown to `Mistral OCR`.
+- Paste the API key into the field
+- Save the changes.
Verifying Mistral OCR
=====================================
To verify that Mistral OCR is working correctly from a script, refer to `https://docs.mistral.ai/capabilities/document/`
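As a quick standalone check, a request along these lines can be sent to the OCR endpoint (a sketch based on the linked docs; verify the current model name and payload shape there):

```bash
curl https://api.mistral.ai/v1/ocr \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-ocr-latest",
    "document": {"type": "document_url", "document_url": "https://example.com/sample.pdf"}
  }'
```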
-
### Conclusion
Integrating Mistral OCR with Open WebUI is a simple and effective way to enhance document processing and content extraction capabilities. By following the steps in this guide, you can set up Mistral OCR as the default extraction engine and leverage its advanced text recognition features. Once configured, Mistral OCR enables powerful, multilingual document parsing with support for various formats, enhancing AI-driven document analysis capabilities in Open WebUI.
diff --git a/docs/features/evaluation/index.mdx b/docs/features/evaluation/index.mdx
index fbcc990d51..027ecdd6a6 100644
--- a/docs/features/evaluation/index.mdx
+++ b/docs/features/evaluation/index.mdx
@@ -3,7 +3,6 @@ sidebar_position: 6
title: "📝 Evaluation"
---
-
## Why Should I Evaluate Models?
Meet **Alex**, a machine learning engineer at a mid-sized company. Alex knows there are numerous AI models out there—GPTs, LLaMA, and many more—but which one works best for the job at hand? They all sound impressive on paper, but Alex can’t just rely on public leaderboards. These models perform differently depending on the context, and some models may have been trained on the evaluation dataset (sneaky!). Plus, the way these models write can sometimes feel … off.
@@ -15,7 +14,7 @@ That's where Open WebUI comes in. It gives Alex and their team an easy way to ev
- **Why evaluations matter**: Too many models, but not all fit your specific needs. General public leaderboards can't always be trusted.
- **How to solve it**: Open WebUI offers a built-in evaluation system. Use a thumbs up/down to rate model responses.
- **What happens behind the scenes**: Ratings adjust your personalized leaderboard, and snapshots from rated chats will be used for future model fine-tuning!
-- **Evaluation options**:
+- **Evaluation options**:
- **Arena Model**: Randomly selects models for you to compare.
- **Normal Interaction**: Just chat like usual and rate the responses.
@@ -42,7 +41,7 @@ One cool feature? **Whenever you rate a response**, the system captures a **snap
### Two Ways to Evaluate an AI Model
-Open WebUI provides two straightforward approaches for evaluating AI models.
+Open WebUI provides two straightforward approaches for evaluating AI models.
### **1. Arena Model**
@@ -51,7 +50,7 @@ The **Arena Model** randomly selects from a pool of available models, making sur
How to use it:
- Select a model from the Arena Model selector.
- Use it like you normally would, but now you’re in “arena mode.”
-
+
For your feedback to affect the leaderboard, you need what’s called a **sibling message**. What's a sibling message? A sibling message is just any alternative response generated by the same query (think of message regenerations or having multiple models generating responses side-by-side). This way, you’re comparing responses **head-to-head**.
- **Scoring tip**: When you thumbs up one response, the other will automatically get a thumbs down. So, be mindful and only upvote the message you believe is genuinely the best!
@@ -67,7 +66,7 @@ Need more depth? You can even replicate a [**Chatbot Arena**](https://lmarena.ai
### **2. Normal Interaction**
-No need to switch to “arena mode” if you don't want to. You can use Open WebUI normally and rate the AI model responses as you would in everyday operations. Just thumbs up/down the model responses, whenever you feel like it. However, **if you want your feedback to be used for ranking on the leaderboard**, you'll need to **swap out the model and interact with a different one**. This ensures there's a **sibling response** to compare it with – only comparisons between two different models will influence rankings.
+No need to switch to “arena mode” if you don't want to. You can use Open WebUI normally and rate the AI model responses as you would in everyday operations. Just thumbs up/down the model responses, whenever you feel like it. However, **if you want your feedback to be used for ranking on the leaderboard**, you'll need to **swap out the model and interact with a different one**. This ensures there's a **sibling response** to compare it with – only comparisons between two different models will influence rankings.
For instance, this is how you can rate during a normal interaction:
@@ -95,7 +94,7 @@ When you rate chats, you can **tag them by topic** for more granular insights. T
Open WebUI tries to **automatically tag chats** based on the conversation topic. However, depending on the model you're using, the automatic tagging feature might **sometimes fail** or misinterpret the conversation. When this happens, it’s best practice to **manually tag your chats** to ensure the feedback is accurate.
- **How to manually tag**: When you rate a response, you'll have the option to add your own tags based on the conversation's context.
-
+
Don't skip this! Tagging is super powerful because it allows you to **re-rank models based on specific topics**. For instance, you might want to see which model performs best for answering technical support questions versus general customer inquiries.
Here’s an example of how re-ranking looks:
diff --git a/docs/features/index.mdx b/docs/features/index.mdx
index 6ea5851a89..a4338165e4 100644
--- a/docs/features/index.mdx
+++ b/docs/features/index.mdx
@@ -222,7 +222,6 @@ import { TopBanners } from "@site/src/components/TopBanners";
### 💻 Model Management
-
- 🛠️ **Model Builder**: All models can be built and edited with a persistent model builder mode within the models edit page.
- 📚 **Knowledge Support for Models**: The ability to attach tools, functions, and knowledge collections directly to models from a model's edit page, enhancing the information available to each model.
diff --git a/docs/features/openldap.mdx b/docs/features/openldap.mdx
index cefb6f0ba3..45528de5a4 100644
--- a/docs/features/openldap.mdx
+++ b/docs/features/openldap.mdx
@@ -14,7 +14,7 @@ The easiest way to get a test OpenLDAP server running is by using Docker. This `
version: "3.9"
services:
ldap:
- image: osixia/openldap:1.5.0
+ image: osixia/openldap:1.5.0
container_name: openldap
environment:
LDAP_ORGANISATION: "Example Inc"
@@ -28,7 +28,7 @@ services:
- "389:389"
networks: [ldapnet]
- phpldapadmin:
+ phpldapadmin:
image: osixia/phpldapadmin:0.9.0
environment:
PHPLDAPADMIN_LDAP_HOSTS: "ldap"
@@ -67,6 +67,7 @@ userPassword: {PLAIN}password123
**Note on Passwords:** The `userPassword` field is set to a plain text value for simplicity in this test environment. In production, you should always use a hashed password. You can generate a hashed password using `slappasswd` or `openssl passwd`. For example:
```bash
+
# Using slappasswd (inside the container)
docker exec openldap slappasswd -s your_password
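
# Using openssl (assumes your OpenLDAP build supports the {CRYPT} scheme)
openssl passwd -6 your_password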
@@ -104,10 +105,13 @@ Now, configure your Open WebUI instance to use the LDAP server for authenticatio
Set the following environment variables for your Open WebUI instance.
:::info
+
Open WebUI reads these environment variables only on the first startup. Subsequent changes must be made in the Admin settings panel of the UI unless you have `ENABLE_PERSISTENT_CONFIG=false`.
+
:::
```env
+
# Enable LDAP
ENABLE_LDAP="true"
@@ -133,11 +137,11 @@ LDAP_SEARCH_FILTER="(uid=%(user)s)" # More secure and performant
Alternatively, you can configure these settings directly in the UI:
-1. Log in as an administrator.
-2. Navigate to **Settings** > **General**.
-3. Enable **LDAP Authentication**.
-4. Fill in the fields corresponding to the environment variables above.
-5. Save the settings and restart Open WebUI.
+1. Log in as an administrator.
+2. Navigate to **Settings** > **General**.
+3. Enable **LDAP Authentication**.
+4. Fill in the fields corresponding to the environment variables above.
+5. Save the settings and restart Open WebUI.
## 5. Logging In
@@ -186,15 +190,15 @@ openldap | ... conn=1001 op=0 RESULT tag=97 err=49 text=
**Cause:** The LDAP server rejected the bind attempt because the distinguished name (DN) or the password was incorrect. This happens during the second bind attempt, where Open WebUI tries to authenticate with the user's provided credentials.
**Solution:**
-1. **Verify the Password:** Ensure you are using the correct plaintext password. The `userPassword` value in the LDIF file is what the server expects. If it's a hash, you must provide the original plaintext password.
-2. **Check the User DN:** The DN used for the bind (`uid=jdoe,ou=users,dc=example,dc=org`) must be correct.
-3. **Test with `ldapwhoami`:** Verify the credentials directly against the LDAP server to isolate the issue from Open WebUI.
+1. **Verify the Password:** Ensure you are using the correct plaintext password. The `userPassword` value in the LDIF file is what the server expects. If it's a hash, you must provide the original plaintext password.
+2. **Check the User DN:** The DN used for the bind (`uid=jdoe,ou=users,dc=example,dc=org`) must be correct.
+3. **Test with `ldapwhoami`:** Verify the credentials directly against the LDAP server to isolate the issue from Open WebUI.
```bash
ldapwhoami -x -H ldap://localhost:389 \
-D "uid=jdoe,ou=users,dc=example,dc=org" -w "password123"
```
If this command fails with `ldap_bind: Invalid credentials (49)`, the problem is with the credentials or the LDAP server's password configuration, not Open WebUI.
-4. **Reset the Password:** If you don't know the password, reset it using `ldapmodify` or `ldappasswd`. It's often easiest to use a `{PLAIN}` password for initial testing and then switch to a secure hash like `{SSHA}`.
+4. **Reset the Password:** If you don't know the password, reset it using `ldapmodify` or `ldappasswd`. It's often easiest to use a `{PLAIN}` password for initial testing and then switch to a secure hash like `{SSHA}`.
**Example LDIF to change password:**
```ldif title="change_password.ldif"
diff --git a/docs/features/plugin/events/index.mdx b/docs/features/plugin/events/index.mdx
index 53ed5e7c74..a23c098c66 100644
--- a/docs/features/plugin/events/index.mdx
+++ b/docs/features/plugin/events/index.mdx
@@ -17,7 +17,7 @@ This guide explains **what events are**, **how you can trigger them** from your
- Events are sent using the `__event_emitter__` helper for one-way updates, or `__event_call__` when you need user input or a response (e.g., confirmation, input, etc.).
-**Metaphor:**
+**Metaphor:**
Think of Events like push notifications and modal dialogs that your plugin can trigger, making the chat experience richer and more interactive.
---
@@ -58,6 +58,7 @@ result = await __event_call__(
},
}
)
+
# result will contain the user's input value
```
@@ -86,22 +87,30 @@ Below is a comprehensive table of **all supported `type` values** for events, al
| -------------------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `status` | Show a status update/history for a message | `{description: ..., done: bool, hidden: bool}` |
| `chat:completion` | Provide a chat completion result | (Custom, see Open WebUI internals) |
-| `chat:message:delta`, `message` | Append content to the current message | `{content: "text to append"}` |
-| `chat:message`, `replace` | Replace current message content completely | `{content: "replacement text"}` |
-| `chat:message:files`, `files` | Set or overwrite message files (for uploads, output) | `{files: [...]}` |
+| `chat:message:delta`, `message` | Append content to the current message | `{content: "text to append"}` |
+| `chat:message`, `replace` | Replace current message content completely | `{content: "replacement text"}` |
+| `chat:message:files`, `files` | Set or overwrite message files (for uploads, output) | `{files: [...]}` |
| `chat:title` | Set (or update) the chat conversation title | Topic string OR `{title: ...}` |
| `chat:tags` | Update the set of tags for a chat | Tag array or object |
-| `source`, `citation` | Add a source/citation, or code execution result | For code: See [below.](/docs/features/plugin/events/index.mdx#source-or-citation-and-code-execution) |
+| `source`, `citation` | Add a source/citation, or code execution result | For code: See [below.](/docs/features/plugin/events/index.mdx#source-or-citation-and-code-execution) |
| `notification` | Show a notification ("toast") in the UI | `{type: "info" or "success" or "error" or "warning", content: "..."}` |
-| `confirmation` (needs `__event_call__`) | Ask for confirmation (OK/Cancel dialog) | `{title: "...", message: "..."}` |
-| `input` (needs `__event_call__`) | Request simple user input ("input box" dialog) | `{title: "...", message: "...", placeholder: "...", value: ...}` |
-| `execute` (needs `__event_call__`) | Request user-side code execution and return result | `{code: "...javascript code..."}` |
+| `confirmation` (needs `__event_call__`) | Ask for confirmation (OK/Cancel dialog) | `{title: "...", message: "..."}` |
+| `input` (needs `__event_call__`) | Request simple user input ("input box" dialog) | `{title: "...", message: "...", placeholder: "...", value: ...}` |
+| `execute` (needs `__event_call__`) | Request user-side code execution and return result | `{code: "...javascript code..."}` |
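For example, a toast notification follows the payload shape given in the table:

```python
await __event_emitter__({
    "type": "notification",
    "data": {"type": "success", "content": "Profile saved!"},
})
```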
**Other/Advanced types:**
- You can define your own types and handle them at the UI layer (or use upcoming event-extension mechanisms).
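A custom type is emitted the same way; the payload is entirely up to you and your UI-side handler (the type name below is hypothetical):

```python
await __event_emitter__({
    "type": "my_plugin:progress",  # custom type; only meaningful if your UI layer handles it
    "data": {"step": 2, "total": 5},
})
```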
### ❗ Details on Specific Event Types
+
### `status`
Show a status/progress update in the UI:
@@ -388,6 +397,7 @@ response = await __event_call__({
"placeholder": "Name"
}
})
+
# response will be: {"value": "user's answer"}
```
diff --git a/docs/features/plugin/functions/action.mdx b/docs/features/plugin/functions/action.mdx
index f075b8f555..457d83f155 100644
--- a/docs/features/plugin/functions/action.mdx
+++ b/docs/features/plugin/functions/action.mdx
@@ -32,11 +32,11 @@ Actions follow a specific class structure with an `action` method as the main en
class Action:
def __init__(self):
self.valves = self.Valves()
-
+
class Valves(BaseModel):
# Configuration parameters
parameter_name: str = "default_value"
-
+
async def action(self, body: dict, __user__=None, __event_emitter__=None, __event_call__=None):
# Action implementation
return {"content": "Modified message content"}
@@ -68,10 +68,10 @@ Send real-time updates to the frontend during action execution:
async def action(self, body: dict, __event_emitter__=None):
# Send status updates
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Processing request..."}
})
-
+
# Send notifications
await __event_emitter__({
"type": "notification",
@@ -92,7 +92,7 @@ async def action(self, body: dict, __event_call__=None):
"message": "Are you sure you want to proceed?"
}
})
-
+
# Request user input
user_input = await __event_call__({
"type": "input",
@@ -127,7 +127,7 @@ actions = [
},
{
"id": "translate",
- "name": "Translate",
+ "name": "Translate",
"icon_url": "data:image/svg+xml;base64,..."
}
]
@@ -137,7 +137,7 @@ async def action(self, body: dict, __id__=None, **kwargs):
# Summarization logic
return {"content": "Summary: ..."}
elif __id__ == "translate":
- # Translation logic
+ # Translation logic
return {"content": "Translation: ..."}
```
@@ -157,10 +157,10 @@ async def action(self, body: dict, __event_emitter__=None):
"type": "status",
"data": {"description": "Starting background processing..."}
})
-
+
# Perform time-consuming operation
result = await some_long_running_function()
-
+
return {"content": f"Processing completed: {result}"}
```
@@ -170,7 +170,7 @@ Actions can work with uploaded files and generate new media:
```python
async def action(self, body: dict):
message = body
-
+
# Access uploaded files
if message.get("files"):
for file in message["files"]:
@@ -178,7 +178,7 @@ async def action(self, body: dict):
if file["type"] == "image":
# Image processing logic
pass
-
+
# Return new files
return {
"content": "Analysis complete",
@@ -199,7 +199,7 @@ Actions can access user information and respect permissions:
async def action(self, body: dict, __user__=None):
if __user__["role"] != "admin":
return {"content": "This action requires admin privileges"}
-
+
user_name = __user__["name"]
return {"content": f"Hello {user_name}, admin action completed"}
```
@@ -252,7 +252,7 @@ class Action:
"type": "status",
"data": {"description": "Processing message..."}
})
-
+
# Get user confirmation
response = await __event_call__({
"type": "confirmation",
@@ -261,14 +261,14 @@ class Action:
"message": "Do you want to enhance this message?"
}
})
-
+
if not response:
return {"content": "Action cancelled by user"}
-
+
# Process the message
original_content = body.get("content", "")
enhanced_content = f"Enhanced: {original_content}"
-
+
return {"content": enhanced_content}
```
diff --git a/docs/features/plugin/functions/filter.mdx b/docs/features/plugin/functions/filter.mdx
index 9cd41399a9..d380657b27 100644
--- a/docs/features/plugin/functions/filter.mdx
+++ b/docs/features/plugin/functions/filter.mdx
@@ -5,7 +5,7 @@ title: "🪄 Filter Function"
# 🪄 Filter Function: Modify Inputs and Outputs
-Welcome to the comprehensive guide on Filter Functions in Open WebUI! Filters are a flexible and powerful **plugin system** for modifying data *before it's sent to the Large Language Model (LLM)* (input) or *after it’s returned from the LLM* (output). Whether you’re transforming inputs for better context or cleaning up outputs for improved readability, **Filter Functions** let you do it all.
+Welcome to the comprehensive guide on Filter Functions in Open WebUI! Filters are a flexible and powerful **plugin system** for modifying data *before it's sent to the Large Language Model (LLM)* (input) or *after it’s returned from the LLM* (output). Whether you’re transforming inputs for better context or cleaning up outputs for improved readability, **Filter Functions** let you do it all.
This guide will break down **what Filters are**, how they work, their structure, and everything you need to know to build powerful and user-friendly filters of your own. Let’s dig in, and don’t worry—I’ll use metaphors, examples, and tips to make everything crystal clear! 🌟
@@ -26,7 +26,7 @@ Here’s a quick summary of what Filters do:
2. **Intercept Model Outputs (Stream Function)**: Capture and adjust the AI’s responses **as they’re generated** by the model. This is useful for real-time modifications, like filtering out sensitive information or formatting the output for better readability.
3. **Modify Model Outputs (Outlet Function)**: Adjust the AI's response **after it’s processed**, before showing it to the user. This can help refine, log, or adapt the data for a cleaner user experience.
-> **Key Concept:** Filters are not standalone models but tools that enhance or transform the data traveling *to* and *from* models.
+> **Key Concept:** Filters are not standalone models but tools that enhance or transform the data traveling *to* and *from* models.
Filters are like **translators or editors** in the AI workflow: you can intercept and change the conversation without interrupting the flow.
@@ -44,7 +44,7 @@ from typing import Optional
class Filter:
# Valves: Configuration options for the filter
- class Valves(BaseModel):
+ class Valves(BaseModel):
pass
def __init__(self):
@@ -54,7 +54,7 @@ class Filter:
def inlet(self, body: dict) -> dict:
# This is where you manipulate user inputs.
print(f"inlet called: {body}")
- return body
+ return body
def stream(self, event: dict) -> dict:
# This is where you modify streamed chunks of model output.
@@ -127,10 +127,9 @@ class Valves(BaseModel):
OPTION_NAME: str = "Default Value"
```
-For example:
+For example:
If you're creating a filter that converts responses into uppercase, you might allow users to configure whether every output gets totally capitalized via a valve like `TRANSFORM_UPPERCASE: bool = True/False`.
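Here is a minimal sketch of that valve in action, following the filter skeleton above (the valve name is illustrative):

```python
from pydantic import BaseModel


class Filter:
    class Valves(BaseModel):
        TRANSFORM_UPPERCASE: bool = True

    def __init__(self):
        self.valves = self.Valves()

    def outlet(self, body: dict) -> dict:
        # Capitalize assistant replies only while the valve is switched on
        if self.valves.TRANSFORM_UPPERCASE:
            for message in body["messages"]:
                if message.get("role") == "assistant":
                    message["content"] = message["content"].upper()
        return body
```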
-
##### Configuring Valves with Dropdown Menus (Enums)
You can enhance the user experience for your filter's settings by providing dropdown menus instead of free-form text inputs for certain `Valves`. This is achieved using `json_schema_extra` with the `enum` keyword in your Pydantic `Field` definitions.
@@ -222,26 +221,25 @@ Using `enum` for your `Valves` options makes your filters more user-friendly and
The `inlet` function is like **prepping food before cooking**. Imagine you’re a chef: before the ingredients go into the recipe (the LLM in this case), you might wash vegetables, chop onions, or season the meat. Without this step, your final dish could lack flavor, have unwashed produce, or simply be inconsistent.
-In the world of Open WebUI, the `inlet` function does this important prep work on the **user input** before it’s sent to the model. It ensures the input is as clean, contextual, and helpful as possible for the AI to handle.
+In the world of Open WebUI, the `inlet` function does this important prep work on the **user input** before it’s sent to the model. It ensures the input is as clean, contextual, and helpful as possible for the AI to handle.
-📥 **Input**:
+📥 **Input**:
- **`body`**: The raw input from Open WebUI to the model. It is in the format of a chat-completion request (usually a dictionary that includes fields like the conversation's messages, model settings, and other metadata). Think of this as your recipe ingredients.
-🚀 **Your Task**:
+🚀 **Your Task**:
Modify and return the `body`. The modified version of the `body` is what the LLM works with, so this is your chance to bring clarity, structure, and context to the input.
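As a rough illustration, the `body` dictionary typically looks something like this (exact fields vary by setup and model):

```python
body = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You're a software troubleshooting assistant."},
        {"role": "user", "content": "How can I debug this issue?"},
    ],
    "stream": True,
}
```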
-
##### 🍳 Why Would You Use the `inlet`?
1. **Adding Context**: Automatically append crucial information to the user’s input, especially if their text is vague or incomplete. For example, you might add "You are a friendly assistant" or "Help this user troubleshoot a software bug."
-
+
2. **Formatting Data**: If the input requires a specific format, like JSON or Markdown, you can transform it before sending it to the model.
3. **Sanitizing Input**: Remove unwanted characters, strip potentially harmful or confusing symbols (like excessive whitespace or emojis), or replace sensitive information.
4. **Streamlining User Input**: If your model’s output improves with additional guidance, you can use the `inlet` to inject clarifying instructions automatically!
-
##### 💡 Example Use Cases: Build on Food Prep
+
###### 🥗 Example 1: Adding System Context
Let’s say the LLM is a chef preparing a dish for Italian cuisine, but the user hasn’t mentioned "This is for Italian cooking." You can ensure the message is clear by appending this context before sending the data to the model.
@@ -260,7 +258,6 @@ def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
📖 **What Happens?**
- Any user input like "What are some good dinner ideas?" now carries the Italian theme because we’ve set the system context! Cheesecake might not show up as an answer, but pasta sure will.
-
###### 🔪 Example 2: Cleaning Input (Remove Odd Characters)
Suppose the input from the user looks messy or includes unwanted symbols like `!!!`, making the conversation inefficient or harder for the model to parse. You can clean it up while preserving the core content.
@@ -275,8 +272,11 @@ def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
📖 **What Happens?**
- Before: `"How can I debug this issue!!!"` ➡️ Sent to the model as `"How can I debug this issue"`
+:::note
+
The user sees no difference, but the model processes a cleaner, easier-to-understand query.
+:::
##### 📊 How `inlet` Helps Optimize Input for the LLM:
- Improves **accuracy** by clarifying ambiguous queries.
@@ -328,14 +328,14 @@ def stream(self, event: dict) -> dict:
delta["content"] = delta["content"].replace("😊", "") # Strip emojis
return event
```
-📖 **Before:** `"Hi 😊"`
+📖 **Before:** `"Hi 😊"`
📖 **After:** `"Hi"`
---
#### 4️⃣ **`outlet` Function (Output Post-Processing)**
-The `outlet` function is like a **proofreader**: tidy up the AI's response (or make final changes) *after it’s processed by the LLM.*
+The `outlet` function is like a **proofreader**: tidy up the AI's response (or make final changes) *after it’s processed by the LLM.*
📤 **Input**:
- **`body`**: This contains **all current messages** in the chat (user history + LLM replies).
@@ -351,7 +351,7 @@ The `outlet` function is like a **proofreader**: tidy up the AI's response (or m
def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
for message in body["messages"]:
message["content"] = message["content"].replace("", "[REDACTED]")
- return body
+ return body
```
---
@@ -368,7 +368,7 @@ Want the LLM to always know it's assisting a customer in troubleshooting softwar
class Filter:
def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
context_message = {
- "role": "system",
+ "role": "system",
"content": "You're a software troubleshooting assistant."
}
body.setdefault("messages", []).insert(0, context_message)
@@ -393,7 +393,7 @@ class Filter:
---
-## 🚧 Potential Confusion: Clear FAQ 🛑
+## 🚧 Potential Confusion: Clear FAQ 🛑
### **Q: How Are Filters Different From Pipe Functions?**
@@ -420,6 +420,6 @@ By now, you’ve learned:
---
-🚀 **Your Turn**: Start experimenting! What small tweak or context addition could elevate your Open WebUI experience? Filters are fun to build, flexible to use, and can take your models to the next level!
+🚀 **Your Turn**: Start experimenting! What small tweak or context addition could elevate your Open WebUI experience? Filters are fun to build, flexible to use, and can take your models to the next level!
Happy coding! ✨
diff --git a/docs/features/plugin/functions/index.mdx b/docs/features/plugin/functions/index.mdx
index 8c2feec467..26c2ae139a 100644
--- a/docs/features/plugin/functions/index.mdx
+++ b/docs/features/plugin/functions/index.mdx
@@ -5,15 +5,15 @@ title: "🧰 Functions"
## 🚀 What Are Functions?
-Functions are like **plugins** for Open WebUI. They help you **extend its capabilities**—whether it’s adding support for new AI model providers like Anthropic or Vertex AI, tweaking how messages are processed, or introducing custom buttons to the interface for better usability.
+Functions are like **plugins** for Open WebUI. They help you **extend its capabilities**—whether it’s adding support for new AI model providers like Anthropic or Vertex AI, tweaking how messages are processed, or introducing custom buttons to the interface for better usability.
-Unlike external tools that may require complex integrations, **Functions are built-in and run within the Open WebUI environment.** That means they are fast, modular, and don’t rely on external dependencies.
+Unlike external tools that may require complex integrations, **Functions are built-in and run within the Open WebUI environment.** That means they are fast, modular, and don’t rely on external dependencies.
Think of Functions as **modular building blocks** that let you enhance how the WebUI works, tailored exactly to what you need. They’re lightweight, highly customizable, and written in **pure Python**, so you have the freedom to create anything—from new AI-powered workflows to integrations with anything you use, like Google Search or Home Assistant.
---
-## 🏗️ Types of Functions
+## 🏗️ Types of Functions
There are **three types of Functions** in Open WebUI, each with a specific purpose. Let’s break them down and explain exactly what they do:
@@ -21,113 +21,113 @@ There are **three types of Functions** in Open WebUI, each with a specific purpo
### 1. [**Pipe Function** – Create Custom "Agents/Models" ](./pipe.mdx)
-A **Pipe Function** is how you create **custom agents/models** or integrations, which then appear in the interface as if they were standalone models.
+A **Pipe Function** is how you create **custom agents/models** or integrations, which then appear in the interface as if they were standalone models.
-**What does it do?**
+**What does it do?**
- Pipes let you define complex workflows. For instance, you could create a Pipe that sends data to **Model A** and **Model B**, processes their outputs, and combines the results into one finalized answer.
-- Pipes don’t even have to use AI! They can be setups for **search APIs**, **weather data**, or even systems like **Home Assistant**. Basically, anything you’d like to interact with can become part of Open WebUI.
+- Pipes don’t even have to use AI! They can serve as front-ends for **search APIs**, **weather data**, or even systems like **Home Assistant**. Basically, anything you’d like to interact with can become part of Open WebUI.
-**Use case example:**
-Imagine you want to query Google Search directly from Open WebUI. You can create a Pipe Function that:
-1. Takes your message as the search query.
-2. Sends the query to Google Search’s API.
-3. Processes the response and returns it to you inside the WebUI like a normal "model" response.
+**Use case example:**
+Imagine you want to query Google Search directly from Open WebUI. You can create a Pipe Function that:
+1. Takes your message as the search query.
+2. Sends the query to Google Search’s API.
+3. Processes the response and returns it to you inside the WebUI like a normal "model" response.
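A minimal skeleton of such a Pipe might look like this (illustrative; the full interface is covered in the Pipe Functions guide linked below):

```python
class Pipe:
    def pipe(self, body: dict) -> str:
        # Use the latest chat message as the search query
        query = body["messages"][-1]["content"]
        # ... call the search API here and format its results ...
        return f"Top results for: {query}"
```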
-When enabled, **Pipe Functions show up as their own selectable model**. Use Pipes whenever you need custom functionality that works like a model in the interface.
+When enabled, **Pipe Functions show up as their own selectable model**. Use Pipes whenever you need custom functionality that works like a model in the interface.
For a detailed guide, see [**Pipe Functions**](./pipe.mdx).
---
-### 2. [**Filter Function** – Modify Inputs and Outputs](./filter.mdx)
+### 2. [**Filter Function** – Modify Inputs and Outputs](./filter.mdx)
-A **Filter Function** is like a tool for tweaking data before it gets sent to the AI **or** after it comes back.
+A **Filter Function** is like a tool for tweaking data before it gets sent to the AI **or** after it comes back.
-**What does it do?**
-Filters act as "hooks" in the workflow and have two main parts:
-- **Inlet**: Adjust the input that is sent to the model. For example, adding additional instructions, keywords, or formatting tweaks.
-- **Outlet**: Modify the output that you receive from the model. For instance, cleaning up the response, adjusting tone, or formatting data into a specific style.
+**What does it do?**
+Filters act as "hooks" in the workflow and have two main parts:
+- **Inlet**: Adjust the input that is sent to the model. For example, adding additional instructions, keywords, or formatting tweaks.
+- **Outlet**: Modify the output that you receive from the model. For instance, cleaning up the response, adjusting tone, or formatting data into a specific style.
-**Use case example:**
-Suppose you’re working on a project that needs precise formatting. You can use a Filter to ensure:
-1. Your input is always transformed into the required format.
-2. The output from the model is cleaned up before being displayed.
+**Use case example:**
+Suppose you’re working on a project that needs precise formatting. You can use a Filter to ensure:
+1. Your input is always transformed into the required format.
+2. The output from the model is cleaned up before being displayed.
-Filters are **linked to specific models** or can be enabled for all models **globally**, depending on your needs.
+Filters are **linked to specific models** or can be enabled for all models **globally**, depending on your needs.
Check out the full guide for more examples and instructions: [**Filter Functions**](./filter.mdx).
---
-### 3. [**Action Function** – Add Custom Buttons](./action.mdx)
+### 3. [**Action Function** – Add Custom Buttons](./action.mdx)
-An **Action Function** is used to add **custom buttons** to the chat interface.
+An **Action Function** is used to add **custom buttons** to the chat interface.
-**What does it do?**
-Actions allow you to define **interactive shortcuts** that trigger specific functionality directly from the chat. These buttons appear underneath individual chat messages, giving you convenient, one-click access to the actions you define.
+**What does it do?**
+Actions allow you to define **interactive shortcuts** that trigger specific functionality directly from the chat. These buttons appear underneath individual chat messages, giving you convenient, one-click access to the actions you define.
-**Use case example:**
-Let’s say you often need to summarize long messages or generate specific outputs like translations. You can create an Action Function to:
-1. Add a “Summarize” button under every incoming message.
-2. When clicked, it triggers your custom function to process that message and return the summary.
+**Use case example:**
+Let’s say you often need to summarize long messages or generate specific outputs like translations. You can create an Action Function to:
+1. Add a “Summarize” button under every incoming message.
+2. When clicked, it triggers your custom function to process that message and return the summary.
-Buttons provide a **clean and user-friendly way** to interact with extended functionality you define.
+Buttons provide a **clean and user-friendly way** to interact with extended functionality you define.
Learn how to set them up in the [**Action Functions Guide**](./action.mdx).
---
-## 🛠️ How to Use Functions
+## 🛠️ How to Use Functions
Here's how to put Functions to work in Open WebUI:
-### 1. **Install Functions**
+### 1. **Install Functions**
You can install Functions via the Open WebUI interface or by importing them manually. You can find community-created functions on the [Open WebUI Community Site](https://openwebui.com/functions).
-⚠️ **Be cautious.** Only install Functions from trusted sources. Running unknown code poses security risks.
+⚠️ **Be cautious.** Only install Functions from trusted sources. Running unknown code poses security risks.
---
-### 2. **Enable Functions**
-Functions must be explicitly enabled after installation:
-- When you enable a **Pipe Function**, it becomes available as its own **model** in the interface.
-- For **Filter** and **Action Functions**, enabling them isn’t enough—you also need to assign them to specific models or enable them globally for all models.
+### 2. **Enable Functions**
+Functions must be explicitly enabled after installation:
+- When you enable a **Pipe Function**, it becomes available as its own **model** in the interface.
+- For **Filter** and **Action Functions**, enabling them isn’t enough—you also need to assign them to specific models or enable them globally for all models.
---
-### 3. **Assign Filters or Actions to Models**
-- Navigate to `Workspace => Models` and assign your Filter or Action to the relevant model there.
+### 3. **Assign Filters or Actions to Models**
+- Navigate to `Workspace => Models` and assign your Filter or Action to the relevant model there.
- Alternatively, enable Functions for **all models globally** by going to `Workspace => Functions`, selecting the "..." menu, and toggling the **Global** switch.
---
-### Quick Summary
-- **Pipes** appear as standalone models you can interact with.
-- **Filters** modify inputs/outputs for smoother AI interactions.
-- **Actions** add clickable buttons to individual chat messages.
+### Quick Summary
+- **Pipes** appear as standalone models you can interact with.
+- **Filters** modify inputs/outputs for smoother AI interactions.
+- **Actions** add clickable buttons to individual chat messages.
Once you’ve followed the setup process, Functions will seamlessly enhance your workflows.
---
-## ✅ Why Use Functions?
+## ✅ Why Use Functions?
-Functions are designed for anyone who wants to **unlock new possibilities** with Open WebUI:
+Functions are designed for anyone who wants to **unlock new possibilities** with Open WebUI:
-- **Extend**: Add new models or integrate with non-AI tools like APIs, databases, or smart devices.
-- **Optimize**: Tweak inputs and outputs to fit your use case perfectly.
-- **Simplify**: Add buttons or shortcuts to make the interface intuitive and efficient.
+- **Extend**: Add new models or integrate with non-AI tools like APIs, databases, or smart devices.
+- **Optimize**: Tweak inputs and outputs to fit your use case perfectly.
+- **Simplify**: Add buttons or shortcuts to make the interface intuitive and efficient.
Whether you’re customizing workflows for specific projects, integrating external data, or just making Open WebUI easier to use, Functions are the key to taking control of your instance.
---
-### 📝 Final Notes:
-1. Always install Functions from **trusted sources only**.
-2. Make sure you understand the difference between Pipe, Filter, and Action Functions to use them effectively.
-3. Explore the official guides:
- - [Pipe Functions Guide](./pipe.mdx)
- - [Filter Functions Guide](./filter.mdx)
- - [Action Functions Guide](./action.mdx)
+### 📝 Final Notes:
+1. Always install Functions from **trusted sources only**.
+2. Make sure you understand the difference between Pipe, Filter, and Action Functions to use them effectively.
+3. Explore the official guides:
+ - [Pipe Functions Guide](./pipe.mdx)
+ - [Filter Functions Guide](./filter.mdx)
+ - [Action Functions Guide](./action.mdx)
By leveraging Functions, you’ll bring entirely new capabilities to your Open WebUI setup. Start experimenting today! 🚀
\ No newline at end of file
diff --git a/docs/features/plugin/functions/pipe.mdx b/docs/features/plugin/functions/pipe.mdx
index 6e58a2eb68..01361cc783 100644
--- a/docs/features/plugin/functions/pipe.mdx
+++ b/docs/features/plugin/functions/pipe.mdx
@@ -3,10 +3,9 @@ sidebar_position: 1
title: "🚰 Pipe Function"
---
-# 🚰 Pipe Function: Create Custom "Agents/Models"
+# 🚰 Pipe Function: Create Custom "Agents/Models"
Welcome to this guide on creating **Pipes** in Open WebUI! Think of Pipes as a way of **adding** a new model to Open WebUI. In this document, we'll break down what a Pipe is, how it works, and how you can create your own to add custom logic and processing to your Open WebUI models. We'll use clear metaphors and go through every detail to ensure you have a comprehensive understanding.
-
## Introduction to Pipes
Imagine Open WebUI as a **plumbing system** where data flows through pipes and valves. In this analogy:
diff --git a/docs/features/plugin/index.mdx b/docs/features/plugin/index.mdx
index 42b321e1f1..edc3c13c52 100644
--- a/docs/features/plugin/index.mdx
+++ b/docs/features/plugin/index.mdx
@@ -17,7 +17,7 @@ Getting started with Tools and Functions is easy because everything’s already
## What are "Tools" and "Functions"?
-Let's start by thinking of **Open WebUI** as a "base" software that can do many tasks related to using Large Language Models (LLMs). But sometimes, you need extra features or abilities that don't come _out of the box_—this is where **tools** and **functions** come into play.
+Let's start by thinking of **Open WebUI** as a "base" software that can do many tasks related to using Large Language Models (LLMs). But sometimes, you need extra features or abilities that don't come *out of the box*—this is where **tools** and **functions** come into play.
### Tools
@@ -59,7 +59,7 @@ Without functions, these would all be out of reach. But with this framework in O
Functions are not located in the same place as Tools.
- **Tools** are about model access and live in your **Workspace tabs** (where you add models, prompts, and knowledge collections). They can be added by users if granted permissions.
-- **Functions** are about **platform customization** and are found in the **Admin Panel**.
+- **Functions** are about **platform customization** and are found in the **Admin Panel**.
They are configured and managed only by admins who want to extend the platform interface or behavior for all users.
### Summary of Differences:
diff --git a/docs/features/plugin/migration/index.mdx b/docs/features/plugin/migration/index.mdx
index 575515f0f4..633291e5c0 100644
--- a/docs/features/plugin/migration/index.mdx
+++ b/docs/features/plugin/migration/index.mdx
@@ -30,6 +30,7 @@ Here’s an overview of what changed:
#### Example:
```python
+
# Full API flow with parsing (new function):
from open_webui.main import chat_completion
@@ -70,7 +71,7 @@ Follow this guide to smoothly update your project.
---
-### 🔄 1. Shifting from `apps` to `routers`
+### 🔄 1. Shifting from `apps` to `routers`
All apps have been renamed and relocated under `open_webui.routers`. This affects imports in your codebase.
@@ -84,25 +85,23 @@ Quick changes for import paths:
| `open_webui.apps.retrieval` | `open_webui.routers.retrieval` |
| `open_webui.apps.webui` | `open_webui.main` |
+### 📜 An Important Example
-### 📜 An Important Example
-
-To clarify the special case of the main app (`webui`), here’s a simple rule of thumb:
+To clarify the special case of the main app (`webui`), here’s a simple rule of thumb:
-- **Was in `webui`?** It’s now in the project’s root or `open_webui.main`.
-- For example:
- - **Before (0.4):**
- ```python
- from open_webui.apps.webui.models import SomeModel
- ```
- - **After (0.5):**
- ```python
- from open_webui.models import SomeModel
- ```
+- **Was in `webui`?** It’s now in the project’s root or `open_webui.main`.
+- For example:
+ - **Before (0.4):**
+ ```python
+ from open_webui.apps.webui.models import SomeModel
+ ```
+ - **After (0.5):**
+ ```python
+ from open_webui.models import SomeModel
+ ```
In general, **just replace `open_webui.apps` with `open_webui.routers`—except for `webui`, which is now `open_webui.main`!**
-
---
### 👩💻 2. Updating Import Statements
@@ -117,6 +116,7 @@ from open_webui.apps.openai import main as openai
#### After:
```python
+
# Separate router imports
from open_webui.routers.ollama import generate_chat_completion
from open_webui.routers.openai import generate_chat_completion
@@ -125,7 +125,11 @@ from open_webui.routers.openai import generate_chat_completion
from open_webui.main import chat_completion
```
-**💡 Pro Tip:** Prioritize the unified endpoint (`chat_completion`) for simplicity and future compatibility.
+:::tip
+
+Prioritize the unified endpoint (`chat_completion`) for simplicity and future compatibility.
+
+:::
### 📝 **Additional Note: Choosing Between `main.chat_completion` and `utils.chat.generate_chat_completion`**
@@ -143,6 +147,7 @@ Depending on your use case, you can choose between:
#### Example:
```python
+
# Use this for the full API flow with parsing:
from open_webui.main import chat_completion
@@ -152,18 +157,18 @@ from open_webui.utils.chat import generate_chat_completion
---
-### 📋 3. Adapting to Updated Function Signatures
+### 📋 3. Adapting to Updated Function Signatures
We’ve updated the **function signatures** to better fit the new architecture. If you're looking for a direct replacement, start with the lightweight utility function `generate_chat_completion` from `open_webui.utils.chat`. For the full API flow, use the new unified `chat_completion` function in `open_webui.main`.
-#### Function Signature Changes:
+#### Function Signature Changes:
| **Old** | **Direct Successor (New)** | **Unified Option (New)** |
|-----------------------------------------|-----------------------------------------|-----------------------------------------|
| `openai.generate_chat_completion(form_data: dict, user: UserModel)` | `generate_chat_completion(request: Request, form_data: dict, user: UserModel)` | `chat_completion(request: Request, form_data: dict, user: UserModel)` |
-- **Direct Successor (`generate_chat_completion`)**: A lightweight, 1:1 replacement for previous `ollama`/`openai` methods.
-- **Unified Option (`chat_completion`)**: Use this for the complete API flow, including file parsing and additional functionality.
+- **Direct Successor (`generate_chat_completion`)**: A lightweight, 1:1 replacement for previous `ollama`/`openai` methods.
+- **Unified Option (`chat_completion`)**: Use this for the complete API flow, including file parsing and additional functionality.
#### Example:
diff --git a/docs/features/plugin/tools/development.mdx b/docs/features/plugin/tools/development.mdx
index 9c86316082..ef2bb5f478 100644
--- a/docs/features/plugin/tools/development.mdx
+++ b/docs/features/plugin/tools/development.mdx
@@ -3,8 +3,6 @@ sidebar_position: 2
title: "🛠️ Development"
---
-
-
## Writing A Custom Toolkit
Toolkits are defined in a single Python file, with a top level docstring with metadata and a `Tools` class.
@@ -46,7 +44,7 @@ class Tools:
# example usage of valves
if self.valves.api_key != "42":
return "Wrong API key"
- return string[::-1]
+ return string[::-1]
```
### Type Hints
@@ -54,7 +52,7 @@ Each tool must have type hints for arguments. The types may also be nested, such
### Valves and UserValves - (optional, but HIGHLY encouraged)
-Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves](../valves/index.mdx) page.
+Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves](/features/plugin/valves/index.mdx) page.
### Optional Arguments
Below is a list of optional arguments your tools can depend on:
@@ -92,14 +90,14 @@ class Tools:
"""
if not __oauth_token__ or "access_token" not in __oauth_token__:
return "Error: User is not authenticated via OAuth or token is unavailable."
-
+
access_token = __oauth_token__["access_token"]
-
+
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json"
}
-
+
try:
async with httpx.AsyncClient() as client:
response = await client.get("https://api.my-service.com/v1/profile", headers=headers)
@@ -182,14 +180,14 @@ class Tools:
def __init__(self):
# Add a note about function calling mode requirements
self.description = "This tool requires Default function calling mode for full functionality"
-
+
async def interactive_tool(self, prompt: str, __event_emitter__=None) -> str:
"""
⚠️ This tool requires function_calling = "default" for proper event emission
"""
if not __event_emitter__:
return "Event emitter not available - ensure Default function calling mode is enabled"
-
+
# Safe to use message events in Default mode
await __event_emitter__({
"type": "message",
@@ -206,12 +204,12 @@ async def universal_tool(self, prompt: str, __event_emitter__=None, __metadata__
"""
# Check if we're in native mode (this is a rough heuristic)
is_native_mode = __metadata__ and __metadata__.get("params", {}).get("function_calling") == "native"
-
+
if __event_emitter__:
if is_native_mode:
# Use only compatible event types in native mode
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Processing in native mode...", "done": False}
})
else:
@@ -220,15 +218,15 @@ async def universal_tool(self, prompt: str, __event_emitter__=None, __metadata__
"type": "message",
"data": {"content": "Processing with full event support..."}
})
-
+
# ... tool logic here
-
+
if __event_emitter__:
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Completed successfully", "done": True}
})
-
+
return "Tool execution completed"
```
@@ -250,39 +248,39 @@ async def universal_tool(self, prompt: str, __event_emitter__=None, __metadata__
```python
async def debug_events_tool(self, __event_emitter__=None, __metadata__=None) -> str:
"""Debug tool to test event emitter functionality"""
-
+
if not __event_emitter__:
return "No event emitter available"
-
+
# Test various event types
test_events = [
{"type": "status", "data": {"description": "Testing status events", "done": False}},
{"type": "message", "data": {"content": "Testing message events (may not work in native mode)"}},
{"type": "notification", "data": {"content": "Testing notification events"}},
]
-
+
mode_info = "Unknown"
if __metadata__:
mode_info = __metadata__.get("params", {}).get("function_calling", "default")
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": f"Function calling mode: {mode_info}", "done": False}
})
-
+
for i, event in enumerate(test_events):
await asyncio.sleep(1) # Space out events
await __event_emitter__(event)
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": f"Sent event {i+1}/{len(test_events)}", "done": False}
})
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Event testing complete", "done": True}
})
-
+
return f"Event testing completed in {mode_info} mode. Check for missing or flickering content."
```
@@ -299,7 +297,7 @@ Status events add live status updates to a message while it's performing steps.
await __event_emitter__({
"type": "status",
"data": {
- "description": "Message that shows up in the chat",
+ "description": "Message that shows up in the chat",
"done": False, # False = still processing, True = completed
"hidden": False # False = visible, True = auto-hide when done
}
@@ -322,7 +320,7 @@ async def data_processing_tool(
Processes a large data file with status updates
✅ Works in both Default and Native function calling modes
"""
-
+
if not __event_emitter__:
return "Processing completed (no status updates available)"
@@ -331,19 +329,19 @@ async def data_processing_tool(
"type": "status",
"data": {"description": "Loading data file...", "done": False}
})
-
+
# Simulate loading time
await asyncio.sleep(2)
-
+
# Step 2: Processing
await __event_emitter__({
"type": "status",
"data": {"description": "Analyzing 10,000 records...", "done": False}
})
-
+
# Simulate processing time
await asyncio.sleep(3)
-
+
# Step 3: Completion
await __event_emitter__({
"type": "status",
@@ -365,50 +363,50 @@ async def api_integration_tool(
Integrates with external API with comprehensive status tracking
✅ Compatible with both function calling modes
"""
-
+
if not __event_emitter__:
return "API integration completed (no status available)"
-
+
try:
await __event_emitter__({
"type": "status",
"data": {"description": "Connecting to API...", "done": False}
})
-
+
# Simulate API connection
await asyncio.sleep(1.5)
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Authenticating...", "done": False}
})
-
+
# Simulate authentication
await asyncio.sleep(1)
-
+
await __event_emitter__({
"type": "status",
"data": {"description": "Fetching data...", "done": False}
})
-
+
# Simulate data fetching
await asyncio.sleep(2)
-
+
# Success status
await __event_emitter__({
"type": "status",
"data": {"description": "API integration successful", "done": True}
})
-
+
return "Successfully retrieved 150 records from the API"
-
+
except Exception as e:
# Error status - always visible for debugging
await __event_emitter__({
"type": "status",
"data": {"description": f"Error: {str(e)}", "done": True, "hidden": False}
})
-
+
return f"API integration failed: {str(e)}"
```
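The status calls above repeat the same dictionary shape several times. A thin wrapper keeps tools readable (a sketch; `emit_status` is an illustrative helper, not part of the Open WebUI API):

```python
async def emit_status(emitter, description: str, done: bool = False, hidden: bool = False):
    """Emit a status event in the shape shown above; no-op without an emitter."""
    if emitter:
        await emitter({
            "type": "status",
            "data": {"description": description, "done": done, "hidden": hidden},
        })
```

With this in place, each step in the try block reduces to a single line such as `await emit_status(__event_emitter__, "Connecting to API...")`.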
@@ -424,19 +422,19 @@ async def batch_processor_tool(
Processes items in batches with detailed progress tracking
✅ Works perfectly in both function calling modes
"""
-
+
if not __event_emitter__ or not items:
return "Batch processing completed"
-
+
total_items = len(items)
batch_size = 10
completed = 0
-
+
for i in range(0, total_items, batch_size):
batch = items[i:i + batch_size]
batch_num = (i // batch_size) + 1
total_batches = (total_items + batch_size - 1) // batch_size
-
+
# Update status for current batch
await __event_emitter__({
"type": "status",
@@ -445,22 +443,22 @@ async def batch_processor_tool(
"done": False
}
})
-
+
# Simulate batch processing
await asyncio.sleep(1)
-
+
completed += len(batch)
-
+
# Progress update
progress_pct = int((completed / total_items) * 100)
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {
"description": f"Progress: {completed}/{total_items} items ({progress_pct}%)",
"done": False
}
})
-
+
# Final completion status
await __event_emitter__({
"type": "status",
@@ -469,15 +467,19 @@ async def batch_processor_tool(
"done": True
}
})
-
+
return f"Successfully processed {total_items} items in {total_batches} batches"
```
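With very large item counts, a status event per batch can update faster than the UI can usefully render. One option is to rate-limit progress updates by wall-clock time (a sketch; the 0.5-second interval is an arbitrary illustration, not a documented limit):

```python
import time

class StatusThrottle:
    """Gate progress updates so the status line doesn't churn."""

    def __init__(self, min_interval: float = 0.5):
        self.min_interval = min_interval
        self._last_emit = 0.0

    def ready(self) -> bool:
        """Return True at most once per min_interval seconds."""
        now = time.monotonic()
        if now - self._last_emit >= self.min_interval:
            self._last_emit = now
            return True
        return False
```

Guard the per-batch progress event with `if throttle.ready():`, but emit the final `done: True` status unconditionally so completion is never dropped.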
#### Message Events ⚠️ DEFAULT MODE ONLY
+:::warning
+
**🚨 CRITICAL WARNING: Message events are INCOMPATIBLE with Native function calling mode!**
+:::
+
Message events (`message`, `chat:message`, `chat:message:delta`, `replace`) allow you to append or modify message content at any stage during tool execution. This enables embedding images, rendering web pages, streaming content updates, and creating rich interactive experiences.
**However, these event types have major compatibility issues:**
@@ -512,37 +514,37 @@ async def streaming_content_tool(
Streams content updates during processing
⚠️ REQUIRES function_calling = "default" - Will not work in Native mode!
"""
-
+
# Check function calling mode (rough detection)
mode = "unknown"
if __metadata__:
mode = __metadata__.get("params", {}).get("function_calling", "default")
-
+
if mode == "native":
return "❌ This tool requires Default function calling mode. Message streaming is not supported in Native mode due to content overwriting issues."
-
+
if not __event_emitter__:
return "Event emitter not available"
-
+
# Stream progressive content updates
content_chunks = [
"🔍 **Phase 1: Research**\nGathering information about your query...\n\n",
- "📊 **Phase 2: Analysis**\nAnalyzing gathered data patterns...\n\n",
+ "📊 **Phase 2: Analysis**\nAnalyzing gathered data patterns...\n\n",
"✨ **Phase 3: Synthesis**\nGenerating insights and recommendations...\n\n",
"📝 **Phase 4: Final Report**\nCompiling comprehensive results...\n\n"
]
-
+
accumulated_content = ""
-
+
for i, chunk in enumerate(content_chunks):
accumulated_content += chunk
-
+
# Append this chunk to the message
await __event_emitter__({
"type": "message",
"data": {"content": chunk}
})
-
+
# Show progress status
await __event_emitter__({
"type": "status",
@@ -551,16 +553,16 @@ async def streaming_content_tool(
"done": False
}
})
-
+
# Simulate processing time
await asyncio.sleep(2)
-
+
# Final completion
await __event_emitter__({
"type": "status",
"data": {"description": "Content streaming complete!", "done": True}
})
-
+
return "Content streaming completed successfully. All phases processed."
```
@@ -576,10 +578,10 @@ async def live_dashboard_tool(
Creates a live-updating dashboard using content replacement
⚠️ ONLY WORKS in Default function calling mode
"""
-
+
# Verify we're not in Native mode
mode = __metadata__.get("params", {}).get("function_calling", "default") if __metadata__ else "default"
-
+
if mode == "native":
return """
❌ **Native Mode Incompatibility**
@@ -591,44 +593,45 @@ This dashboard tool cannot function in Native mode because:
**Solution:** Switch to Default function calling mode in Model Settings → Advanced Params → Function Calling = "Default"
"""
-
+
if not __event_emitter__:
return "Dashboard created (static mode - no live updates)"
-
+
# Create initial dashboard
initial_dashboard = """
+
# 📊 Live System Dashboard
## System Status: 🟡 Initializing...
### Current Metrics:
- **CPU Usage**: Loading...
-- **Memory**: Loading...
+- **Memory**: Loading...
- **Active Users**: Loading...
- **Response Time**: Loading...
---
*Last Updated: Initializing...*
"""
-
+
await __event_emitter__({
"type": "replace",
"data": {"content": initial_dashboard}
})
-
+
# Simulate live data updates
updates = [
{
"status": "🟢 Online",
- "cpu": "23%",
+ "cpu": "23%",
"memory": "64%",
"users": "1,247",
"response": "145ms"
},
{
- "status": "🟢 Optimal",
+ "status": "🟢 Optimal",
"cpu": "18%",
- "memory": "61%",
+ "memory": "61%",
"users": "1,352",
"response": "132ms"
},
@@ -636,15 +639,16 @@ This dashboard tool cannot function in Native mode because:
"status": "🟡 Busy",
"cpu": "67%",
"memory": "78%",
- "users": "1,891",
+ "users": "1,891",
"response": "234ms"
}
]
-
+
for i, data in enumerate(updates):
await asyncio.sleep(3) # Simulate data collection delay
-
+
updated_dashboard = f"""
+
# 📊 Live System Dashboard
## System Status: {data['status']}
@@ -659,24 +663,24 @@ This dashboard tool cannot function in Native mode because:
*Last Updated: {datetime.now().strftime('%H:%M:%S')}*
*Update {i+1}/{len(updates)}*
"""
-
+
# Replace entire dashboard content
await __event_emitter__({
- "type": "replace",
+ "type": "replace",
"data": {"content": updated_dashboard}
})
-
+
# Status update
await __event_emitter__({
"type": "status",
"data": {"description": f"Dashboard updated ({i+1}/{len(updates)})", "done": False}
})
-
+
await __event_emitter__({
"type": "status",
"data": {"description": "Live dashboard monitoring complete", "done": True}
})
-
+
return "Dashboard monitoring session completed."
```
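Because `replace` swaps the entire message body on every event, it helps to build the full dashboard from a single render function each cycle instead of patching fragments (a sketch; the fields match the example above):

```python
from datetime import datetime

def render_dashboard(data: dict, update_num: int, total: int) -> str:
    """Build the complete dashboard body for one replace event."""
    return f"""
# 📊 Live System Dashboard

## System Status: {data['status']}

### Current Metrics:
- **CPU Usage**: {data['cpu']}
- **Memory**: {data['memory']}
- **Active Users**: {data['users']}
- **Response Time**: {data['response']}

---
*Last Updated: {datetime.now().strftime('%H:%M:%S')}*
*Update {update_num}/{total}*
"""
```

Each update cycle then becomes one `replace` event whose content is `render_dashboard(data, i + 1, len(updates))`.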
@@ -692,15 +696,15 @@ async def adaptive_content_tool(
Adapts its behavior based on the function calling mode
✅ Provides the best possible experience in both modes
"""
-
+
# Detect function calling mode
mode = "default" # Default assumption
if __metadata__:
mode = __metadata__.get("params", {}).get("function_calling", "default")
-
+
if not __event_emitter__:
return f"Generated {content_type} content (no real-time updates available)"
-
+
# Mode-specific behavior
if mode == "native":
# Use only compatible events in Native mode
@@ -708,16 +712,17 @@ async def adaptive_content_tool(
"type": "status",
"data": {"description": f"Generating {content_type} content in Native mode...", "done": False}
})
-
+
await asyncio.sleep(2)
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Content generation complete", "done": True}
})
-
+
# Return content normally - no message events
return f"""
+
# {content_type.title()} Content
**Mode**: Native Function Calling (Limited Event Support)
@@ -726,41 +731,41 @@ Generated content here... This content is returned as the tool result rather tha
*Note: Live content updates are not available in Native mode due to event compatibility limitations.*
"""
-
+
else: # Default mode
# Full message event functionality available
await __event_emitter__({
"type": "status",
"data": {"description": "Generating content with full streaming support...", "done": False}
})
-
- # Stream content progressively
+
+ # Stream content progressively
progressive_content = [
f"# {content_type.title()} Content\n\n**Mode**: Default Function Calling ✅\n\n",
"## Section 1: Introduction\nStreaming content in real-time...\n\n",
- "## Section 2: Details\nAdding detailed information...\n\n",
+ "## Section 2: Details\nAdding detailed information...\n\n",
"## Section 3: Conclusion\nFinalizing content delivery...\n\n",
"*✅ Content streaming completed successfully!*"
]
-
+
for i, chunk in enumerate(progressive_content):
await __event_emitter__({
"type": "message",
"data": {"content": chunk}
})
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": f"Streaming section {i+1}/{len(progressive_content)}...", "done": False}
})
-
+
await asyncio.sleep(1.5)
-
+
await __event_emitter__({
"type": "status",
"data": {"description": "Content streaming complete!", "done": True}
})
-
+
return "Content has been streamed above with full Default mode capabilities."
```
@@ -799,9 +804,13 @@ def __init__(self):
self.citation = False # REQUIRED - prevents automatic citations from overriding custom ones
```
+:::warning
+
**⚠️ Critical Citation Warning:**
If you set `self.citation = True` (or don't set it to `False`), automatic citations will replace any custom citations you send. Always disable automatic citations when using custom citation events.
+:::
+
**Basic Citation Example**
@@ -809,7 +818,7 @@ If you set `self.citation = True` (or don't set it to `False`), automatic citati
class Tools:
def __init__(self):
self.citation = False # Disable automatic citations
-
+
async def research_tool(
self, topic: str, __event_emitter__=None
) -> str:
@@ -817,15 +826,15 @@ class Tools:
Researches a topic and provides proper citations
✅ Works identically in both Default and Native modes
"""
-
+
if not __event_emitter__:
return "Research completed (citations not available)"
-
+
# Simulate research findings
sources = [
{
"title": "Advanced AI Systems",
- "url": "https://example.com/ai-systems",
+ "url": "https://example.com/ai-systems",
"content": "Artificial intelligence systems have evolved significantly...",
"author": "Dr. Jane Smith",
"date": "2024-03-15"
@@ -833,12 +842,12 @@ class Tools:
{
"title": "Machine Learning Fundamentals",
"url": "https://example.com/ml-fundamentals",
- "content": "The core principles of machine learning include...",
+ "content": "The core principles of machine learning include...",
"author": "Prof. John Doe",
"date": "2024-02-20"
}
]
-
+
# Emit citations for each source
for source in sources:
await __event_emitter__({
@@ -855,12 +864,12 @@ class Tools:
}
],
"source": {
- "name": source["title"],
+ "name": source["title"],
"url": source["url"]
}
}
})
-
+
return f"Research on '{topic}' completed. Found {len(sources)} relevant sources with detailed citations."
```
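Tools that emit many citations can wrap the event shape once (a sketch; `emit_citation` is an illustrative helper, with metadata fields following the examples in this section):

```python
from datetime import datetime

async def emit_citation(emitter, title: str, url: str, content: str):
    """Emit a single citation event in the shape shown above."""
    await emitter({
        "type": "citation",
        "data": {
            "document": [content],
            "metadata": [{
                "date_accessed": datetime.now().isoformat(),
                "source": title,
            }],
            "source": {"name": title, "url": url},
        },
    })
```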
@@ -876,17 +885,17 @@ async def comprehensive_analysis_tool(
Performs comprehensive analysis with multiple source types
✅ Full compatibility across all function calling modes
"""
-
+
if not __event_emitter__:
return "Analysis completed"
-
+
# Multiple source types with rich metadata
research_sources = {
"academic": [
{
"title": "Neural Network Architecture in Modern AI",
"authors": ["Dr. Sarah Chen", "Prof. Michael Rodriguez"],
- "journal": "Journal of AI Research",
+ "journal": "Journal of AI Research",
"volume": "Vol. 45, Issue 2",
"pages": "123-145",
"doi": "10.1000/182",
@@ -898,7 +907,7 @@ async def comprehensive_analysis_tool(
{
"title": "Industry AI Implementation Trends",
"url": "https://tech-insights.com/ai-trends-2024",
- "site_name": "TechInsights",
+ "site_name": "TechInsights",
"published": "2024-03-01",
"content": "Recent industry surveys show that 78% of companies are implementing AI solutions..."
}
@@ -913,9 +922,9 @@ async def comprehensive_analysis_tool(
}
]
}
-
+
citation_count = 0
-
+
# Process academic sources
for source in research_sources["academic"]:
citation_count += 1
@@ -929,7 +938,7 @@ async def comprehensive_analysis_tool(
"source": source["title"],
"authors": source["authors"],
"journal": source["journal"],
- "volume": source["volume"],
+ "volume": source["volume"],
"pages": source["pages"],
"doi": source["doi"],
"publication_date": source["date"],
@@ -937,17 +946,17 @@ async def comprehensive_analysis_tool(
}
],
"source": {
- "name": f"{source['title']} - {source['journal']}",
+ "name": f"{source['title']} - {source['journal']}",
"url": f"https://doi.org/{source['doi']}"
}
}
})
-
+
# Process web sources
for source in research_sources["web_sources"]:
citation_count += 1
await __event_emitter__({
- "type": "citation",
+ "type": "citation",
"data": {
"document": [source["content"]],
"metadata": [
@@ -966,7 +975,7 @@ async def comprehensive_analysis_tool(
}
}
})
-
+
# Process reports
for source in research_sources["reports"]:
citation_count += 1
@@ -979,7 +988,7 @@ async def comprehensive_analysis_tool(
"date_accessed": datetime.now().isoformat(),
"source": source["title"],
"organization": source["organization"],
- "report_number": source["report_number"],
+ "report_number": source["report_number"],
"publication_date": source["date"],
"type": "research_report"
}
@@ -990,14 +999,15 @@ async def comprehensive_analysis_tool(
}
}
})
-
+
return f"""
+
# Analysis Complete
Comprehensive analysis of '{query}' has been completed using {citation_count} authoritative sources:
- **{len(research_sources['academic'])}** Academic journal articles
-- **{len(research_sources['web_sources'])}** Industry web sources
+- **{len(research_sources['web_sources'])}** Industry web sources
- **{len(research_sources['reports'])}** Research reports
All sources have been properly cited and are available for review by clicking the citation links above.
@@ -1016,28 +1026,28 @@ async def database_query_tool(
Queries database and provides data citations
✅ Works perfectly in both function calling modes
"""
-
+
if not __event_emitter__:
return "Database query executed"
-
+
# Simulate database results with citation metadata
query_results = [
{
"record_id": "USR_001247",
"data": "John Smith, Software Engineer, joined 2023-01-15",
- "table": "employees",
+ "table": "employees",
"last_updated": "2024-03-10T14:30:00Z",
"updated_by": "admin_user"
},
{
- "record_id": "USR_001248",
+ "record_id": "USR_001248",
"data": "Jane Wilson, Product Manager, joined 2023-02-20",
"table": "employees",
- "last_updated": "2024-03-08T09:15:00Z",
+ "last_updated": "2024-03-08T09:15:00Z",
"updated_by": "hr_system"
}
]
-
+
# Create citations for each database record
for i, record in enumerate(query_results):
await __event_emitter__({
@@ -1061,15 +1071,16 @@ async def database_query_tool(
}
}
})
-
+
return f"""
+
# Database Query Results
Executed query: `{sql_query}`
Retrieved **{len(query_results)}** records with complete citation metadata. Each record includes:
- Record ID and source table
-- Last modification timestamp
+- Last modification timestamp
- Update attribution
- Full audit trail
@@ -1090,7 +1101,7 @@ await __event_emitter__({
})
```
-**File Events**
+**File Events**
```python
await __event_emitter__({
"type": "files", # or "chat:message:files"
@@ -1101,7 +1112,7 @@ await __event_emitter__({
**Follow-up Events**
```python
await __event_emitter__({
- "type": "chat:message:follow_ups",
+ "type": "chat:message:follow_ups",
"data": {"follow_ups": ["What about X?", "Tell me more about Y"]}
})
```
@@ -1117,13 +1128,13 @@ await __event_emitter__({
**Tag Events**
```python
await __event_emitter__({
- "type": "chat:tags",
+ "type": "chat:tags",
"data": {"tags": ["research", "analysis", "completed"]}
})
```
**Error Events**
-```python
+```python
await __event_emitter__({
"type": "chat:message:error",
"data": {"content": "Error message to display"}
@@ -1141,7 +1152,7 @@ await __event_emitter__({
**Input Request Events**
```python
await __event_emitter__({
- "type": "input",
+ "type": "input",
"data": {"prompt": "Please enter additional information:"}
})
```
@@ -1202,7 +1213,7 @@ Choosing the right function calling mode is crucial for your tool's functionalit
- Educational tools that show step-by-step processes
- Any tool that needs `message`, `replace`, or `chat:message` events
-**Choose Native Mode For:**
+**Choose Native Mode For:**
- Simple API calls or database queries
- Basic calculations or data transformations
- Tools that only need status updates and citations
@@ -1218,64 +1229,64 @@ async def mode_adaptive_tool(
Tool that adapts its behavior based on function calling mode
✅ Provides optimal experience in both modes
"""
-
+
# Detect current mode
mode = "default"
if __metadata__:
mode = __metadata__.get("params", {}).get("function_calling", "default")
-
+
is_native_mode = (mode == "native")
-
+
if not __event_emitter__:
return "Tool executed successfully (no event support)"
-
+
# Always safe: status updates work in both modes
await __event_emitter__({
"type": "status",
"data": {"description": f"Running in {mode} mode...", "done": False}
})
-
+
# Mode-specific logic
if is_native_mode:
# Native mode: use compatible events only
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Processing with native efficiency...", "done": False}
})
-
+
# Simulate processing
await asyncio.sleep(1)
-
+
# Return results directly - no message streaming
result = f"Query '{query}' processed successfully in Native mode."
-
+
else:
- # Default mode: full event capabilities
+ # Default mode: full event capabilities
await __event_emitter__({
"type": "message",
"data": {"content": f"🔍 **Processing Query**: {query}\n\n"}
})
-
+
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Analyzing with full streaming...", "done": False}
})
-
+
await asyncio.sleep(1)
-
+
await __event_emitter__({
"type": "message",
"data": {"content": "📊 **Results**: Analysis complete with detailed findings.\n\n"}
})
-
+
result = "Query processed with full Default mode capabilities."
-
+
# Final status (works in both modes)
await __event_emitter__({
"type": "status",
"data": {"description": "Processing complete!", "done": True}
})
-
+
return result
```
@@ -1289,7 +1300,7 @@ async def mode_adaptive_tool(
- **Cause**: Using message events in Native mode
- **Solution**: Switch to Default mode or use status events instead
-**Issue: Tool seems unresponsive**
+**Issue: Tool seems unresponsive**
- **Cause**: Function calling not enabled for model
- **Solution**: Enable tools in Model settings or via `+` button
@@ -1298,7 +1309,7 @@ async def mode_adaptive_tool(
- **Solution**: Ensure parameter is included in tool method signature
**Issue: Citations being overwritten**
-- **Cause**: `self.citation = True` (or not set to False)
+- **Cause**: `self.citation = True` (or not set to False)
- **Solution**: Set `self.citation = False` in `__init__` method
**Diagnostic Tool:**
@@ -1309,33 +1320,33 @@ async def event_diagnostics_tool(
"""
Comprehensive diagnostic tool for event emitter debugging
"""
-
+
report = ["# 🔍 Event Emitter Diagnostic Report\n"]
-
+
# Check event emitter availability
if __event_emitter__:
report.append("✅ Event emitter is available\n")
else:
report.append("❌ Event emitter is NOT available\n")
return "".join(report)
-
- # Check metadata availability
+
+ # Check metadata availability
if __metadata__:
mode = __metadata__.get("params", {}).get("function_calling", "default")
report.append(f"✅ Function calling mode: **{mode}**\n")
else:
report.append("⚠️ Metadata not available (mode unknown)\n")
mode = "unknown"
-
+
# Check user context
if __user__:
report.append("✅ User context available\n")
else:
report.append("⚠️ User context not available\n")
-
+
# Test compatible events (work in both modes)
report.append("\n## Testing Compatible Events:\n")
-
+
try:
await __event_emitter__({
"type": "status",
@@ -1344,19 +1355,19 @@ async def event_diagnostics_tool(
report.append("✅ Status events: WORKING\n")
except Exception as e:
report.append(f"❌ Status events: FAILED - {str(e)}\n")
-
+
try:
await __event_emitter__({
- "type": "notification",
+ "type": "notification",
"data": {"content": "Test notification"}
})
report.append("✅ Notification events: WORKING\n")
except Exception as e:
report.append(f"❌ Notification events: FAILED - {str(e)}\n")
-
+
# Test problematic events (broken in Native mode)
report.append("\n## Testing Mode-Dependent Events:\n")
-
+
try:
await __event_emitter__({
"type": "message",
@@ -1365,20 +1376,20 @@ async def event_diagnostics_tool(
report.append("✅ Message events: SENT (may disappear in Native mode)\n")
except Exception as e:
report.append(f"❌ Message events: FAILED - {str(e)}\n")
-
+
# Final status
await __event_emitter__({
- "type": "status",
+ "type": "status",
"data": {"description": "Diagnostic complete", "done": True}
})
-
+
# Mode-specific recommendations
report.append("\n## Recommendations:\n")
-
+
if mode == "native":
report.append("""
⚠️ **Native Mode Detected**: Limited event support
-- ✅ Use: status, citation, notification, files events
+- ✅ Use: status, citation, notification, files events
- ❌ Avoid: message, replace, chat:message events
- 💡 Switch to Default mode for full functionality
""")
@@ -1394,7 +1405,7 @@ async def event_diagnostics_tool(
- Ensure function calling is enabled
- Verify model supports tool calling
""")
-
+
return "".join(report)
```
@@ -1404,6 +1415,7 @@ async def event_diagnostics_tool(
**Always Compatible (Both Modes):**
```python
+
# Status updates - perfect for progress tracking
await __event_emitter__({
"type": "status",
@@ -1412,14 +1424,14 @@ await __event_emitter__({
# Citations - essential for source attribution
await __event_emitter__({
- "type": "citation",
+ "type": "citation",
"data": {
"document": ["Content"],
"source": {"name": "Source", "url": "https://example.com"}
}
})
-# Notifications - user alerts
+# Notifications - user alerts
await __event_emitter__({
"type": "notification",
"data": {"content": "Task completed!"}
@@ -1428,24 +1440,25 @@ await __event_emitter__({
**Default Mode Only (Broken in Native):**
```python
+
# ⚠️ These will flicker/disappear in Native mode
-# Progressive content streaming
+# Progressive content streaming
await __event_emitter__({
- "type": "message",
+ "type": "message",
"data": {"content": "Streaming content..."}
})
# Content replacement
await __event_emitter__({
"type": "replace",
- "data": {"content": "New complete content"}
+ "data": {"content": "New complete content"}
})
# Delta updates
await __event_emitter__({
"type": "chat:message:delta",
- "data": {"content": "Additional content"}
+ "data": {"content": "Additional content"}
})
```
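A defensive pattern follows directly from this split: route content through a status event whenever Native mode is detected, so nothing is silently lost (a sketch reusing the `params.function_calling` heuristic from earlier; `safe_content` is an illustrative name):

```python
async def safe_content(emitter, metadata, content: str):
    """Stream content in Default mode; fall back to a status update in Native mode."""
    mode = (metadata or {}).get("params", {}).get("function_calling", "default")
    if mode == "native":
        # message/replace events would flicker or disappear here
        await emitter({
            "type": "status",
            "data": {"description": content, "done": False},
        })
    else:
        await emitter({"type": "message", "data": {"content": content}})
```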
@@ -1487,7 +1500,7 @@ from fastapi.responses import HTMLResponse
def create_visualization_tool(self, data: str) -> HTMLResponse:
"""
Creates an interactive data visualization that embeds in the chat.
-
+
:param data: The data to visualize
"""
html_content = """
@@ -1509,7 +1522,7 @@ def create_visualization_tool(self, data: str) -> HTMLResponse: