diff --git a/api-reference/action.mdx b/api-reference/action.mdx index e21c0dd..1eaf9ed 100644 --- a/api-reference/action.mdx +++ b/api-reference/action.mdx @@ -1,49 +1,188 @@ --- -title: "Action" +title: Action --- -The `Action` class is designed to create and manage actions to be sent and displayed in the chatbot user interface. Actions consist of buttons that the user can interact with, and these interactions trigger specific functionalities within your app. +The `Action` class is designed to create and manage interactive buttons within the Chainlit user interface. These actions allow users to trigger specific functionalities in your application, and their interactions are handled by `@cl.action_callback` decorated functions. -## Attributes +## `cl.Action` Properties - - Name of the action, this should match the action callback. - +The `cl.Action` class has the following properties: - - The payload associated with the action. + + A unique string identifier for the action. This is automatically generated using `uuid.uuid4()` and should not be set manually. - - The lucide icon name for the action button. See https://lucide.dev/icons/. + + A string that identifies the action. This name is crucial as it links the action to its corresponding `@cl.action_callback` function. - The label of the action. This is what the user will see. If no label and no icon is provided, the name is display as a fallback. + The text displayed on the action button in the UI. If no label is provided, the `name` will be used as a fallback. + + + + A dictionary containing parameters to be passed to the action's callback function when it is triggered. This property replaced the `value` field in version 2.0.0. - The description of the action. This is what the user will see when they hover - the action. + A string that provides additional information when the user hovers over the action button. + + + + The name of a Lucid icon to be displayed alongside the action label. This field was added in version 2.0.0. + + + + An optional string used internally to associate an action with a specific message or element. This should not be set manually. ## Usage +### Defining Actions and Callbacks + +Actions are defined using the `cl.Action` class and can be attached to messages. The `name` property of the `Action` object corresponds to the name used in the `@cl.action_callback` decorator. + ```python import chainlit as cl -@cl.action_callback("action_button") -async def on_action(action): - await cl.Message(content=f"Executed {action.name}").send() - # Optionally remove the action button from the chatbot user interface - await action.remove() +@cl.action_callback("test action") +async def on_test_action(action: cl.Action): + await cl.Message(content=f"Executed {action.name} with payload: {action.payload['value']}!").send() @cl.on_chat_start async def start(): - # Sending an action button within a chatbot message actions = [ - cl.Action(name="action_button", payload={"value": "example_value"}, label="Click me!") + cl.Action(id="test-action-1", name="test action", payload={"value": "test1"}, label="Test Action 1"), + cl.Action(id="test-action-2", name="test action", payload={"value": "test2"}, label="Test Action 2", tooltip="This is a tooltip", icon="Settings"), ] + await cl.Message("Hello, this is a test message with actions!", actions=actions).send() +``` +In this example, clicking either "Test Action 1" or "Test Action 2" will trigger the `on_test_action` callback. 
The `action` object passed to the callback will contain the `payload` specific to the clicked button. - await cl.Message(content="Interact with this action button:", actions=actions).send() +### Asking the User for an Action + +You can use `cl.AskActionMessage` to prompt the user to select an action, blocking further code execution until an action is chosen or a timeout occurs. + +```python +import chainlit as cl + +@cl.action_callback("first_action") +async def on_first_action(action: cl.Action): + await cl.Message(content=f"You chose: {action.label}").send() + +@cl.action_callback("second_action") +async def on_second_action(action: cl.Action): + await cl.Message(content=f"You chose: {action.label}").send() + +@cl.on_chat_start +async def start(): + result = await cl.AskActionMessage( + content="Please, pick an action!", + actions=[ + cl.Action( + id="first-action-id", + name="first_action", + payload={"value": "first-action"}, + label="First action", + ), + cl.Action( + id="second-action-id", + name="second_action", + payload={"value": "second-action"}, + label="Second action", + ), + ], + ).send() + + if result is not None: + await cl.Message(f"Thanks for pressing: {result['payload']['value']}").send() ``` +The `result` from `send()` will be an `AskActionResponse` dictionary containing the `name`, `payload`, `label`, `tooltip`, `forId`, and `id` of the selected action. + +## Action Lifecycle and Removal + +Actions have a lifecycle that includes creation, display, execution, and removal. You can programmatically remove actions from the UI. + +### Removing Individual Actions + +To remove an individual action, call the `remove()` method on an `Action` object within an `@cl.action_callback` function. + +```python +import chainlit as cl + +@cl.action_callback("removable action") +async def on_removable_action(action: cl.Action): + await cl.Message(content="Executed removable action!").send() + await action.remove() # This removes the specific action button from the UI + +@cl.on_chat_start +async def start(): + actions = [ + cl.Action(id="removable-action-id", name="removable action", payload={"value": "remove_me"}, label="Remove Me") + ] + await cl.Message("Click the button to remove it:", actions=actions).send() +``` + +### Removing All Actions from a Message + +You can remove all actions associated with a specific message by calling the `remove_actions()` method on a `Message` object. + +```python +import chainlit as cl + +@cl.action_callback("all actions removed") +async def on_all_actions_removed(_: cl.Action): + await cl.Message(content="All actions have been removed!").send() + # Retrieve the message from user session (assuming it was stored) + message_with_actions = cl.user_session.get("message_with_actions") + if message_with_actions: + await message_with_actions.remove_actions() # Removes all action buttons from this message + +@cl.on_chat_start +async def start(): + actions = [ + cl.Action(id="action-1", name="all actions removed", payload={"value": "action_1"}, label="Action 1"), + cl.Action(id="action-2", name="all actions removed", payload={"value": "action_2"}, label="Action 2") + ] + message = cl.Message("Click any button to remove all actions:", actions=actions) + cl.user_session.set("message_with_actions", message) # Store message for later removal + await message.send() +``` + +### Multiple Actions with Same Callback + +Multiple `cl.Action` instances can share the same `name` and thus trigger the same callback function. 
The `action` object passed to the callback contains the unique `id` and `payload` of the specific action that was triggered, allowing you to differentiate between them. + +```python +import chainlit as cl + +@cl.action_callback("shared_callback") +async def on_shared_callback(action: cl.Action): + await cl.Message(content=f"Action '{action.label}' (ID: {action.id}) was clicked with payload: {action.payload['value']}").send() + await action.remove() + +@cl.on_chat_start +async def start(): + actions = [ + cl.Action(id="option-a", name="shared_callback", payload={"value": "option_a"}, label="Option A"), + cl.Action(id="option-b", name="shared_callback", payload={"value": "option_b"}, label="Option B"), + ] + await cl.Message("Choose an option:", actions=actions).send() +``` + +## Global Actions + +You can also define global actions that are always available to the user. These actions are typically displayed in the chat input area. + +```python +import chainlit as cl + +@cl.action_callback("global_action") +async def on_global_action(action: cl.Action): + await cl.Message(content=f"Executed global action: {action.label}").send() + +@cl.on_chat_start +async def start(): + # Sending a global action. It will appear in the chat input area. + cl.Action(id="global-action-id", name="global_action", payload={"value": "global_example"}, label="Global Action").send() +``` \ No newline at end of file diff --git a/api-reference/author-rename.mdx b/api-reference/author-rename.mdx index aa808f4..1937833 100644 --- a/api-reference/author-rename.mdx +++ b/api-reference/author-rename.mdx @@ -1,70 +1,64 @@ --- -title: "author_rename and Message author" +title: "Author Rename" --- -This documentation covers two methods for setting or renaming the author of a message to display more friendly author names in the UI: the `author_rename` decorator and the Message author specification at message creation. +In Chainlit, you can customize the author name displayed in the UI for messages. This is useful for replacing default or technical author names (like "LLMMathChain" or "Chatbot") with more user-friendly names (like "Albert Einstein" or "Assistant"). -## Method 1: author_rename +There are two primary ways to control the author name: -Useful for renaming the author of a message dynamically during the message handling process. +1. **`@cl.author_rename` decorator**: A global way to dynamically rename authors for all messages. +2. **Directly on the `Message` object**: Set the author for a specific message when you create it. -## Parameters +## 1. @cl.author_rename Decorator - - The original author name. - +This decorator allows you to define a function that will be called for every message to determine the author's name. This is the recommended approach for consistent author renaming across your application. -## Returns +Your decorated function should accept the original author name as a string and return the new author name as a string. This decorated function is automatically registered in `config.code.author_rename`. The function can be synchronous or asynchronous. - - The renamed author - +**Note:** While the `rename_author` function only receives the author name, you can access the current chat session context (e.g., user information, thread ID) by explicitly importing and using `chainlit.context.context` or `chainlit.user_session` within your function. 
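+
+For example, here is a minimal sketch of reading the user session inside the rename function (this assumes [authentication](/authentication) is enabled and that the user `metadata` contains a `role` key):
+
+```python
+import chainlit as cl
+
+@cl.author_rename
+def rename_author(original_author: str):
+    # The "user" key is only populated when authentication is enabled
+    user = cl.user_session.get("user")
+    if user and user.metadata.get("role") == "ADMIN":
+        return f"{original_author} (admin)"
+    return original_author
+```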
-## Usage +### Usage ```python -from langchain import OpenAI, LLMMathChain import chainlit as cl - +# Synchronous example +@cl.author_rename +def rename_author(original_author: str): + rename_dict = { + "LLMMathChain": "Albert Einstein", + "Chatbot": "Assistant" + } + return rename_dict.get(original_author, original_author) + +# Asynchronous example @cl.author_rename -def rename(orig_author: str): - rename_dict = {"LLMMathChain": "Albert Einstein", "Chatbot": "Assistant"} - return rename_dict.get(orig_author, orig_author) +async def rename_author_async(original_author: str): + if original_author == "AI": + return "Assistant" + return original_author +# This renaming applies to all message types, including `cl.Message`, `cl.AskUserMessage`, `cl.AskFileMessage`, and `cl.AskActionMessage`. @cl.on_message async def main(message: cl.Message): - llm = OpenAI(temperature=0) - llm_math = LLMMathChain.from_llm(llm=llm) - res = await llm_math.acall(message.content, callbacks=[cl.AsyncLangchainCallbackHandler()]) - - await cl.Message(content="Hello").send() + # The author of this message will be renamed by the decorator + await cl.Message(content="Hello from the default author!").send() ``` +## 2. Message Author Parameter -## Method 2: Message author - -Allows for naming the author of a message at the moment of the message creation. +You can also specify the author for a single message directly by using the `author` parameter when creating a `cl.Message` instance. This will override any global renaming for that specific message. ### Usage -You can specify the author directly when creating a new message object: - ```python -from langchain import OpenAI, LLMMathChain import chainlit as cl @cl.on_message async def main(message: cl.Message): - llm = OpenAI(temperature=0) - llm_math = LLMMathChain.from_llm(llm=llm) - res = await llm_math.acall(message.content, callbacks=[cl.AsyncLangchainCallbackHandler()]) - # Specify the author at message creation - response_message = cl.Message(content="Hello", author="NewChatBotName") - await response_message.send() -``` - - - + await cl.Message( + content="Hello from a specific author!", + author="CustomBot" + ).send()) \ No newline at end of file diff --git a/api-reference/cache.mdx b/api-reference/cache.mdx index c53c4d3..43d0dc6 100644 --- a/api-reference/cache.mdx +++ b/api-reference/cache.mdx @@ -1,39 +1,110 @@ --- -title: "cache" +title: "Cache" --- -The `cache` decorator is a tool for caching results of resource-intensive calculations or loading processes. It can be conveniently combined with the [file watcher](/backend/command-line) to prevent resource reloading each time the application restarts. This not only saves time, but also enhances overall efficiency. +Chainlit provides caching mechanisms to significantly improve the performance and efficiency of your applications by reducing redundant computations and external API calls. This is particularly useful for: -## Parameters +* **Avoiding reloading expensive resources**: During development or frequent application reloads, caching prevents re-execution of computationally intensive functions. +* **Enabling third-party caching**: Integrations with libraries like LangChain can leverage caching to store results of LLM calls, reducing latency and costs. - - The target function whose results need to be cached. - +## `@cl.cache` Decorator -## Returns +The `@cl.cache` decorator provides an in-memory caching mechanism for Python functions. 
It memoizes function results, returning a cached value if the function is called again with the same arguments, thus avoiding re-execution. - - The computed value that is stored in the cache after its initial calculation. - +### How it Works -## Usage +This type of caching does not require explicit configuration in `config.toml`. + +When a function decorated with `@cl.cache` is called: + +1. A unique cache key is generated based on the function's name, arguments, and keyword arguments. +2. The decorator checks if this key exists in a global in-memory dictionary. +3. If the key is found, the cached result is returned immediately. +4. If the key is not found, the original function is executed, its result is stored in the cache with the generated key, and then returned. + +Access to the in-memory cache is protected by a `threading.Lock()` to ensure thread-safe access, ensuring reliable operation in concurrent environments. + +### Parameters + +The `@cl.cache` decorator itself **does not accept any parameters**. + +### Best Practices + +Apply `@cl.cache` to functions that are: + +* **Idempotent**: Calling the function multiple times with the same arguments produces the same result. +* **Computationally Expensive**: Functions that take a long time to execute. +* **Called Frequently**: Functions that are invoked many times with potentially repetitive arguments. + +### Limitations + +There is no explicit mechanism to invalidate or clear the in-memory cache managed by `@cl.cache` during runtime. The cache persists for the lifetime of the application process. + +### Usage ```python -import time import chainlit as cl +from chainlit.cache import cache -@cl.cache -def to_cache(): - time.sleep(5) # Simulate a time-consuming process - return "Hello!" - -value = to_cache() +@cache +def expensive_function(arg1: str, arg2: str) -> str: + # Simulate a time-consuming operation + import time + time.sleep(2) + return f"Result for {arg1}, {arg2}" @cl.on_message async def main(message: cl.Message): - await cl.Message( - content=value, - ).send() + # The first call will execute the function and cache the result + result1 = expensive_function("input1", "input2") + await cl.Message(content=f"First call: {result1}").send() + + # The second call with the same arguments will return the cached result instantly + result2 = expensive_function("input1", "input2") + await cl.Message(content=f"Second call: {result2}").send() + + # A call with different arguments will execute the function again and cache a new result + result3 = expensive_function("input3", "input4") + await cl.Message(content=f"Third call: {result3}").send() ``` -In this example, the `to_cache` function simulates a time-consuming process that returns a value. By using the `cl.cache` decorator, the result of the function is cached after its first execution. Future calls to the `to_cache` function return the cached value without running the time-consuming process again. +## LangChain Caching + +Chainlit integrates with third-party libraries like LangChain to provide caching for their operations, particularly for LLM (Large Language Model) calls. This significantly reduces the need to re-query LLMs for identical prompts, leading to faster responses and reduced API costs. + +### Mechanism + +When enabled, Chainlit initializes LangChain's `SQLiteCache`, storing LLM call results in a local SQLite database. The path to this database can be configured. 
This initialization occurs during Chainlit application startup, and only happens when caching is enabled (`config.project.cache` is `True`) and it has not been explicitly disabled at runtime (`config.run.no_cache` is `False`).
+
+### Configuration
+
+LangChain caching can be configured via your `.chainlit/config.toml` file or through command-line options:
+
+* **`config.toml`**: Under the `[project]` section, set `cache = true` to enable LangChain caching. The SQLite cache database is created inside your project's `.chainlit` folder by default (its location is exposed as `lc_cache_path` in the Chainlit configuration).
+
+    ```toml
+    [project]
+    cache = true
+    ```
+
+* **CLI Option**: You can disable LangChain caching at runtime using the `--no-cache` flag when running your Chainlit application:
+
+    ```bash
+    chainlit run your_app.py --no-cache
+    ```
+
+## Performance Implications
+
+Strategic use of caching in Chainlit applications offers substantial performance benefits:
+
+* **Reduced Latency**: Serving results from cache improves response times, especially for expensive computations or LLM calls.
+* **Lower API Costs**: Caching LLM responses reduces the number of external API calls.
+* **Improved User Experience**: Faster interactions make the application feel more responsive.
+* **Efficient Resource Utilization**: Redundant processing is avoided, freeing up computational resources.
+
+## Limitations and Considerations
+
+- The `@cl.cache` decorator uses an in-memory cache, so all cached data is lost when your Chainlit application restarts.
+- The in-memory cache has no size limit, so be mindful of memory consumption when caching large objects.
+- The LangChain cache is persistent because it uses a SQLite database file.
\ No newline at end of file
diff --git a/api-reference/chat-profiles.mdx b/api-reference/chat-profiles.mdx
index e7dfe0b..551bf6c 100644
--- a/api-reference/chat-profiles.mdx
+++ b/api-reference/chat-profiles.mdx
@@ -2,26 +2,30 @@
 title: "Chat Profiles"
 ---
 
-Decorator to define the list of chat profiles.
+Chat Profiles allow you to define different configurations for your conversational AI, giving users the ability to choose from various agents or settings at the start of a chat. This is useful for offering different models (e.g., GPT-3.5 vs. GPT-4), enabling or disabling specific features, or customizing the UI.
 
-If authentication is enabled, you can access the user details to create the list of chat profiles conditionally.
+You define Chat Profiles using the `@cl.set_chat_profiles` decorator, which decorates an asynchronous function that returns a list of `cl.ChatProfile` objects.
 
-The icon is optional.
+## `ChatProfile` Class
 
-## Parameters
+A `cl.ChatProfile` object has the following attributes:
 
-
-  The message coming from the UI.
-
+- `name` (str): The unique name of the profile.
+- `markdown_description` (str): A description of the profile that supports Markdown.
+- `icon` (str, optional): A URL to an icon for the profile.
+- `starters` (list, optional): A list of `cl.Starter` objects for predefined conversation starters (see the sketch after this list).
+- `default` (bool, optional): A boolean indicating if this profile should be selected by default.
+- `config_overrides` (ChainlitConfigOverrides, optional): A way to override global `config.toml` settings for this specific profile.
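+
+For instance, a minimal sketch of a profile that is selected by default and ships with predefined starters (the starter label and message below are placeholders):
+
+```python
+import chainlit as cl
+
+@cl.set_chat_profiles
+async def chat_profiles():
+    return [
+        cl.ChatProfile(
+            name="GPT-4",
+            markdown_description="The underlying LLM model is **GPT-4**.",
+            default=True,  # pre-selected when the user starts a new chat
+            starters=[
+                cl.Starter(
+                    label="Summarize a text",
+                    message="Summarize the following text for me:",
+                ),
+            ],
+        ),
+    ]
+```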
-## Usage +## Basic Usage -```python Simple example -import chainlit as cl +Here is a simple example of how to define two chat profiles: +```python +import chainlit as cl @cl.set_chat_profiles -async def chat_profile(): +async def chat_profiles(): return [ cl.ChatProfile( name="GPT-3.5", @@ -34,91 +38,94 @@ async def chat_profile(): icon="https://picsum.photos/250", ), ] +``` + +## Accessing the Current Profile + +You can access the currently selected chat profile from the `user_session`. +```python @cl.on_chat_start async def on_chat_start(): chat_profile = cl.user_session.get("chat_profile") await cl.Message( - content=f"starting chat using the {chat_profile} chat profile" + content=f"Starting chat using the {chat_profile} chat profile!" ).send() ``` -```python With authentication -from typing import Optional +## Conditional Profiles with Authentication -import chainlit as cl +The function decorated with `@cl.set_chat_profiles` can optionally receive the `current_user` object if authentication is enabled. This allows you to show different profiles to different users. +```python +from typing import Optional +import chainlit as cl @cl.set_chat_profiles -async def chat_profile(current_user: cl.User): - if current_user.metadata["role"] != "ADMIN": +async def chat_profiles(current_user: cl.User): + # Only show profiles to admin users + if current_user.metadata.get("role") != "ADMIN": return None return [ cl.ChatProfile( - name="GPT-3.5", - markdown_description="The underlying LLM model is **GPT-3.5**, a *175B parameter model* trained on 410GB of text data.", - ), - cl.ChatProfile( - name="GPT-4", - markdown_description="The underlying LLM model is **GPT-4**, a *1.5T parameter model* trained on 3.5TB of text data.", - icon="https://picsum.photos/250", - ), - cl.ChatProfile( - name="GPT-5", - markdown_description="The underlying LLM model is **GPT-5**.", - icon="https://picsum.photos/200", + name="Admin Profile", + markdown_description="A special profile only for admins.", ), ] - @cl.password_auth_callback def auth_callback(username: str, password: str) -> Optional[cl.User]: if (username, password) == ("admin", "admin"): return cl.User(identifier="admin", metadata={"role": "ADMIN"}) else: return None - - -@cl.on_chat_start -async def on_chat_start(): - user = cl.user_session.get("user") - chat_profile = cl.user_session.get("chat_profile") - await cl.Message( - content=f"starting chat with {user.identifier} using the {chat_profile} chat profile" - ).send() ``` -## Dynamic Configuration +## Dynamic Configuration Overrides -You can override the global `config.toml` for specific ChatProfiles by configuring overrides +You can override the global `config.toml` for specific Chat Profiles. This is powerful for enabling or disabling features on the fly. ```python from chainlit.config import ( ChainlitConfigOverrides, FeaturesSettings, - McpFeature, UISettings, ) @cl.set_chat_profiles -async def chat_profile(current_user: cl.User): +async def chat_profiles(current_user: cl.User): return [ cl.ChatProfile( - name="MCP Enabled", - markdown_description="Profile with **MCP features enabled**. This profile has *Model Context Protocol* support activated. 
[Learn more](https://example.com/mcp)", - icon="https://picsum.photos/250", - starters=starters, + name="Default UI", + markdown_description="This profile uses the default UI.", + ), + cl.ChatProfile( + name="Custom UI", + markdown_description="This profile has a **custom UI name**.", config_overrides=ChainlitConfigOverrides( - ui=UISettings(name="MCP UI"), - features=FeaturesSettings( - mcp=McpFeature( - enabled=True, - stdio={"enabled": True}, - sse={"enabled": True}, - streamable_http={"enabled": True}, - ) - ), + ui=UISettings(name="My Custom UI"), ), ), -``` \ No newline at end of file + ] +``` + +## How it Works + +Chat profiles are designed to dynamically adjust your application's behavior based on user selection. Here's a breakdown of how they function: + +### Frontend Usage + +On the frontend, chat profiles are typically presented in a dropdown menu, often located in the chat interface's header. When a user selects a chat profile: + +* The frontend updates the `chatProfile` state, reflecting the user's choice. +* If there has been prior interaction in the current chat session, a confirmation dialog may appear before changing the profile to prevent accidental data loss. +* The selected chat profile's `markdown_description` and `icon` are used to customize the welcome screen and the assistant's avatar, providing immediate visual feedback. +* Any `starters` associated with the chosen profile are displayed, allowing users to quickly initiate conversations with predefined messages. + +### Backend Usage + +In the backend, the `WebsocketSession` plays a crucial role in managing chat profiles. When a chat profile is active and contains `config_overrides`: + +* These overrides are applied to the global Chainlit configuration. This ensures that the application's settings, such as enabled features (e.g., MCP) or UI configurations, are dynamically adjusted according to the selected profile. +* The `project_settings` endpoint also incorporates these `config_overrides` before returning the project settings to the frontend, ensuring consistency across the application. diff --git a/api-reference/chat-settings.mdx b/api-reference/chat-settings.mdx index 6e2c413..7228380 100644 --- a/api-reference/chat-settings.mdx +++ b/api-reference/chat-settings.mdx @@ -2,7 +2,7 @@ title: "Chat Settings" --- -The `ChatSettings` class is designed to create and send a dynamic form to the UI. This form can be updated by the user. +The `ChatSettings` class is designed to create and send a dynamic form to the UI. This form can be updated by the user and is primarily configured within the `on_chat_start` callback. 
## Attributes @@ -14,7 +14,7 @@ The `ChatSettings` class is designed to create and send a dynamic form to the UI ```python import chainlit as cl -from chainlit.input_widget import Select, Switch, Slider +from chainlit.input_widget import Select, Switch, Slider, MultiSelect, RadioGroup, Checkbox @cl.on_chat_start @@ -36,6 +36,19 @@ async def start(): max=2, step=0.1, ), + MultiSelect( + id="Features", + label="Select Features", + values=["Feature A", "Feature B", "Feature C"], + initial_values=["Feature A"], + ), + RadioGroup( + id="Mode", + label="Select Mode", + values=["Mode 1", "Mode 2"], + initial_value="Mode 1", + ), + Checkbox(id="DebugMode", label="Enable Debug Mode", initial=False), Slider( id="SAI_Steps", label="Stability AI - Steps", @@ -81,3 +94,99 @@ async def setup_agent(settings): print("on_settings_update", settings) ``` + +## Handling Setting Updates (`@cl.on_settings_update`) + +To react to changes made by the user in the chat settings form, you can use the `@cl.on_settings_update` decorator. The function decorated with `@cl.on_settings_update` will be called whenever the user updates any of the settings. + +This function receives a `settings` dictionary as an argument, where keys are the `id` of your input widgets and values are their current states. + +### Usage + +```python +import chainlit as cl + +@cl.on_settings_update +async def setup_agent(settings): + # You can access individual settings by their ID + model = settings["Model"] + streaming_enabled = settings["Streaming"] + temperature = settings["Temperature"] + + print(f"Settings updated: Model={model}, Streaming={streaming_enabled}, Temperature={temperature}") + + # You can then use these settings to reconfigure your agent or application logic + # For example, update an LLM instance with the new temperature + # my_llm.temperature = temperature + + await cl.Message(content="Chat settings have been updated!").send() +``` + +## Configuration Overrides with Chat Profiles + +Chat settings can be dynamically overridden for specific chat profiles, allowing for highly flexible and context-aware application behavior. This is achieved using `ChainlitConfigOverrides` within your `cl.ChatProfile` definitions. + +By leveraging this, you can, for example, enable or disable certain features, change UI elements, or adjust model parameters based on the chat profile selected by the user. + +For more details on defining and using chat profiles, refer to the [Chat Profiles documentation](/api-reference/chat-profiles). 
+ +### Example + +```python +import chainlit as cl +from chainlit.config import ( + ChainlitConfigOverrides, + FeaturesSettings, + UISettings, +) +from chainlit.input_widget import Select, Switch + +@cl.set_chat_profiles +async def chat_profiles(): + return [ + cl.ChatProfile( + name="Default Profile", + markdown_description="Uses default chat settings.", + ), + cl.ChatProfile( + name="Advanced Profile", + markdown_description="Enables advanced features and custom UI.", + config_overrides=ChainlitConfigOverrides( + ui=UISettings(name="Advanced Chat"), + features=FeaturesSettings( + # Example: enable a specific feature for this profile + mcp=cl.McpFeature(enabled=True) + ) + ), + ), + ] + +@cl.on_chat_start +async def start(): + # Define default chat settings + await cl.ChatSettings( + [ + Select( + id="Model", + label="Model", + values=["gpt-3.5-turbo", "gpt-4"], + initial_index=0, + ), + Switch(id="DebugMode", label="Debug Mode", initial=False), + ] + ).send() + +@cl.on_settings_update +async def setup_agent(settings): + print("Settings updated:", settings) +``` + +## Frontend Interaction + +On the frontend, the `ChatSettingsModal` component is responsible for rendering the chat settings UI. It leverages various React hooks and state management to provide a dynamic and interactive experience: + +* `useChatData`: Accesses the current chat settings values and input definitions. +* `useChatInteract`: Handles user interactions with the settings, such as updating values. +* `chatSettingsValueState`: An internal state atom that holds the current values of the chat settings. + +When a user modifies a setting, these frontend components work together to update the UI, trigger the `@cl.on_settings_update` callback in the backend, and ensure the application reacts accordingly. diff --git a/api-reference/elements/custom.mdx b/api-reference/elements/custom.mdx index 0a4c876..ce0f002 100644 --- a/api-reference/elements/custom.mdx +++ b/api-reference/elements/custom.mdx @@ -4,20 +4,50 @@ title: "Custom" The `CustomElement` class allows you to render a custom `.jsx` snippet. The `.jsx` file should be placed in `public/elements/ELEMENT_NAME.jsx`. -## Attributes +## Usage - - The name of the custom Element. It should match the name of your JSX file (without the `.jsx` extension). - +```python +import chainlit as cl + +@cl.custom_element("my_custom_element") +class MyCustomElement(cl.Element): + def __init__(self, content: str, name: str = "my_custom_element", display: str = "inline"): + super().__init__(name=name, display=display, content=content) + +@cl.on_chat_start +async def main(): + await MyCustomElement(content="Hello from custom element!").send() +``` + +## Sending User Messages from Custom Elements (`sendUserMessage`) + +The `sendUserMessage` function is available within custom elements to allow them to send messages back to the Chainlit backend, simulating a user input. This functionality was added in version 2.3.0 of Chainlit. - - The props to pass to the JSX. - +When a custom element calls `sendUserMessage`, it dispatches a message to the Chainlit backend. This message can optionally include a `command` string. The backend can then process this message, for example, by triggering an `@cl.on_message` handler if a command is present. + +### Example + +```jsx +import { Button } from '@mui/material'; // Assuming MUI for Button, adjust as per actual framework + +export default function Commander() { + return ( +
+      <Button onClick={() => sendUserMessage("Hello from custom element", "my_command")}>
+        {/* sendUserMessage is injected into custom elements by Chainlit; the optional second argument is the command described above */}
+        Send command
+      </Button>
+  );
+}
+```
+In this example, clicking the button sends the message "Hello from custom element" along with the command "my_command" to the backend, which can then act on the command (for example in an `@cl.on_message` handler).
 
-
-  Determines how the text element should be displayed in the UI. Choices are
-  "side", "inline", or "page".
-
 
 ## How to Write the JSX file
diff --git a/api-reference/lifecycle-hooks/on-app-shutdown.mdx b/api-reference/lifecycle-hooks/on-app-shutdown.mdx
new file mode 100644
index 0000000..1dd5acc
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-app-shutdown.mdx
@@ -0,0 +1,16 @@
+---
+title: "on_app_shutdown"
+---
+
+The `@cl.on_app_shutdown` decorator registers a function to run when the Chainlit application shuts down. This is useful for cleaning up resources, closing connections, or saving application state before the process terminates.
+
+## Usage
+
+```python
+import chainlit as cl
+
+@cl.on_app_shutdown
+async def shutdown():
+    print("Application is shutting down!")
+    # Clean up resources here
+```
\ No newline at end of file
diff --git a/api-reference/lifecycle-hooks/on-app-startup.mdx b/api-reference/lifecycle-hooks/on-app-startup.mdx
new file mode 100644
index 0000000..665a502
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-app-startup.mdx
@@ -0,0 +1,16 @@
+---
+title: "on_app_startup"
+---
+
+The `@cl.on_app_startup` decorator registers a function to run when the Chainlit application starts. This is ideal for tasks like loading models, setting up database connections, or any other initialization your application requires.
+
+## Usage
+
+```python
+import chainlit as cl
+
+@cl.on_app_startup
+async def startup():
+    print("Application is starting!")
+    # Initialize resources here
+```
\ No newline at end of file
diff --git a/api-reference/lifecycle-hooks/on-feedback.mdx b/api-reference/lifecycle-hooks/on-feedback.mdx
new file mode 100644
index 0000000..93480c3
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-feedback.mdx
@@ -0,0 +1,25 @@
+---
+title: "on_feedback"
+---
+
+The `@cl.on_feedback` decorator allows you to define a function that is executed whenever a user provides feedback on a message in the UI, typically a "thumbs up" or "thumbs down" action. The decorated function receives a `Feedback` object, which contains details about the feedback event, such as the feedback value and the ID of the message it is associated with.
+
+## `Feedback` Object Attributes
+
+* `value`: The feedback value (e.g., 0 for thumbs down, 1 for thumbs up).
+* `forId`: The ID of the step (message) for which the feedback was given.
+* `comment`: An optional textual comment provided by the user.
+
+## Usage
+
+```python
+import chainlit as cl
+
+@cl.on_feedback
+async def on_feedback(feedback: cl.Feedback):
+    print(f"Received feedback: {feedback.value} for step {feedback.forId}")
+    # You can add custom logic here, such as storing feedback in a database
+    # or sending notifications.
+    if feedback.comment:
+        print(f"Comment: {feedback.comment}")
+```
\ No newline at end of file
diff --git a/api-reference/lifecycle-hooks/on-mcp-connect.mdx b/api-reference/lifecycle-hooks/on-mcp-connect.mdx
new file mode 100644
index 0000000..759ac42
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-mcp-connect.mdx
@@ -0,0 +1,16 @@
+---
+title: "on_mcp_connect"
+---
+
+The `@cl.on_mcp_connect` decorator registers a function to be called when a Model Context Protocol (MCP) server connection is established for the current session.
+
+## Usage
+
+```python
+import chainlit as cl
+from mcp import ClientSession
+
+@cl.on_mcp_connect
+async def on_mcp_connect(connection, session: ClientSession):
+    print(f"MCP server connected: {connection.name}")
+    # Initialize the connection here, e.g. list the tools exposed by the server
+```
\ No newline at end of file
diff --git a/api-reference/lifecycle-hooks/on-mcp-disconnect.mdx b/api-reference/lifecycle-hooks/on-mcp-disconnect.mdx
new file mode 100644
index 0000000..852ceaf
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-mcp-disconnect.mdx
@@ -0,0 +1,16 @@
+---
+title: "on_mcp_disconnect"
+---
+
+The `@cl.on_mcp_disconnect` decorator registers a function to be called when a Model Context Protocol (MCP) server connection is terminated.
+
+## Usage
+
+```python
+import chainlit as cl
+from mcp import ClientSession
+
+@cl.on_mcp_disconnect
+async def on_mcp_disconnect(name: str, session: ClientSession):
+    print(f"MCP server disconnected: {name}")
+    # Perform cleanup when an MCP connection is closed
+```
\ No newline at end of file
diff --git a/api-reference/lifecycle-hooks/on-thread-share-view.mdx b/api-reference/lifecycle-hooks/on-thread-share-view.mdx
new file mode 100644
index 0000000..0f89399
--- /dev/null
+++ b/api-reference/lifecycle-hooks/on-thread-share-view.mdx
@@ -0,0 +1,19 @@
+---
+title: "on_thread_share_view"
+---
+
+Decorator that controls access to shared thread views: the shared thread is shown only if the decorated function returns `True`, which enables custom or admin-only viewing rules.
+
+## Usage
+
+```python
+import chainlit as cl
+
+@cl.on_thread_share_view
+def on_thread_share_view(thread_id: str, user_id: str) -> bool:
+    # Implement your logic here to determine if the user can view the shared thread
+    # For example, check if the user is an admin or has specific permissions
+    if user_id == "admin_user":
+        return True
+    return False
+```
\ No newline at end of file
diff --git a/api-reference/message.mdx b/api-reference/message.mdx
index 1e5b0ec..7fc1459 100644
--- a/api-reference/message.mdx
+++ b/api-reference/message.mdx
@@ -6,8 +6,8 @@ The `Message` class is designed to send, stream, update or remove messages.
 
 ## Parameters
 
-
-  The content of the message.
+
+  The content of the message. Can be a string or a dictionary.
 
   The author of the message, defaults to the chatbot name defined in your config
@@ -93,3 +93,60 @@ async def main():
     await cl.sleep(2)
     await msg.remove()
 ```
+
+## Sending Toast Notifications (`cl.context.emitter.send_toast`)
+
+The `cl.context.emitter.send_toast` function allows the backend to send transient notification messages (toasts) to the frontend UI.
+
+### Details
+
+The `send_toast` method takes a `message` string and an optional `type` parameter, which can be "info", "success", "warning", or "error". The `type` parameter determines the visual style of the toast in the frontend.
+
+### Example
+
+```python
+import chainlit as cl
+
+@cl.on_chat_start
+async def main():
+    await cl.context.emitter.send_toast(
+        message="This is an info toast!",
+        type="info",
+    )
+    await cl.context.emitter.send_toast(
+        message="This is a success toast!",
+        type="success",
+    )
+    await cl.context.emitter.send_toast(
+        message="This is a warning toast!",
+        type="warning",
+    )
+    await cl.context.emitter.send_toast(
+        message="This is an error toast!",
+        type="error",
+    )
+```
+
+## Remove Actions
+
+The `remove_actions` method on the `Message` class allows you to remove the action buttons attached to a previously sent message.
+ +```python +import chainlit as cl + +@cl.on_chat_start +async def main(): + actions = [ + cl.Action(name="action_to_remove", value="remove_me", label="Remove Me"), + cl.Action(name="action_to_keep", value="keep_me", label="Keep Me"), + ] + msg = cl.Message(content="Message with actions.", actions=actions) + await msg.send() + + await cl.sleep(2) + + # Remove a specific action + await msg.remove_actions(["action_to_remove"]) + + await cl.Message(content="Action 'Remove Me' has been removed.").send() +``` diff --git a/api-reference/step-class.mdx b/api-reference/step-class.mdx index 522f402..dd595c0 100644 --- a/api-reference/step-class.mdx +++ b/api-reference/step-class.mdx @@ -25,6 +25,9 @@ The `Step` class is a Python Context Manager that can be used to create steps in show the input. You can also set this to a language like `json` or `python` to syntax highlight the input. + + Whether the step should be open by default in the UI. + ## Send a Step diff --git a/api-reference/step-decorator.mdx b/api-reference/step-decorator.mdx index ce6a060..511fddc 100644 --- a/api-reference/step-decorator.mdx +++ b/api-reference/step-decorator.mdx @@ -24,6 +24,9 @@ Under the hood, the step decorator is using the [cl.Step](/api-reference/step-cl show the input. You can also set this to a language like `json` or `python` to syntax highlight the input. + + Whether the step should be open by default in the UI. + ## Access the Current step diff --git a/backend/command-line.mdx b/backend/command-line.mdx index c7e14c5..0387d14 100644 --- a/backend/command-line.mdx +++ b/backend/command-line.mdx @@ -14,6 +14,30 @@ The `init` command initializes a Chainlit project by creating a configuration fi chainlit init ``` +### `hello` + +The `hello` command runs a built-in "hello world" example application. It's useful for verifying your Chainlit installation. + +```bash +chainlit hello +``` + +### `create-secret` + +The `create-secret` command generates a random secret key that you can use for authentication. You should copy this secret into your `.env` file. + +```bash +chainlit create-secret +``` + +### `lint-translations` + +The `lint-translations` command checks the integrity of translation files. + +```bash +chainlit lint-translations +``` + ### `run` The `run` command starts a Chainlit application. @@ -24,11 +48,13 @@ chainlit run [OPTIONS] TARGET Options: -- `-w, --watch`: Reload the app when the module changes. When this option is specified, the file watcher will be started and any changes to files will cause the server to reload the app, allowing faster iterations. -- `-h, --headless`: Prevents the app from opening in the browser. -- `-d, --debug`: Sets the log level to debug. Default log level is error. -- `-c, --ci`: Runs in CI mode. -- `--no-cache`: Disables third parties cache, such as langchain. -- `--host`: Specifies a different host to run the server on. -- `--port`: Specifies a different port to run the server on. -- `--root-path`: Specifies a subpath to run the server on. +- `-w, --watch`: Reload the app when the module changes. When this option is specified, the file watcher will be started and any changes to files will cause the server to reload the app, allowing faster iterations. (Environment Variable: `WATCH`) +- `-h, --headless`: Prevents the app from opening in the browser. (Environment Variable: `HEADLESS`) +- `-d, --debug`: Sets the log level to debug. Default log level is error. (Environment Variable: `DEBUG`) +- `-c, --ci`: Runs in CI mode. 
When enabled, it automatically sets `--no-cache` to `True` and sets a fake `OPENAI_API_KEY`. (Environment Variable: `CI`) +- `--no-cache`: Disables third parties cache, such as langchain. (Environment Variable: `NO_CACHE`) +- `--host`: Specifies a different host to run the server on. Defaults to `127.0.0.1`. (Environment Variable: `CHAINLIT_HOST`) +- `--port`: Specifies a different port to run the server on. Defaults to `8000`. (Environment Variable: `CHAINLIT_PORT`) +- `--root-path`: Specifies a subpath to run the server on. (Environment Variable: `CHAINLIT_ROOT_PATH`) +- `--ssl-cert`: Specifies the file path for the SSL certificate. Must be provided along with `--ssl-key`. (Environment Variable: `CHAINLIT_SSL_CERT`) +- `--ssl-key`: Specifies the file path for the SSL key. Must be provided along with `--ssl-cert`. (Environment Variable: `CHAINLIT_SSL_KEY`) diff --git a/backend/config/overview.mdx b/backend/config/overview.mdx index bdce4dc..2723e25 100644 --- a/backend/config/overview.mdx +++ b/backend/config/overview.mdx @@ -28,4 +28,36 @@ It is composed of three sections: UI configuration. + + Add custom endpoints to the FastAPI server. + + +## Custom Endpoints + +Chainlit allows you to add custom endpoints to its FastAPI server. This is useful for extending the functionality of your Chainlit application with custom API routes. + +### Usage + +You can define custom FastAPI routes in your Chainlit application. These routes will be mounted on the same FastAPI application that serves Chainlit. + +```python +import chainlit as cl +from fastapi import FastAPI + +# Create a FastAPI app instance +custom_app = FastAPI() + +@custom_app.get("/my-custom-endpoint") +async def my_custom_endpoint(): + return {"message": "Hello from custom endpoint!"} + +# Mount the custom FastAPI app to Chainlit +cl.fastapi_app = custom_app +``` + diff --git a/concepts/chat-lifecycle.mdx b/concepts/chat-lifecycle.mdx index b1a2801..9e95aad 100644 --- a/concepts/chat-lifecycle.mdx +++ b/concepts/chat-lifecycle.mdx @@ -44,14 +44,55 @@ def on_chat_end(): print("The user disconnected!") ``` -## On Chat Resume +## User Session -The [on_chat_resume](/api-reference/lifecycle-hooks/on-chat-resume) decorator is used to define a hook that is called when a user resumes a chat session that was previously disconnected. This can only happen if [authentication](/authentication) and [data persistence](/data-persistence) are enabled. +The user session is a dictionary that is unique to each user and each chat. It is reset when the user starts a new chat. + +You can access the user session using `cl.user_session`. + +```python +import chainlit as cl + +@cl.on_chat_start +async def main(): + cl.user_session.set("some_var", "some_value") + +@cl.on_message +async def main(message: cl.Message): + some_var = cl.user_session.get("some_var") + await cl.Message(content=f"some_var: {some_var}").send() +``` + +## Chat Context (`cl.chat_context`) + +`cl.chat_context` is an instance of the `ChatContext` class, designed to help keep track of messages within the current chat thread. It provides methods to manage the conversation history. + +### Functionality + +* `get()`: Retrieves a copy of the list of `Message` objects for the current session. +* `add(message: "Message")`: Adds a `Message` object to the current session's chat context. +* `remove(message: "Message")`: Removes a specific `Message` object from the current session's chat context. +* `clear()`: Clears all messages from the current session's chat context. 
+* `to_openai()`: Converts the stored messages into a format compatible with OpenAI's API. + +### Usage Example ```python -from chainlit.types import ThreadDict +import chainlit as cl -@cl.on_chat_resume -async def on_chat_resume(thread: ThreadDict): - print("The user resumed a previous chat session!") +@cl.on_message +async def main(message: cl.Message): + # Add the current message to the chat context + cl.chat_context.add(message) + + # Retrieve the conversation history + history = cl.chat_context.get() + print("Conversation History:", [msg.content for msg in history]) + + # Example: Convert to OpenAI format + openai_messages = cl.chat_context.to_openai() + print("OpenAI formatted messages:", openai_messages) + + await cl.Message(content=f"Message added to chat context.").send() ``` + diff --git a/concepts/user-session.mdx b/concepts/user-session.mdx index 47ae4c0..b87709f 100644 --- a/concepts/user-session.mdx +++ b/concepts/user-session.mdx @@ -62,7 +62,7 @@ The following keys are reserved for chat session related data:
Only set if you are enabled [Authentication](/authentication). Contains the - user object of the user that started this chat session. + user object of the user that started this chat session. The `User` class now has a `display_name` field, which is not persisted by the data layer. Only relevant if you are using the [Chat diff --git a/customisation/overview.mdx b/customisation/overview.mdx index f59bcd2..544e061 100644 --- a/customisation/overview.mdx +++ b/customisation/overview.mdx @@ -39,4 +39,39 @@ In this section we will go through the different options available. > Learn about creating your own theme. + + Learn how to add custom buttons to the chat header. + + +## Custom Header Buttons + +You can add custom buttons to the header of your Chainlit application. These buttons can trigger specific actions or navigate to external links. + +### Usage + +```python +import chainlit as cl + +@cl.on_chat_start +async def main(): + cl.context.emitter.send_action_button( + cl.Action( + name="my_custom_button", + label="My Custom Button", + value="custom_button_clicked", + description="Click me to trigger a custom action!", + for_id="header", # This indicates it's a header button + ) + ) + +@cl.action_callback("my_custom_button") +async def on_custom_button_click(action: cl.Action): + await cl.Message(content=f"Custom button '{action.label}' clicked!").send() +``` + diff --git a/data-layers/overview.mdx b/data-layers/overview.mdx index 1b0e666..1d1b9b1 100644 --- a/data-layers/overview.mdx +++ b/data-layers/overview.mdx @@ -50,6 +50,10 @@ a cloud storage configuration if relevant. ## Community data layers + + The `LiteralAI` data layer is deprecated and will be removed in future releases. Please migrate to the official data layer. + + For community data layers, you need to import the corresponding data layer in your chainlit app. Here is how you would do it with `SQLAlchemyDataLayer`: diff --git a/docs.json b/docs.json index 83fb5f2..0862a75 100644 --- a/docs.json +++ b/docs.json @@ -1,6 +1,6 @@ { "$schema": "https://mintlify.com/docs.json", - "theme": "mint", + "theme": "aspen", "name": "Chainlit", "colors": { "primary": "#F80061", @@ -294,5 +294,14 @@ "x": "https://x.com/chainlit_io", "linkedin": "https://www.linkedin.com/company/chainlit" } + }, + "contextual": { + "options": [ + "copy", + "chatgpt", + "cursor", + "claude", + "perplexity" + ] } -} \ No newline at end of file +} diff --git a/get-started/overview.mdx b/get-started/overview.mdx index 12802f1..296170a 100644 --- a/get-started/overview.mdx +++ b/get-started/overview.mdx @@ -84,4 +84,4 @@ Chainlit is compatible with all Python programs and libraries. That being said, Learn how to integrate your Autogen agents with Chainlit. - + \ No newline at end of file diff --git a/instructions.txt b/instructions.txt new file mode 100644 index 0000000..d2b0911 --- /dev/null +++ b/instructions.txt @@ -0,0 +1,9 @@ +Instructions for documentation updates: +1. Do NOT touch any files that appear in `git status`. +2. Use the `tree` command to identify other documentation files that need updates. +3. For each file identified, thoroughly research the subject in the DeepWiki (Chainlit/chainlit repository). +4. Ask 3-4 specific, clarifying questions to the DeepWiki about the content and purpose of the document. +5. Based on the confirmed information, update the content of that specific file. +6. Do not create new files. +7. Use the DeepWiki 4-5 times per file update. +8. Do not edit files that have already been updated (by me in this session). 
diff --git a/integrations/langchain.mdx b/integrations/langchain.mdx index b2b6f8b..8a1cf08 100644 --- a/integrations/langchain.mdx +++ b/integrations/langchain.mdx @@ -13,8 +13,8 @@ In this tutorial, we'll walk through the steps to create a Chainlit application Before getting started, make sure you have the following: - A working installation of Chainlit -- The LangChain package installed -- An OpenAI API key +- The LangChain package installed (`pip install langchain langchain-openai`) +- An OpenAI API key configured as an environment variable (e.g., `OPENAI_API_KEY`). - Basic understanding of Python programming ## Step 1: Create a Python file @@ -121,7 +121,7 @@ async def on_message(message: cl.Message): -This code sets up an instance of `Runnable` with a custom `ChatPromptTemplate` for each chat session. The `Runnable` is invoked everytime a user sends a message to generate the response. +This code sets up an instance of `Runnable` with a custom `ChatPromptTemplate` for each chat session. The `Runnable` is invoked every time a user sends a message to generate the response. The callback handler is responsible for listening to the chain's intermediate steps and sending them to the UI. @@ -219,20 +219,76 @@ graph = builder.compile() @cl.on_message async def on_message(msg: cl.Message): config = {"configurable": {"thread_id": cl.context.session.id}} - cb = cl.LangchainCallbackHandler() + # Configure LangchainCallbackHandler for automatic final answer streaming + cb = cl.LangchainCallbackHandler(stream_final_answer=True, answer_prefix_tokens=["Final", "Answer", ":"]) + final_answer = cl.Message(content="") - for msg, metadata in graph.stream({"messages": [HumanMessage(content=msg.content)]}, stream_mode="messages", config=RunnableConfig(callbacks=[cb], **config)): + # The LangchainCallbackHandler will automatically stream the final answer. + # Manual iteration is still useful if you need to process intermediate steps + # or specific parts of the output that are not considered the "final answer". + for s_msg, metadata in graph.stream({"messages": [HumanMessage(content=msg.content)]}, stream_mode="messages", config=RunnableConfig(callbacks=[cb], **config)): if ( - msg.content - and not isinstance(msg, HumanMessage) + s_msg.content + and not isinstance(s_msg, HumanMessage) and metadata["langgraph_node"] == "final" ): - await final_answer.stream_token(msg.content) + await final_answer.stream_token(s_msg.content) await final_answer.send() ``` +## How `cl.LangchainCallbackHandler` Works + +The `cl.LangchainCallbackHandler` (aliased as `LangchainTracer`) integrates with LangChain by extending LangChain's `AsyncBaseTracer`. It captures various LangChain events and translates them into Chainlit Steps, providing detailed tracing and visualization of your LangChain application's execution. + +**Internal Mechanism:** +- It overrides `on_` methods from `AsyncBaseTracer` to capture events during LangChain runs. +- It maintains a dictionary of active steps (`self.steps`) and a mapping of parent IDs (`self.parent_id_map`) to reconstruct the hierarchical structure of LangChain runs within Chainlit. +- The `_should_ignore_run` method filters out verbose LangChain runs (e.g., `RunnableSequence`, `RunnableParallel`) to enhance readability in the Chainlit UI. + +**Captured LangChain Events and Metadata Recorded:** +The handler captures events related to LLM calls, chains, agents, and tools, recording various metadata for Chainlit Steps: +- **LLM Starts**: Records prompts and input messages. 
+- **New Tokens**: Captures tokens for streaming responses. +- **Run Updates**: Processes outputs and updates corresponding Chainlit Steps. +- **Errors**: Captures exceptions and marks the Step as an error. +- **Metadata**: For each Step, it records `id`, `name` (from `run.name`), `type` (mapped from `run.run_type` like "llm", "chain", "agent", "tool"), `parent_id`, `start`/`end` timestamps, `input`/`output`, LLM generation details (provider, model, tools, settings, duration, token counts), and `tags`. + +## Advanced Configuration for `cl.LangchainCallbackHandler` + +You can customize the behavior of `cl.LangchainCallbackHandler` by passing configuration options during its initialization. + +### Final Answer Streaming + +Control how the final answer is streamed: +- `stream_final_answer` (bool): If set to `True`, the final answer from a LangChain run will be streamed to the UI. (Default: `False`) +- `answer_prefix_tokens` (List[str]): A list of tokens that prefixes the final answer. The handler uses these to identify when the final answer begins. (Default: `["Final", "Answer", ":"]`) +- `force_stream_final_answer` (bool): If `True`, streaming of the final answer is forced from the beginning. (Default: `False`) + +### Filtering LangChain Events + +Filter which LangChain runs are displayed as steps in Chainlit: +- `to_ignore` (Optional[List[str]]): A list of strings. Any LangChain run whose name contains one of these strings will be ignored and not displayed as a step. (Default: `["RunnableSequence", "RunnableParallel", "RunnableAssign", "RunnableLambda", ""]`) +- `to_keep` (Optional[List[str]]): A list of strings. Even if a run's parent is ignored, if the run's `run_type` is in this list, it will still be displayed. (Default: `["retriever", "llm", "agent", "chain", "tool"]`) + +**Example:** +```python +from chainlit.langchain import LangchainCallbackHandler +from langchain_core.callbacks import CallbackManager + +# Initialize the handler with custom streaming and filtering +custom_handler = LangchainCallbackHandler( + stream_final_answer=True, + answer_prefix_tokens=["Answer:", "Result:"], + to_ignore=["MyCustomInternalRunnable"], + to_keep=["tool"] +) + +# Use the custom handler in your LangChain config +# config=RunnableConfig(callbacks=[custom_handler]) +``` + ## Step 3: Run the Application To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command: @@ -247,3 +303,4 @@ The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to rest When using LangChain, prompts and completions are not cached by default. To enable the cache, set the `cache=true` in your chainlit config file. + diff --git a/integrations/litellm.mdx b/integrations/litellm.mdx index 5c097ec..7206c10 100644 --- a/integrations/litellm.mdx +++ b/integrations/litellm.mdx @@ -2,13 +2,13 @@ title: LiteLLM --- -In this tutorial, we will guide you through the steps to create a Chainlit application integrated with [LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy) +In this tutorial, we will guide you through the steps to create a Chainlit application integrated with [LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy). 
-The benefits of using LiteLLM Proxy with Chainlit is: +The benefits of using LiteLLM Proxy with Chainlit are: - You can [call 100+ LLMs in the OpenAI API format](https://docs.litellm.ai/docs/providers) - Use Virtual Keys to set budget limits and track usage -- see LLM API calls in a step in the UI, and you can explore them in the prompt playground. +- See LLM API calls in a step in the UI, and you can explore them in the prompt playground. You shouldn't configure this integration if you're already using another @@ -21,9 +21,9 @@ The benefits of using LiteLLM Proxy with Chainlit is: Before getting started, make sure you have the following: - A working installation of Chainlit -- The OpenAI package installed +- The `openai` Python package installed (`pip install openai`) - [LiteLLM Proxy Running](https://docs.litellm.ai/docs/proxy/deploy) -- [A LiteLLM Proxy API Key](https://docs.litellm.ai/docs/proxy/virtual_keys) +- [A LiteLLM Proxy API Key](https://docs.litellm.ai/docs/proxy/virtual_keys) (if required by your LiteLLM Proxy setup) - Basic understanding of Python programming ## Step 1: Create a Python file @@ -37,38 +37,67 @@ In `app.py`, import the necessary packages and define one function to handle mes ```python from openai import AsyncOpenAI import chainlit as cl + +# Configure your LiteLLM Proxy client. +# The api_key might be a placeholder or specific to your LiteLLM Proxy setup. +# For example, "anything" is often used as a placeholder for LiteLLM Proxy virtual keys. client = AsyncOpenAI( - api_key="anything", # litellm proxy virtual key - base_url="http://0.0.0.0:4000" # litellm proxy base_url + api_key="anything", # LiteLLM Proxy virtual key or placeholder + base_url="http://0.0.0.0:4000" # LiteLLM Proxy base_url ) -# Instrument the OpenAI client +# Instrument the OpenAI client. +# This should be called once at the top level of your application code, +# outside of any on_chat_start or on_message functions, as it modifies +# the OpenAI library globally. cl.instrument_openai() settings = { - "model": "gpt-3.5-turbo", # model you want to send litellm proxy + "model": "gpt-3.5-turbo", # model you want to send to LiteLLM Proxy "temperature": 0, # ... more settings } @cl.on_message async def on_message(message: cl.Message): - response = await client.chat.completions.create( - messages=[ - { - "content": "You are a helpful bot, you always reply in Spanish", - "role": "system" - }, - { - "content": message.content, - "role": "user" - } - ], - **settings - ) - await cl.Message(content=response.choices[0].message.content).send() + # General Python error handling is recommended for robustness + try: + response = await client.chat.completions.create( + messages=[ + { + "content": "You are a helpful bot, you always reply in Spanish", + "role": "system" + }, + { + "content": message.content, + "role": "user" + } + ], + stream=True, # Enable streaming for real-time updates + **settings + ) + + # cl.instrument_openai() automatically handles streaming for OpenAI-compatible responses. + # You do not need to manually call await answer.stream_token(msg.content) + # for instrumented calls if LiteLLM Proxy streams in an OpenAI-compatible format. 
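+        # Clarification: the instrumentation renders the LLM call as a Step in the UI;
+        # streaming into the cl.Message below is how the reply text reaches the chat itself.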
+ answer = cl.Message(content="") + async for part in response: + if token := part.choices[0].delta.content or "": + await answer.stream_token(token) + await answer.send() + + except Exception as e: + await cl.ErrorMessage(content=f"An error occurred: {e}").send() ``` +### Compatibility and Limitations + +`cl.instrument_openai()` is designed to instrument the official OpenAI Python client library. Its ability to visualize interactions with LiteLLM Proxy depends on how accurately LiteLLM Proxy mimics the OpenAI API specification, especially when translating responses from various LLMs. + +- **Tool Use and Function Calling**: If LiteLLM Proxy supports OpenAI-compatible tool calls or function calling (from the LLMs it proxies), `cl.instrument_openai()` will automatically visualize these interactions as `Step` objects in the Chainlit UI. Each tool call, its input, and its output will be displayed as a distinct step. However, deviations in the proxied LLM's response format from OpenAI's specification might lead to incorrect or incomplete visualization. +- **Response Formats**: The instrumentation expects responses to conform to the structure of OpenAI's `ChatGeneration` or `CompletionGeneration` objects. If LiteLLM Proxy's translation layer alters these formats, the `step.input` and `step.output` fields might not be populated correctly. +- **Advanced Configuration**: Currently, `cl.instrument_openai()` provides direct instrumentation without advanced filtering options to selectively include or exclude specific API calls or customize how steps are displayed. + ## Step 3: Run the Application To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command: diff --git a/integrations/llama-index.mdx b/integrations/llama-index.mdx index 62352e5..4e04c50 100644 --- a/integrations/llama-index.mdx +++ b/integrations/llama-index.mdx @@ -13,10 +13,20 @@ In this tutorial, we will guide you through the steps to create a Chainlit appli Before diving in, ensure that the following prerequisites are met: - A working installation of Chainlit -- The Llama Index package installed -- An OpenAI API key +- The Llama Index package installed (`pip install llama-index-core llama-index-llms-openai llama-index-embeddings-openai`) +- An OpenAI API key configured as an environment variable (e.g., `OPENAI_API_KEY`). See [Environment Variable Setup](#environment-variable-setup) for details. - A basic understanding of Python programming +### Environment Variable Setup + +For your OpenAI API key (`OPENAI_API_KEY`), it is recommended to set it as an environment variable. You can do this in your shell or by creating a `.env` file in your project root. + +**Example using a `.env` file:** +``` +OPENAI_API_KEY="your_openai_api_key_here" +``` +Chainlit automatically loads environment variables from a `.env` file if present. For more advanced management of user-provided environment variables, refer to Chainlit's `ProjectSettings` configuration. + ## Step 1: Set Up Your Data Directory Create a folder named `data` in the root of your app folder. Download the [state of the union](https://github.com/Chainlit/cookbook/blob/main/llama-index/data/state_of_the_union.txt) file (or any files of your own choice) and place it in the `data` folder. 
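+
+The unchanged part of `app.py` (not shown in this diff) is where the `data` folder is read and indexed. For readers building the file from scratch, a minimal sketch using LlamaIndex's standard persistence helpers might look like the following; the `./storage` directory and the exact structure are assumptions, not the file's actual code:
+
+```python
+from llama_index.core import (
+    SimpleDirectoryReader,
+    StorageContext,
+    VectorStoreIndex,
+    load_index_from_storage,
+)
+
+try:
+    # Reuse a previously persisted index if one exists
+    storage_context = StorageContext.from_defaults(persist_dir="./storage")
+    index = load_index_from_storage(storage_context)
+except Exception:
+    # Otherwise, read ./data, build the index, and persist it for the next run
+    documents = SimpleDirectoryReader("./data").load_data()
+    index = VectorStoreIndex.from_documents(documents)
+    index.storage_context.persist(persist_dir="./storage")
+```
+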
@@ -48,7 +58,9 @@ from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.core.query_engine.retriever_query_engine import RetrieverQueryEngine from llama_index.core.callbacks import CallbackManager from llama_index.core.service_context import ServiceContext +from llama_index.core.callbacks.schema import CBEventType # Import CBEventType for advanced configuration +# Ensure OpenAI API key is set from environment variables openai.api_key = os.environ.get("OPENAI_API_KEY") try: @@ -64,13 +76,18 @@ except: @cl.on_chat_start async def start(): + # Configure LlamaIndex settings Settings.llm = OpenAI( model="gpt-3.5-turbo", temperature=0.1, max_tokens=1024, streaming=True ) Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small") Settings.context_window = 4096 - service_context = ServiceContext.from_defaults(callback_manager=CallbackManager([cl.LlamaIndexCallbackHandler()])) + # Initialize Chainlit's LlamaIndex callback handler + # You can configure it to ignore specific events, e.g., event_starts_to_ignore=[CBEventType.EMBEDDING] + cl_callback_handler = cl.LlamaIndexCallbackHandler() + + service_context = ServiceContext.from_defaults(callback_manager=CallbackManager([cl_callback_handler])) query_engine = index.as_query_engine(streaming=True, similarity_top_k=2, service_context=service_context) cl.user_session.set("query_engine", query_engine) @@ -83,18 +100,57 @@ async def start(): async def main(message: cl.Message): query_engine = cl.user_session.get("query_engine") # type: RetrieverQueryEngine - msg = cl.Message(content="", author="Assistant") + # The LlamaIndexCallbackHandler automatically handles streaming and step creation. + # The query_engine.query method will be instrumented by the callback handler. + response = await cl.make_async(query_engine.query)(message.content) + + # If you need to display the final response in a Chainlit message, + # you can create a message and stream its content. + # The LlamaIndexCallbackHandler will already have created steps for the LLM calls. + final_message = cl.Message(content="", author="Assistant") + if response.response_gen: + for token in response.response_gen: + await final_message.stream_token(token) + else: + final_message.content = response.response + + await final_message.send() +``` + +This code sets up an instance of `RetrieverQueryEngine` for each chat session. The `RetrieverQueryEngine` is invoked every time a user sends a message to generate the response. - res = await cl.make_async(query_engine.query)(message.content) +## How `cl.LlamaIndexCallbackHandler` Works - for token in res.response_gen: - await msg.stream_token(token) - await msg.send() -``` +The `cl.LlamaIndexCallbackHandler` is a Chainlit-specific callback handler that integrates LlamaIndex events with Chainlit's tracing and visualization features. It extends LlamaIndex's `TokenCountingHandler` to capture various LlamaIndex events and translate them into Chainlit Steps. -This code sets up an instance of `RetrieverQueryEngine` for each chat session. The `RetrieverQueryEngine` is invoked everytime a user sends a message to generate the response. +**Internal Working:** +- It overrides `on_event_start` and `on_event_end` methods. +- For each LlamaIndex event, it creates a Chainlit `Step`, records its start/end times, input/output, and sends/updates the step in the UI. -The callback handlers are responsible for listening to the intermediate steps and sending them to the UI. 
+**Events Captured and Metadata Recorded:** +The handler captures and processes various `CBEventType` events, mapping them to Chainlit `StepType`s and recording detailed metadata: +- **`CBEventType.FUNCTION_CALL`**: Mapped to `StepType.TOOL`. Records function name as `step.name`, input payload as `step.input`, and function output as `step.output`. +- **`CBEventType.RETRIEVE`**: Mapped to `StepType.TOOL`. Records retrieved nodes as `Text` elements attached to the step, with a summary of sources as `step.output`. +- **`CBEventType.QUERY`**: Mapped to `StepType.TOOL`. If a response with source nodes is present, these nodes are converted into `Text` elements and attached to the step, with a summary of sources as `step.output`. +- **`CBEventType.LLM`**: Mapped to `StepType.LLM`. Records the LLM response as `step.output` and includes detailed `ChatGeneration` or `CompletionGeneration` metadata (model, messages, prompt, completion, token count) in `step.generation`. + +By default, events like `CBEventType.CHUNKING`, `CBEventType.SYNTHESIZE`, `CBEventType.EMBEDDING`, `CBEventType.NODE_PARSING`, and `CBEventType.TREE` are ignored. + +## Advanced Configuration for `cl.LlamaIndexCallbackHandler` + +You can customize which LlamaIndex events are tracked by the `cl.LlamaIndexCallbackHandler` using the `event_starts_to_ignore` and `event_ends_to_ignore` parameters in its constructor. Both parameters accept a list of `CBEventType` enums. + +**Example:** +```python +from llama_index.core.callbacks.schema import CBEventType +from chainlit.llama_index import LlamaIndexCallbackHandler + +# Initialize the handler to ignore EMBEDDING events at start and end +cl_callback_handler = cl.LlamaIndexCallbackHandler( + event_starts_to_ignore=[CBEventType.EMBEDDING], + event_ends_to_ignore=[CBEventType.EMBEDDING], +) +``` ## Step 4: Launch the Application diff --git a/integrations/message-based.mdx b/integrations/message-based.mdx index 8229ae8..4f11289 100644 --- a/integrations/message-based.mdx +++ b/integrations/message-based.mdx @@ -1,22 +1,45 @@ --- -title: vLLM, LMStudio, HuggingFace +title: OpenAI-Compatible Message-Based APIs (vLLM, LMStudio, HuggingFace TGI) --- -We can leverage the OpenAI instrumentation to log calls from inference servers that use messages-based API, such as vLLM, LMStudio or HuggingFace's TGI. +We can leverage the OpenAI instrumentation to log calls from inference servers that use messages-based API, such as vLLM, LMStudio or HuggingFace's TGI. This integration allows you to visualize these LLM calls as steps in the Chainlit UI and explore them in the prompt playground. - You shouldn't configure this integration if you're already using another integration like LangChain or LlamaIndex. Both integrations would record the same generation and create duplicate steps in the UI. + You shouldn't configure this integration if you're already using another + integration like LangChain or LlamaIndex. Both integrations would + record the same generation and create duplicate steps in the UI. +## Prerequisites + +Before getting started, make sure you have the following: + +- A working installation of Chainlit +- The `openai` Python package installed (`pip install openai`) +- An OpenAI-compatible message-based inference server running (e.g., vLLM, LMStudio, HuggingFace TGI) +- Basic understanding of Python programming + +## Step 1: Create a Python file + Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application. 
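+
+In the next step, the client's `base_url` must point at whichever server you are running. As a rough guide (verify against your server's own documentation), vLLM's OpenAI-compatible server typically listens on `http://localhost:8000/v1`, while LM Studio defaults to `http://localhost:1234/v1`; many local servers accept any placeholder API key. For example, a vLLM setup might use:
+
+```python
+from openai import AsyncOpenAI
+
+# Assumed local vLLM defaults; adjust host, port, and key for your deployment.
+client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
+```
+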
+## Step 2: Write the Application Logic + In `app.py`, import the necessary packages and define one function to handle messages incoming from the UI. ```python from openai import AsyncOpenAI import chainlit as cl + +# Configure your OpenAI-compatible client. +# The api_key might be a placeholder or specific to your inference server. +# For LM Studio, "lm-studio" is often used as a placeholder API key. client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio") -# Instrument the OpenAI client + +# Instrument the OpenAI client. +# This should be called once at the top level of your application code, +# outside of any on_chat_start or on_message functions, as it modifies +# the OpenAI library globally. cl.instrument_openai() settings = { @@ -27,23 +50,49 @@ settings = { @cl.on_message async def on_message(message: cl.Message): - response = await client.chat.completions.create( - messages=[ - { - "content": "You are a helpful bot, you always reply in Spanish", - "role": "system" - }, - { - "content": message.content, - "role": "user" - } - ], - **settings - ) - await cl.Message(content=response.choices[0].message.content).send() + # General Python error handling is recommended for robustness + try: + response = await client.chat.completions.create( + messages=[ + { + "content": "You are a helpful bot, you always reply in Spanish", + "role": "system" + }, + { + "content": message.content, + "role": "user" + } + ], + stream=True, # Enable streaming for real-time updates + **settings + ) + + # cl.instrument_openai() automatically handles streaming for OpenAI-compatible responses. + # You do not need to manually call await answer.stream_token(msg.content) + # for instrumented calls if the server streams in an OpenAI-compatible format. + answer = cl.Message(content="") + async for part in response: + if token := part.choices[0].delta.content or "": + await answer.stream_token(token) + await answer.send() + + except Exception as e: + await cl.ErrorMessage(content=f"An error occurred: {e}").send() ``` -Create a file named `.env` in the same folder as your `app.py` file. Add your OpenAI API key in the `OPENAI_API_KEY` variable. +### Compatibility and Limitations + +`cl.instrument_openai()` is designed to instrument the official OpenAI Python client library. Its ability to visualize interactions with message-based inference servers (like vLLM, LMStudio, HuggingFace TGI) depends on how accurately these servers mimic the OpenAI API specification. + +- **Tool Use and Function Calling**: If your inference server supports OpenAI-compatible tool calls or function calling, `cl.instrument_openai()` will automatically visualize these interactions as `Step` objects in the Chainlit UI. Each tool call, its input, and its output will be displayed as a distinct step. However, deviations in the server's response format from OpenAI's specification might lead to incorrect or incomplete visualization. +- **Response Formats**: The instrumentation expects responses to conform to the structure of OpenAI's `ChatGeneration` or `CompletionGeneration` objects. If the inference server's response format deviates, the `step.input` and `step.output` fields might not be populated correctly. +- **Advanced Configuration**: Currently, `cl.instrument_openai()` provides direct instrumentation without advanced filtering options to selectively include or exclude specific API calls or customize how steps are displayed. 
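+
+As a concrete illustration of the tool-calling point above, the sketch below passes an OpenAI-style `tools` definition through the same instrumented client (inside an `async` handler such as `on_message`). The `get_weather` function is hypothetical; whether the call succeeds depends on your server supporting OpenAI-compatible tool calls:
+
+```python
+tools = [
+    {
+        "type": "function",
+        "function": {
+            "name": "get_weather",  # hypothetical tool, for illustration only
+            "description": "Get the current weather for a city",
+            "parameters": {
+                "type": "object",
+                "properties": {"city": {"type": "string"}},
+                "required": ["city"],
+            },
+        },
+    }
+]
+
+response = await client.chat.completions.create(
+    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
+    tools=tools,
+    **settings,
+)
+# If supported, response.choices[0].message.tool_calls is populated, and the
+# instrumented call shows up as a step in the Chainlit UI.
+```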
+ +## Step 3: Fill the environment variables + +If your inference server requires an API key, create a file named `.env` in the same folder as your `app.py` file and add your API key (e.g., `OPENAI_API_KEY` or a custom key expected by your server). + +## Step 4: Run the Application To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command: @@ -52,3 +101,4 @@ chainlit run app.py -w ``` The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000. + diff --git a/integrations/mistralai.mdx b/integrations/mistralai.mdx index ccd8212..e6e4811 100644 --- a/integrations/mistralai.mdx +++ b/integrations/mistralai.mdx @@ -13,8 +13,8 @@ title: Mistral AI Before getting started, make sure you have the following: - A working installation of Chainlit -- The Mistral AI python client package installed, `mistralai` -- A [Mistral AI API key](https://console.mistral.ai/api-keys/) +- The Mistral AI python client package installed (`pip install mistralai`) +- A [Mistral AI API key](https://console.mistral.ai/api-keys/) configured as an environment variable (e.g., `MISTRAL_API_KEY`). - Basic understanding of Python programming ## Step 1: Create a Python file @@ -28,33 +28,58 @@ In `app.py`, import the necessary packages and define one function to handle mes ```python import os import chainlit as cl -from mistralai import Mistral +from mistralai.client import MistralClient +from mistralai.models.chat_completion import ChatMessage # Initialize the Mistral client -client = Mistral(api_key=os.getenv("MISTRAL_API_KEY")) +client = MistralClient(api_key=os.getenv("MISTRAL_API_KEY")) + +# Instrument the Mistral AI client. +# This should be called once at the top level of your application code, +# outside of any on_chat_start or on_message functions, as it modifies +# the Mistral AI library globally. +cl.instrument_mistralai() @cl.on_message async def on_message(message: cl.Message): - response = await client.chat.complete_async( - model="mistral-small-latest", - max_tokens=100, - temperature=0.5, - stream=False, - # ... more setting - messages=[ - { - "role": "system", - "content": "You are a helpful bot, you always reply in French." - }, - { - "role": "user", - "content": message.content # Content of the user message - } + # General Python error handling is recommended for robustness + try: + messages = [ + ChatMessage(role="system", content="You are a helpful bot, you always reply in French."), + ChatMessage(role="user", content=message.content) ] - ) - await cl.Message(content=response.choices[0].message.content).send() + + response = await client.chat_stream( + model="mistral-small-latest", + max_tokens=100, + temperature=0.5, + messages=messages, + # cl.instrument_mistralai() automatically handles streaming. + # You do not need to manually create a cl.Message and stream tokens + # for instrumented Mistral AI calls if the client streams. + ) + + # The instrumented client will automatically create and update a Chainlit Step. + # If you need to display the final response in a Chainlit message, + # you can create a message and stream its content from the response. 
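+        # Heads-up (version-dependent): with the synchronous MistralClient shown above,
+        # `chat_stream` returns a plain generator, so the call should not be awaited and
+        # the loop below would be a regular `for`; the legacy async client
+        # (MistralAsyncClient) is what supports `async for` as written.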
+ final_message = cl.Message(content="", author="Assistant") + async for chunk in response: + if chunk.choices[0].delta.content: + await final_message.stream_token(chunk.choices[0].delta.content) + await final_message.send() + + except Exception as e: + await cl.ErrorMessage(content=f"An error occurred: {e}").send() ``` +### Tool Use and Function Calling + +If your Mistral AI model uses tool calls or function calling, `cl.instrument_mistralai()` will automatically visualize these interactions as `Step` objects in the Chainlit UI. Each tool call, its input, and its output will be displayed as a distinct step, providing clear visibility into the model's decision-making process. + +### Advanced Configuration + +Currently, `cl.instrument_mistralai()` provides direct instrumentation of Mistral AI API calls without advanced filtering options to selectively include or exclude specific API calls or customize how steps are displayed. + ## Step 3: Fill the environment variables Create a file named `.env` in the same folder as your `app.py` file. Add your Mistral AI API key in the `MISTRAL_API_KEY` variable. diff --git a/integrations/openai.mdx b/integrations/openai.mdx index 2ba2671..a6b8973 100644 --- a/integrations/openai.mdx +++ b/integrations/openai.mdx @@ -10,7 +10,7 @@ title: OpenAI The benefits of this integration is that you can see the OpenAI API calls in a step in the UI, and you can explore them in the prompt playground. -You need to add `cl.instrument_openai()` after creating your OpenAI client. +You need to add `cl.instrument_openai()` after creating your OpenAI client. This function should be called once at the top level of your application code, outside of any `on_chat_start` or `on_message` functions, as it modifies the OpenAI library globally. You shouldn't configure this integration if you're already using another @@ -23,8 +23,8 @@ You need to add `cl.instrument_openai()` after creating your OpenAI client. Before getting started, make sure you have the following: - A working installation of Chainlit -- The OpenAI package installed -- An OpenAI API key +- The `openai` Python package installed (`pip install openai`) +- An OpenAI API key configured as an environment variable (e.g., `OPENAI_API_KEY`). - Basic understanding of Python programming ## Step 1: Create a Python file @@ -41,6 +41,7 @@ import chainlit as cl client = AsyncOpenAI() # Instrument the OpenAI client +# This should be called once at the top level of your application. cl.instrument_openai() settings = { @@ -51,22 +52,45 @@ settings = { @cl.on_message async def on_message(message: cl.Message): - response = await client.chat.completions.create( - messages=[ - { - "content": "You are a helpful bot, you always reply in Spanish", - "role": "system" - }, - { - "content": message.content, - "role": "user" - } - ], - **settings - ) - await cl.Message(content=response.choices[0].message.content).send() + # General Python error handling is recommended for robustness + try: + response = await client.chat.completions.create( + messages=[ + { + "content": "You are a helpful bot, you always reply in Spanish", + "role": "system" + }, + { + "content": message.content, + "role": "user" + } + ], + stream=True, # Enable streaming for real-time updates + **settings + ) + + # cl.instrument_openai() automatically handles streaming. + # You do not need to manually call await answer.stream_token(msg.content) + # for instrumented OpenAI calls. 
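+        # Either way, the call is traced as a Step; the message below streams the
+        # reply text itself into the chat window as tokens arrive.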
+ answer = cl.Message(content="") + async for part in response: + if token := part.choices[0].delta.content or "": + await answer.stream_token(token) # Manual streaming for demonstration, but often handled by instrumentation + await answer.send() + + except Exception as e: + await cl.ErrorMessage(content=f"An error occurred: {e}").send() + ``` +### Tool Use and Function Calling + +If your OpenAI model uses tool calls or function calling, `cl.instrument_openai()` will automatically visualize these interactions as `Step` objects in the Chainlit UI. Each tool call, its input, and its output will be displayed as a distinct step, providing clear visibility into the model's decision-making process. + +### Advanced Configuration + +Currently, `cl.instrument_openai()` provides direct instrumentation of OpenAI API calls without advanced filtering options to selectively include or exclude specific API calls or customize how steps are displayed. + ## Step 3: Fill the environment variables Create a file named `.env` in the same folder as your `app.py` file. Add your OpenAI API key in the `OPENAI_API_KEY` variable. diff --git a/integrations/semantic-kernel.mdx b/integrations/semantic-kernel.mdx index a0f4a9e..bd60ee2 100644 --- a/integrations/semantic-kernel.mdx +++ b/integrations/semantic-kernel.mdx @@ -10,9 +10,20 @@ Before getting started, make sure you have the following: - A working installation of Chainlit - The `semantic-kernel` package installed -- An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel +- An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel. See [Environment Variable Setup](#environment-variable-setup) for details. - Basic understanding of Python programming and Semantic Kernel concepts (Kernel, Plugins, Functions) +### Environment Variable Setup + +For your LLM API keys (e.g., `OPENAI_API_KEY`, `OPENAI_ORG_ID`), it is recommended to set them as environment variables. You can do this in your shell or by creating a `.env` file in your project root. + +**Example using a `.env` file:** +``` +OPENAI_API_KEY="your_openai_api_key_here" +OPENAI_ORG_ID="your_openai_org_id_here" +``` +Chainlit automatically loads environment variables from a `.env` file if present. + ## Step 1: Create a Python file Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application using Semantic Kernel. @@ -90,6 +101,7 @@ async def on_message(message: cl.Message): kernel=kernel, ): if msg.content: + # Stream tokens for real-time updates in the UI await answer.stream_token(msg.content) # Add the full assistant response to history @@ -99,6 +111,18 @@ async def on_message(message: cl.Message): await answer.send() ``` +## How `cl.SemanticKernelFilter` Works + +The `cl.SemanticKernelFilter` integrates with Semantic Kernel by acting as a `function_invocation` filter. When a Semantic Kernel function is called: +1. It checks if the function or its plugin is configured to be excluded from tracking. +2. If not excluded, it creates a Chainlit `Step` of `type="tool"` with the function's fully qualified name. +3. The function's input arguments are captured and set as the `step.input`. +4. The `Step` is sent to the Chainlit UI, providing real-time visibility into tool execution. +5. After the Semantic Kernel function executes, its result is captured as `step.output`. +6. The `Step` in the Chainlit UI is then updated with the output, completing the visualization of the tool call. 
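+
+Conceptually, each tracked invocation amounts to a Chainlit `Step` like the sketch below. This is illustrative only: the filter creates and updates these steps for you, and the step name and values shown are hypothetical.
+
+```python
+import chainlit as cl
+
+async def illustrative_tool_step():
+    # Roughly what the filter records for a single function invocation
+    async with cl.Step(name="Weather-get_weather", type="tool") as step:
+        step.input = {"city": "Paris"}  # captured function arguments (hypothetical)
+        step.output = "Sunny, 22°C"     # captured function result (hypothetical)
+```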
+ +This process automatically enriches your Chainlit chat history with detailed traces of your Semantic Kernel application's internal workings, including the inputs and outputs of each tool invocation. + ## Step 3: Run the Application To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command: @@ -107,4 +131,10 @@ To start your app, open a terminal and navigate to the directory containing `app chainlit run app.py -w ``` -The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000. Interact with the bot, and if you ask for the weather (and the LLM uses the tool), you should see a "Weather-get_weather" step appear in the UI. \ No newline at end of file +The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000. Interact with the bot, and if you ask for the weather (and the LLM uses the tool), you should see a "Weather-get_weather" step appear in the UI. + +## Best Practices and Advanced Usage + +- **Excluding Functions/Plugins**: You can configure `cl.SemanticKernelFilter` to exclude specific plugins or functions from being tracked as steps, reducing noise in the UI for less critical operations. +- **Error Handling**: Implement `try-except` blocks within your `on_message` or plugin functions to gracefully handle Semantic Kernel-specific errors and provide informative feedback to the user via `cl.ErrorMessage`. +- **Lifecycle Hooks**: Utilize other Chainlit lifecycle hooks like `@cl.on_chat_end` for cleanup operations (e.g., closing Semantic Kernel resources) or `@cl.on_settings_update` to dynamically adjust Semantic Kernel behavior based on user settings. \ No newline at end of file
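+
+For instance, the error-handling practice above could look like the minimal sketch below, wrapped around the body of your existing `on_message` handler (the placeholder line stands in for your Step 2 code):
+
+```python
+import chainlit as cl
+
+@cl.on_message
+async def on_message(message: cl.Message):
+    try:
+        ...  # placeholder: invoke the kernel / chat completion service here
+    except Exception as e:
+        # Surface Semantic Kernel failures to the user instead of failing silently
+        await cl.ErrorMessage(content=f"An error occurred: {e}").send()
+```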