|
---
title: Semantic Kernel
---

In this tutorial, we'll walk through the steps to create a Chainlit application integrated with [Microsoft's Semantic Kernel](https://github.com/microsoft/semantic-kernel). The integration automatically visualizes Semantic Kernel function calls (such as plugins or tools) as Steps in the Chainlit UI.

## Prerequisites

Before getting started, make sure you have the following:

- A working installation of Chainlit
- The `semantic-kernel` package installed (see the commands below)
- An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel
- A basic understanding of Python programming and Semantic Kernel concepts (Kernel, Plugins, Functions)
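
If you still need to install the dependencies, a typical setup looks like the following (both packages are on PyPI; the exact environment variable depends on which AI service you configure, `OPENAI_API_KEY` is shown here as an example):

```bash
pip install chainlit semantic-kernel

# Semantic Kernel's OpenAI connector reads the API key from the environment
export OPENAI_API_KEY="sk-..."
```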

## Step 1: Create a Python file

Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your Semantic Kernel-powered LLM application.
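
If you're starting from scratch, you can create the project and the file from the terminal (the directory name `chainlit-sk-demo` is just an example):

```bash
mkdir chainlit-sk-demo && cd chainlit-sk-demo
touch app.py
```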

## Step 2: Write the Application Logic

In `app.py`, import the necessary packages, set up your Semantic Kernel `Kernel`, add the `SemanticKernelFilter` for Chainlit integration, and define functions to handle chat sessions and incoming messages.

Here's an example demonstrating how to set up the kernel and use the filter:

```python app.py
import chainlit as cl
import semantic_kernel as sk
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import kernel_function
from semantic_kernel.contents import ChatHistory

request_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(filters={"excluded_plugins": ["ChatBot"]})
)

# Example native plugin (tool) the LLM can call
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Gets the weather for a city")
    def get_weather(self, city: str) -> str:
        """Retrieves the weather for a given city."""
        if "paris" in city.lower():
            return f"The weather in {city} is 20°C and sunny."
        elif "london" in city.lower():
            return f"The weather in {city} is 15°C and cloudy."
        else:
            return f"Sorry, I don't have the weather for {city}."

@cl.on_chat_start
async def on_chat_start():
    # Set up Semantic Kernel
    kernel = sk.Kernel()

    # Add your AI service (e.g., OpenAI)
    # Make sure OPENAI_API_KEY (and optionally OPENAI_ORG_ID) is set in your environment
    ai_service = OpenAIChatCompletion(service_id="default", ai_model_id="gpt-4o")
    kernel.add_service(ai_service)

    # Import the WeatherPlugin
    kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")

    # Instantiate and add the Chainlit filter to the kernel
    # This will automatically capture function calls as Steps
    sk_filter = cl.SemanticKernelFilter(kernel=kernel)

    cl.user_session.set("kernel", kernel)
    cl.user_session.set("ai_service", ai_service)
    cl.user_session.set("chat_history", ChatHistory())

@cl.on_message
async def on_message(message: cl.Message):
    kernel = cl.user_session.get("kernel")  # type: sk.Kernel
    ai_service = cl.user_session.get("ai_service")  # type: OpenAIChatCompletion
    chat_history = cl.user_session.get("chat_history")  # type: ChatHistory

    # Add the user message to the history
    chat_history.add_user_message(message.content)

    # Create a Chainlit message for the response stream
    answer = cl.Message(content="")

    async for msg in ai_service.get_streaming_chat_message_content(
        chat_history=chat_history,
        user_input=message.content,
        settings=request_settings,
        kernel=kernel,
    ):
        if msg.content:
            await answer.stream_token(msg.content)

    # Add the full assistant response to the history
    chat_history.add_assistant_message(answer.content)

    # Send the final message
    await answer.send()
```
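
If you're targeting Azure OpenAI instead, you can swap the chat service when building the kernel. A minimal sketch, assuming your credentials are exposed through the standard `AZURE_OPENAI_*` environment variables read by Semantic Kernel's Azure connector:

```python
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# Assumes AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and
# AZURE_OPENAI_CHAT_DEPLOYMENT_NAME are set in the environment.
ai_service = AzureChatCompletion(service_id="default")
kernel.add_service(ai_service)
```

The rest of the example stays the same: the `SemanticKernelFilter` is attached to the kernel rather than to a specific service, so function calls are still captured as Steps.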

## Step 3: Run the Application

To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:

```bash
chainlit run app.py -w
```

The `-w` flag enables auto-reloading, so you don't need to restart the server every time you change your application. Your chatbot UI should now be accessible at http://localhost:8000. Interact with the bot, and if you ask for the weather (and the LLM decides to use the tool), you should see a "Weather-get_weather" step appear in the UI.
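
If port 8000 is already taken on your machine, you can run the app on a different one; for example:

```bash
chainlit run app.py -w --port 8080
```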