The goal is to integrate the AI component Kai into Konveyor, focusing on the stored analysis and solved incident component work.
Kai will be integrated as a bolt-on service with its own dedicated database, monitoring and interacting with Konveyor via the Hub API. The integration involves several key components:
- Konveyor Hub API
- Hub Importer Service
- Kai Service
- Dedicated Kai Database
- Analysis and Diff Generation
- Konveyor Operator
The Konveyor Hub API operates independently, generating and updating analysis reports that will be processed by Kai.
The Hub Importer Service polls the Konveyor Hub API for new or updated analysis reports. It retrieves the reports, processes them, and stores them in the database it shares with the Kai Service for use in prompting.
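A minimal polling loop might look like the sketch below. The base URL, the `/analyses` endpoint path, the `createTime` field, and the `store_report` helper are illustrative assumptions, not the actual Hub Importer implementation.

```python
import time
import requests

HUB_URL = "http://konveyor-hub.example/hub"   # assumed Hub API base URL
POLL_INTERVAL_SECONDS = 60                    # assumed polling interval


def poll_hub(last_seen: dict[int, str]) -> None:
    """Poll the Hub for analyses and hand new or updated ones to the importer."""
    resp = requests.get(f"{HUB_URL}/analyses", timeout=30)  # hypothetical endpoint
    resp.raise_for_status()
    for analysis in resp.json():
        key, version = analysis["id"], analysis.get("createTime", "")
        if last_seen.get(key) != version:       # new or updated report
            store_report(analysis)              # hypothetical: persist to the Kai DB
            last_seen[key] = version


def store_report(analysis: dict) -> None:
    """Placeholder: write the report into the dedicated Kai database."""
    ...


if __name__ == "__main__":
    seen: dict[int, str] = {}
    while True:
        poll_hub(seen)
        time.sleep(POLL_INTERVAL_SECONDS)
```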
The Kai Service is Kai's main processing unit. It consumes the data stored in the shared database, populated by the Hub Importer Service, to generate higher-quality prompts.
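To illustrate how the shared data could feed prompting, the sketch below pulls previously solved incidents that match a new incident's ruleset and violation and folds them into a prompt. The `solved_incidents` table and its columns are assumptions made for the example, not Kai's real schema.

```python
import sqlite3


def build_prompt(db_path: str, ruleset: str, violation: str, new_snippet: str) -> str:
    """Assemble a prompt that includes past solutions for the same violation."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        # hypothetical table/columns; the real Kai schema may differ
        "SELECT before_code, after_code FROM solved_incidents "
        "WHERE ruleset = ? AND violation = ? LIMIT 3",
        (ruleset, violation),
    ).fetchall()
    conn.close()

    examples = "\n\n".join(
        f"Before:\n{before}\nAfter:\n{after}" for before, after in rows
    )
    return (
        f"The following code triggers violation '{violation}' ({ruleset}):\n"
        f"{new_snippet}\n\n"
        f"Here is how similar incidents were previously solved:\n{examples}\n\n"
        "Suggest an equivalent fix for the code above."
    )
```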
The Analysis and Diff Generation component processes incoming analysis reports to identify changes and resolved incidents. It compares new reports with previous ones to generate diffs, which highlight these changes.
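One simple way to identify resolved incidents is a set difference over incident keys between consecutive reports. The sketch below assumes each incident carries `ruleset`, `violation`, `file`, and `line` fields; the real report format is richer.

```python
def incident_key(incident: dict) -> tuple:
    """Identity of an incident across reports (assumed fields)."""
    return (incident["ruleset"], incident["violation"],
            incident["file"], incident["line"])


def diff_reports(previous: list[dict], current: list[dict]) -> dict:
    """Split incidents into resolved (gone), new, and unchanged."""
    prev = {incident_key(i): i for i in previous}
    curr = {incident_key(i): i for i in current}
    return {
        "resolved":  [prev[k] for k in prev.keys() - curr.keys()],
        "new":       [curr[k] for k in curr.keys() - prev.keys()],
        "unchanged": [curr[k] for k in prev.keys() & curr.keys()],
    }
```

Keying on exact line numbers is brittle once surrounding code shifts, so the real diffing logic would likely need fuzzier matching; the sketch only shows the general idea.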
The Dedicated Kai Database stores all the analysis reports, diffs, and related data for Kai. It provides a long-term storage solution to support future analyses and recomputations.
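The long-term storage could be a handful of relational tables keyed by application and report. The SQLite schema below only illustrates the kind of data retained (reports, incidents, and solved-incident diffs); it is not the actual Kai schema.

```python
import sqlite3

# Illustrative schema only; the real Kai database layout may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS reports (
    id          INTEGER PRIMARY KEY,
    application TEXT NOT NULL,
    created_at  TEXT NOT NULL,
    raw_report  TEXT NOT NULL          -- full analysis report as JSON
);
CREATE TABLE IF NOT EXISTS incidents (
    id        INTEGER PRIMARY KEY,
    report_id INTEGER REFERENCES reports(id),
    ruleset   TEXT, violation TEXT, file TEXT, line INTEGER
);
CREATE TABLE IF NOT EXISTS solved_incidents (
    id          INTEGER PRIMARY KEY,
    incident_id INTEGER REFERENCES incidents(id),
    before_code TEXT, after_code TEXT  -- change that resolved the incident
);
"""


def init_db(path: str = "kai.db") -> None:
    """Create the illustrative tables if they do not already exist."""
    with sqlite3.connect(path) as conn:
        conn.executescript(SCHEMA)
```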
The Konveyor Operator will handle the deployment and management of both Konveyor and Kai.
The end-to-end data flow works as follows:
- The Konveyor Hub API operates independently, generating and updating analysis reports through normal use.
- The Hub Importer Service polls the Konveyor Hub API for new or updated reports.
- The importer processes the analysis reports and stores them directly in the Dedicated Kai Database, shared with the Kai Service.
- The Kai Service retrieves the data stored in the shared database when building RAG prompts to send to LLMs.
- Incident Mining: As Kai ingests more reports and finds solutions, these solved incidents are used to improve the RAG (Retrieval-Augmented Generation) prompts that Kai sends to the LLMs (Large Language Models); a minimal mining sketch follows this list.
- Continuous Improvement: The system continuously learns from the ingested data, enhancing its ability to generate accurate and useful insights over time.
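For incident mining, a resolved incident becomes most useful once it is paired with the code change that resolved it. The sketch below assumes the application source is available locally as a git repository and uses the incident's file path to pull the relevant change between the two analyzed commits; it is an illustration, not Kai's actual mining logic.

```python
import subprocess


def change_for_file(repo_path: str, file_path: str,
                    old_commit: str, new_commit: str) -> str:
    """Return the git diff for one file between the two analyzed commits."""
    return subprocess.run(
        ["git", "-C", repo_path, "diff", old_commit, new_commit, "--", file_path],
        capture_output=True, text=True, check=True,
    ).stdout


def mine_solved_incident(repo_path: str, resolved: dict,
                         old_commit: str, new_commit: str) -> dict:
    """Attach the code change that presumably resolved the incident."""
    return {
        "ruleset": resolved["ruleset"],
        "violation": resolved["violation"],
        "file": resolved["file"],
        "diff": change_for_file(repo_path, resolved["file"], old_commit, new_commit),
    }
```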
This architecture integrates Kai into Konveyor as a loosely coupled service: the Hub continues to operate unchanged, while Kai gains the data processing and long-term storage it needs to improve its suggestions over time.