Querying requirements for RAG don't fall only on unstructured data that has been embedded and added to a vector database. They also fall on structured data sources, where semantic search doesn't really make sense.
Goal: Provide a pipeline interface that connects to a structured data source and generates database queries in real time based on incoming search queries.
Implementation:
- A pseudo `Pipeline` without an embed or sink connector, just a data source.
- The data source connector is configured, and an initial pull from the database is done to examine the fields available and their types.
- `Search` generates a query using an LLM based on the fields available in the database (see the sketch after this list).
- The `Pipeline` can be used as part of a `PipelineCollection` and supported by `smart_route` so the model can decide when to use it.
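A minimal sketch of what this flow could look like, using SQLite as a stand-in source. All names here (`fetch_schema`, `search`, the `llm` callable) are hypothetical, not an existing API:

```python
import sqlite3
from typing import Callable

def fetch_schema(conn: sqlite3.Connection, table: str) -> dict:
    # Initial pull from the data source: examine the fields available and their types.
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return {name: col_type for _, name, col_type, *_ in rows}

def search(conn: sqlite3.Connection, table: str, user_query: str,
           llm: Callable[[str], str]) -> list:
    # At search time, prompt the LLM with the discovered schema and the
    # incoming search so it can generate a SQL query to run.
    schema = fetch_schema(conn, table)
    columns = ", ".join(f"{c}: {t}" for c, t in schema.items())
    prompt = f"tables:\n CREATE TABLE {table} ({columns});\nquery for: {user_query}"
    sql = llm(prompt)  # LLM call, e.g. a generate_sql-style helper as shown later in this thread
    return conn.execute(sql).fetchall()
```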
Alternative implementation:
To reduce the latency of doing 2-3 back-to-back LLM calls to generate a query and validate it, what if the query generation was done pre-emptively and the results cached in a vector database?
Using an LLM, we would try to predict the top queries one might expect against the database, and their permutations. (This might limit the complexity of the queries, but could answer 80% of use cases.)
At search time, we would run a similarity search of the incoming query against the descriptions of the "cached" queries. We can then run the top query against the database.
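Roughly, the cached-query flow might look like this. The `embed` callable stands in for any embedding model, and a plain list stands in for the vector database; all names are illustrative:

```python
import numpy as np

def build_cache(pairs: list, embed) -> list:
    # Offline: for each pre-generated query, embed its natural-language description.
    return [{**p, "vec": np.asarray(embed(p["description"]))} for p in pairs]

def lookup(cache: list, incoming_query: str, embed) -> str:
    # At search time: cosine similarity of the incoming query against the cached
    # descriptions; return the top query's SQL to run against the database.
    q = np.asarray(embed(incoming_query))
    def cosine(v):
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    best = max(cache, key=lambda p: cosine(p["vec"]))
    return best["query"]
```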
@ddematheu Haven't yet fully understood this, but the alternatives sound similar to the internals of this project - aidb. Can you please give an example to elaborate?
As far as I understand, we have some structured data sources, and we want to map a natural language query to an appropriate SQL query (or any structured query) using an LLM.
The thought process was: given a database, generate a set of common queries for it (based on its schema) using an LLM. From there, take the queries and their descriptions and embed them (embed the description). Then at runtime, when someone searches, we take the search, compare it against the embeddings, and use the stored query to query the database (or pass it into the database for fine-tuning based on the search).
```python
input_prompt = '''tables:\n CREATE TABLE engineers (id: VARCHAR, name: TEXT, age: INT); \nquery for: Group by the age column'''
print("Generated SQL:")
generate_sql(input_prompt=input_prompt)
```
Output:
```
Generated SQL:
'SELECT name, age FROM engineers GROUP BY age'
```
So we would create pairs and embed the description:

```python
{'query': 'SELECT name, age FROM engineers GROUP BY age', 'description': 'Group by the age column'}
```
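Continuing with the hypothetical `build_cache`/`lookup` helpers sketched earlier in this thread, the pair would then be used like this:

```python
pairs = [{'query': 'SELECT name, age FROM engineers GROUP BY age',
          'description': 'Group by the age column'}]
cache = build_cache(pairs, embed)  # embed = your embedding model of choice
sql = lookup(cache, "show engineers grouped by their age", embed)
# -> 'SELECT name, age FROM engineers GROUP BY age', ready to run against the database
```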