Add a ranker component that uses an LLM to rerank documents #8540
Comments
Hey @srini047 thanks for your interest! rank_llm certainly looks like an interesting tool. However, I think a good first version of this for Haystack would be to utilize our existing ChatGenerators to power the LLM calling. This new component would then wrap the ChatGenerator to handle the input and output requirements (i.e. docs as input and docs as output).
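A minimal sketch of that shape: a component that takes documents in and returns reranked documents out, with the ChatGenerator doing the LLM calling. The component name `LLMDocumentRanker`, the prompt wording, and the score parsing are illustrative assumptions, not a final design:

```python
# Sketch only: LLMDocumentRanker is a hypothetical name, and the prompt and
# score parsing are assumptions. Document, ChatMessage, the @component
# decorator, and OpenAIChatGenerator are existing Haystack 2.x building blocks.
from typing import List, Tuple

from haystack import Document, component
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage


@component
class LLMDocumentRanker:
    """Wraps a ChatGenerator: documents in, reranked documents out."""

    def __init__(self, chat_generator: OpenAIChatGenerator, top_k: int = 10):
        self.chat_generator = chat_generator
        self.top_k = top_k

    @component.output_types(documents=List[Document])
    def run(self, query: str, documents: List[Document]):
        scored: List[Tuple[float, Document]] = []
        for doc in documents:
            prompt = (
                "Rate how relevant the document is to the query on a scale of 0 to 10. "
                "Reply with a single number only.\n\n"
                f"Query: {query}\n\nDocument: {doc.content}"
            )
            result = self.chat_generator.run(messages=[ChatMessage.from_user(prompt)])
            reply = result["replies"][0].text  # .text assumes a recent haystack-ai release
            try:
                score = float(reply.strip())
            except (TypeError, ValueError):
                score = 0.0  # an unparsable reply is treated as not relevant
            scored.append((score, doc))
        # Sort by score only, so equal-score documents never get compared directly
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return {"documents": [doc for _, doc in scored[: self.top_k]]}
```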
Hi @sjrl, I currently have an implementation here (of how a reranker using a Haystack generator would look): https://colab.research.google.com/drive/1t9ohLid1DEk6E49LsQaN9jqoDzLexmU-?usp=sharing To set the understanding right, I need to pass these parameters to the LLMRanker component:
Then we have to run this component and have it respond with
Your thoughts and input on this would be really helpful. Thanks in advance.
yup!
Yeah, setting a default here makes sense. I'd like to follow the design pattern we have been using elsewhere, like the Metadata Extractor. See here
Yes, I think having a default prompt is a good idea. Let's utilize the PromptBuilder under the hood to render the prompt.
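For instance, a default ranking prompt could be rendered with Haystack's PromptBuilder like this; the template text itself is only an assumption for illustration:

```python
# PromptBuilder and its Jinja2 templating are the real Haystack API;
# the default template below is an assumed example, not the final prompt.
from haystack import Document
from haystack.components.builders import PromptBuilder

DEFAULT_TEMPLATE = """Given the query, rate each document's relevance from 0 to 10.

Query: {{ query }}

{% for doc in documents %}
Document {{ loop.index }}: {{ doc.content }}
{% endfor %}"""

builder = PromptBuilder(template=DEFAULT_TEMPLATE)
result = builder.run(
    query="What is Haystack?",
    documents=[Document(content="Haystack is an open-source LLM framework.")],
)
print(result["prompt"])  # rendered prompt string, ready to send to a ChatGenerator
```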
I'd set the default to be higher. Probably at least 10.
I still need to have a look at what you provided.
You're right, we could use either here.
Yeah, this is a great question that I would say is still open. Some requirements that I think make sense:
Describe the solution you'd like
I’d like to add a new ranker component that leverages an LLM to rerank retrieved documents based on their relevance to the query. This would better assess the quality of the top-ranked documents, helping ensure that only relevant results are passed to the LLM that answers the question.
Additionally, giving the LLM the ability to choose how many documents to keep would also be nice: a sort of dynamic top-k, if you will (see the sketch below).
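One way such a dynamic top-k could work is to ask the LLM for the indices of all relevant documents rather than a score per document, so the cutoff falls out of the model's own judgment. A rough sketch, where the prompt wording, the helper name `select_relevant`, and the index parsing are assumptions:

```python
# Dynamic top-k sketch: the LLM returns the numbers of the documents it judges
# relevant, and only those are kept. Prompt wording and parsing are assumed.
import re
from typing import List

from haystack import Document
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage


def select_relevant(
    query: str, documents: List[Document], generator: OpenAIChatGenerator
) -> List[Document]:
    numbered = "\n".join(f"{i + 1}. {doc.content}" for i, doc in enumerate(documents))
    prompt = (
        "List the numbers of ALL documents relevant to the query, comma-separated.\n"
        "If none are relevant, reply 'none'.\n\n"
        f"Query: {query}\n\nDocuments:\n{numbered}"
    )
    reply = generator.run(messages=[ChatMessage.from_user(prompt)])["replies"][0].text
    # Pull out every number in the reply; anything unparsable keeps zero documents
    indices = {int(n) - 1 for n in re.findall(r"\d+", reply or "")}
    return [doc for i, doc in enumerate(documents) if i in indices]
```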
Additional context
We have started to employ this for some clients, especially in situations where we need to provide extensive references. Basically, for a given answer we need to provide all relevant documents that support the answer text; having a single reference in these situations is not enough. As a result, we are willing to pay the extra cost of using an LLM to rerank and keep only the most relevant documents.