Replies: 5 comments
-
Are you customizing the LLM or service context at all? Once you start customizing retrievers and response synthesizers, there are a lot of places that need the service context. It's usually best to set a global service context to simplify things 🤔 Also, if the temperature is higher than zero, then getting different answers also seems possible.
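For context, the "global service context" suggested above boils down to a process-wide fallback: components that aren't handed a context explicitly use the global one. Here is a minimal pure-Python sketch of that pattern (illustrative names only; in llama_index at the time of this thread the real entry point was `set_global_service_context`, importable from the top-level package):

```python
# Sketch of the "global fallback" pattern behind a global service context.
# These names are illustrative, not the real llama_index implementation.

_global_service_context = None

def set_global_service_context(ctx):
    """Register a process-wide default context."""
    global _global_service_context
    _global_service_context = ctx

def resolve_service_context(local_ctx=None):
    """What each component does internally: prefer an explicit context,
    otherwise fall back to the global one."""
    return local_ctx if local_ctx is not None else _global_service_context
```

With a global set once, you no longer have to thread the same context through every retriever and synthesizer constructor by hand.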
-
Thanks @logan-markewich! Since I was previously using Azure OAI, I always explicitly specified the same service context for all classes (is this approach the global service context? I'll search the docs tomorrow!). I'll also double-check the LLM args tomorrow (I recall I have only one configuration that calls through to my local model).
-
Thanks @logan-markewich , now with
-
@wey-gu yea, I'm guessing there was a spot where the service context wasn't being passed before. Maybe into the response synthesizer? At least with the global it's less worrisome 👍
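The failure mode described here is easy to reproduce in miniature: one component in the pipeline isn't handed the shared context, silently falls back to its own default, and two "identical" query paths end up answering with different LLMs. A toy illustration (hypothetical classes, not llama_index code):

```python
# Toy illustration of a component silently falling back to its default
# when the shared context is not passed in. Hypothetical names only.

DEFAULT_LLM = "openai-default"

class Retriever:
    def __init__(self, llm):
        self.llm = llm

class ResponseSynthesizer:
    # If the caller forgets to pass the context, this default kicks in.
    def __init__(self, llm=DEFAULT_LLM):
        self.llm = llm

def build_pipeline(llm, pass_context_to_synthesizer):
    retriever = Retriever(llm)
    if pass_context_to_synthesizer:
        synthesizer = ResponseSynthesizer(llm)
    else:
        synthesizer = ResponseSynthesizer()  # silent fallback to default
    return retriever.llm, synthesizer.llm
```

A global default removes this class of bug, because the fallback itself would resolve to your configured context rather than a hardcoded one.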
-
🤖 Hello,

Thank you for your detailed question. It seems like you're experiencing different results between your custom query engine and the vector index query engine. This could be due to the different configurations and capabilities of these two types of query engines.

The ComposableGraphQueryEngine in LlamaIndex allows for the use of custom query engines for its sub-indices. This means that the custom query engine you're using could be configured to use both the VectorStore Index and the Knowledge Graph Index.

The ComposableGraphQueryEngine also has a recursive option. If this is set to true, it will recursively query the graph, potentially retrieving more granular and dense information across many more nodes than the VectorStore Index alone. This could explain why the custom query engine may return "don't know" for a question about the data if it cannot find any relevant relationships or facts in the Knowledge Graph Index. On the other hand, if the vector index query engine is not configured to use recursion or a Knowledge Graph Index, it may return a wrong answer if the top K related nodes do not contain the correct information.

Here is the relevant code and explanation from the context:

```python
class ComposableGraphQueryEngine(BaseQueryEngine):
    """Composable graph query engine.

    This query engine can operate over a ComposableGraph.
    It can take in custom query engines for its sub-indices.

    Args:
        graph (ComposableGraph): A ComposableGraph object.
        custom_query_engines (Optional[Dict[str, BaseQueryEngine]]): A dictionary of
            custom query engines.
        recursive (bool): Whether to recursively query the graph.
        **kwargs: additional arguments to be passed to the underlying index query
            engine.
    """
```

You can find this in the graph_query_engine.py file.

As for the default configuration for the vector index query engine in LlamaIndex, it includes parameters like table name, database name, index type, metric type, batch size, index parameters, and search parameters. You can find more details in the myscale.py file.

Lastly, I found a similar issue in the LlamaIndex repository that might be helpful. The issue is titled "What is returned when the retrieved documents don't have information for the given question?". The solution suggested is to use more custom prompts and ensure both a text QA template and a refine template are defined. You can also try using one of the eval functions in the repo, but this will slow down responses. More information can be found at the following links: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_prompts.html and https://github.com/jerryjliu/llama_index/tree/main/examples/evaluation

I hope this helps! If you have any more questions, feel free to ask.

Best,
-
Question
Background:
The strange thing here is: when a question asks about something that has nothing related to it in the data:
I checked and both responses' nodes are the same (from vector search), so it seems something works correctly in the custom query engine but not in the vector index query engine. I looked into the code but couldn't find anything related (default kwargs). Could you please help point out what I could be missing?
Thanks!