Replies: 1 comment
🤖

Yes, you are correct. With the deprecation of `cache` in `LLMPredictor`, the Portkey integration is currently the recommended way of doing caching in llama_index. Regarding local or open-source caching options, the integration's `base_url` parameter lets you point it at a self-hosted Rubeus server (the open-source version of Portkey) instead of the hosted service. Here is the relevant code:

```python
def __init__(
    self,
    *,
    mode: Union["Modes", "ModesLiteral"],
    api_key: Optional[str] = None,
    base_url: Optional[str] = None,
    system_prompt: Optional[str] = None,
    messages_to_prompt: Optional[Callable[[Sequence[ChatMessage]], str]] = None,
    completion_to_prompt: Optional[Callable[[str], str]] = None,
    pydantic_program_mode: PydanticProgramMode = PydanticProgramMode.DEFAULT,
    output_parser: Optional[BaseOutputParser] = None,
) -> None:
    """
    Initialize a Portkey instance.

    Args:
        mode (Optional[Modes]): The mode for using the Portkey integration
            (default: Modes.SINGLE).
        api_key (Optional[str]): The API key to authenticate with Portkey.
        base_url (Optional[str]): The base URL of a self-hosted Rubeus server
            (the open-source version of Portkey) or any other self-hosted server.
    """
    ...
    if base_url is not None:
        portkey.base_url = base_url
    ...
```

This code shows that you can initialize a Portkey instance with a `base_url` pointing at a self-hosted Rubeus server or any other compatible self-hosted server. As for GPTCache, as you noted, it is not yet fully integrated with llama_index.
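For a concrete starting point, initializing the integration against a self-hosted server might look roughly like this (a minimal sketch; the import path, the `"single"` mode value, and the localhost URL are assumptions, so adjust them to the llama_index and Portkey/Rubeus versions you actually have installed):

```python
# Minimal sketch: point the Portkey LLM integration at a self-hosted Rubeus
# server via base_url.  The import path, mode value, and URL are assumptions,
# not verified against a specific llama_index release.
from llama_index.llms import Portkey

portkey_llm = Portkey(
    mode="single",                     # one of the ModesLiteral values accepted by the signature above
    api_key="YOUR_PORTKEY_API_KEY",    # may not be needed for a purely self-hosted setup
    base_url="http://localhost:8787",  # hypothetical URL of your self-hosted Rubeus instance
)
```

Whether caching itself is then handled by the self-hosted gateway depends on how that gateway is configured; that part is outside the snippet above.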
I hope this helps! If you have any more questions, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
It seems `cache` in `LLMPredictor` (or `LLMPredictor` itself) has been deprecated, and GPTCache isn't fully integrated with llama_index, so is Portkey the only recommended way of doing caching in llama_index? In this case, it would be helpful to have local/open-source caching support that doesn't require third-party services. Thanks.
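For reference, the kind of local, third-party-free caching being asked for here could be approximated with a small wrapper like the following (an illustrative sketch only, not a llama_index feature; `llm` stands for any LLM object exposing a `complete(prompt)` call, and the cache directory name is arbitrary):

```python
# Illustrative local cache around an LLM call -- not a built-in llama_index
# feature.  Responses are keyed by a hash of the prompt and stored as JSON
# files on disk, so repeated prompts skip the remote call entirely.
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".llm_cache")  # arbitrary local directory
CACHE_DIR.mkdir(exist_ok=True)

def cached_complete(llm, prompt: str) -> str:
    """Return the completion for `prompt`, reusing a local disk cache when possible."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())["text"]
    # `llm.complete` is assumed to be any LlamaIndex-style completion call.
    text = str(llm.complete(prompt))
    cache_file.write_text(json.dumps({"prompt": prompt, "text": text}))
    return text
```

A fuller solution would also key on the model name and generation parameters; the sketch above only keys on the raw prompt text.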