If I am correct, I can give a
🤖 Hey @aiwalter! Good to see you back. Hope you're cutting through code like a hot knife through butter. Let's dive into this new challenge together.

To specify the embedding separately from the LLM using Azure OpenAI and Azure embedding for a query engine, you can instantiate `AzureOpenAI` and `AzureOpenAIEmbedding` separately, each with its own model and deployment name. Then create a `ServiceContext` from these two instances and set it as the global service context. Here is an example (import paths follow the legacy, pre-0.10 llama_index layout):

```python
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding

# LLM pointing at your Azure OpenAI chat deployment
llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    deployment_name="my-custom-llm",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

# Embedding model pointing at a separate Azure embedding deployment
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-custom-embedding",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)
set_global_service_context(service_context)
```

In this code, the `AzureOpenAI` instance handles completions and the `AzureOpenAIEmbedding` instance handles embeddings, each addressed by its own `deployment_name`; the `ServiceContext` ties them together so that any query engine built afterwards uses both.
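Once the global service context is set, a query engine built afterwards should pick it up automatically. A minimal sketch of that follow-on step, assuming a `data/` directory of documents and that the Azure setup code above has already run (the directory name and query string are placeholders, not from the original reply):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Because the global service context was set above, the index and the
# query engine use the Azure LLM for answers and the Azure embedding
# model for retrieval, without passing them again here.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about X?")
print(response)
```

If you prefer not to set a global context, the same `service_context` can instead be passed explicitly, e.g. `VectorStoreIndex.from_documents(documents, service_context=service_context)`.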
What's the difference between