How to use Azure OpenAI together with Azure embedding for a query engine? #9826

Answered by dosubot bot
aiwalter asked this question in Q&A

🤖

Hey @aiwalter! Good to see you back. Hope you're cutting through code like a hot knife through butter. Let's dive into this new challenge together.

To specify the embedding model separately from the LLM when using Azure OpenAI for a query engine, instantiate AzureOpenAI and AzureOpenAIEmbedding separately, each with its own model and deployment name. Then create a ServiceContext from the two instances and set it as the global service context. Here is an example:

from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index import ServiceContext, set_global_service_context

# "my-custom-embedding" is a placeholder deployment name; use the names
# configured in your Azure OpenAI resource.
llm = AzureOpenAI(model="gpt-35-turbo-16k", deployment_name="my-custom-llm")
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002", deployment_name="my-custom-embedding"
)

service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
set_global_service_context(service_context)

Note that AzureOpenAI and AzureOpenAIEmbedding live in llama_index.llms and llama_index.embeddings respectively, not in the top-level llama_index package. Both constructors also accept api_key, azure_endpoint, and api_version for your Azure resource.
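Once the global service context is set, any index built afterwards uses the Azure embedding deployment for retrieval and the Azure LLM deployment for synthesis, with no extra wiring. A minimal sketch, assuming your documents live in a hypothetical ./data directory and credentials are already configured:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load documents and build an index; embeddings are computed with the
# Azure embedding deployment from the global service context.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The query embeds the question with the same embedding model and
# synthesizes the answer with the Azure LLM deployment.
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about X?")
print(response)
```

You can also pass service_context=service_context directly to VectorStoreIndex.from_documents if you prefer not to set it globally.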

Answer selected by aiwalter