.Net: Bug: GettingStartedWithAgents Step09_Assistant_Vision fails with "only messages with 'type=text' are supported currently #9779

Closed
MSDNAndi opened this issue Nov 21, 2024 · 1 comment
Labels: agents, bug (Something isn't working), experimental (Associated with an experimental feature), follow up (Issues that require a follow up from the community), .NET (Issue or Pull requests regarding .NET code), question (Further information is requested), wontfix (This will not be worked on)

Comments

@MSDNAndi

Describe the bug
Running the Step09_Assistant_Vision sample fails with an error.
(I verified that it is configured with a gpt-4o model.)

To Reproduce
Steps to reproduce the behavior:

  1. Load the project.
  2. Configure secrets (see the configuration sketch after these steps).
  3. Run the test.
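
For step 2, the samples read their settings from .NET user secrets (or environment variables). Below is a minimal sketch of that lookup, assuming Microsoft.Extensions.Configuration; the section and key names ("OpenAISettings:ApiKey", "OpenAISettings:ChatModel") are illustrative, so check the GettingStartedWithAgents README for the exact keys this sample expects.

  using Microsoft.Extensions.Configuration;

  // Illustrative only: composing configuration the way the samples do.
  // Values are populated beforehand via `dotnet user-secrets set "<key>" "<value>"`.
  IConfigurationRoot configuration = new ConfigurationBuilder()
      .AddUserSecrets<Program>()        // requires a UserSecretsId on the project
      .AddEnvironmentVariables()
      .Build();

  string? apiKey = configuration["OpenAISettings:ApiKey"];    // assumed key name
  string? model  = configuration["OpenAISettings:ChatModel"]; // assumed key name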

Expected behavior
Test should complete without error.

Platform

  • OS: Windows
  • IDE: Visual Studio 2022
  • Language: C#
  • Source: main branch

Additional context

 GettingStarted.Step09_Assistant_Vision.UseSingleAssistantAgentAsync
 Source: Step09_Assistant_Vision.cs line 20
 Duration: 3.7 min

Message: 
System.ClientModel.ClientResultException : HTTP 400 (invalid_request_error: invalid_type)
Parameter: content

Invalid message content: only messages with 'type=text' are supported currently.

Stack Trace: 
ClientPipelineExtensions.ProcessMessageAsync(ClientPipeline pipeline, PipelineMessage message, RequestOptions options)
AzureAssistantClient.CreateMessageAsync(String threadId, BinaryContent content, RequestOptions options)
AssistantClient.CreateMessageAsync(String threadId, MessageRole role, IEnumerable`1 content, MessageCreationOptions options, CancellationToken cancellationToken)
AssistantThreadActions.CreateMessageAsync(AssistantClient client, String threadId, ChatMessageContent message, CancellationToken cancellationToken) line 93
<g__InvokeAgentAsync|0>d.MoveNext() line 60
--- End of stack trace from previous location ---
Step09_Assistant_Vision.UseSingleAssistantAgentAsync() line 45
Step09_Assistant_Vision.UseSingleAssistantAgentAsync() line 55
Step09_Assistant_Vision.UseSingleAssistantAgentAsync() line 55
--- End of stack trace from previous location ---
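
For context on the failure point: the sample builds a user message that mixes a text item with an image item and hands it to the agent, and it is the image item that the Azure Assistants message-create call rejects with the 'type=text' error above. A minimal sketch of that message shape, assuming the Semantic Kernel content types the sample uses (ChatMessageContent, TextContent, ImageContent) and an agent/thread already created as in Step09_Assistant_Vision (the URL is a placeholder):

  using Microsoft.SemanticKernel;
  using Microsoft.SemanticKernel.ChatCompletion;

  // Illustrative message shape, not the sample verbatim.
  ChatMessageContent message = new(
      AuthorRole.User,
      new ChatMessageContentItemCollection
      {
          new TextContent("Describe this image."),
          // This image item is what the Azure endpoint rejects:
          // "only messages with 'type=text' are supported currently".
          new ImageContent(new Uri("https://example.com/sample.jpg")),
      });

  // 'agent' and 'threadId' stand in for the assistant agent and thread
  // created earlier in the sample.
  await agent.AddChatMessageAsync(threadId, message);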

@MSDNAndi MSDNAndi added the bug (Something isn't working) label Nov 21, 2024
@markwallace-microsoft markwallace-microsoft added the .NET (Issue or Pull requests regarding .NET code) and triage labels Nov 21, 2024
@github-actions github-actions bot changed the title Bug: GettingStartedWithAgents Step09_Assistant_Vision fails with "only messages with 'type=text' are supported currently .Net: Bug: GettingStartedWithAgents Step09_Assistant_Vision fails with "only messages with 'type=text' are supported currently Nov 21, 2024
@crickman crickman added the experimental (Associated with an experimental feature) label Nov 22, 2024
@crickman crickman moved this from Bug to Sprint: In Progress in Semantic Kernel Nov 22, 2024
@crickman crickman moved this from Sprint: In Progress to Sprint: In Review in Semantic Kernel Nov 22, 2024
@crickman crickman added the question (Further information is requested) and wontfix (This will not be worked on) labels Nov 22, 2024
@crickman
Contributor

Hi @MSDNAndi - Thank you for posting this issue.

In its current form, Step09_Assistant_Vision is configured to target the OpenAI service rather than an Azure endpoint. The reason is that the Azure-based Assistant API does not support image content, as noted here: https://github.com/microsoft/semantic-kernel/blob/main/dotnet/samples/GettingStartedWithAgents/Step09_Assistant_Vision.cs#L14.

I've observed a lag in when Azure fully supports certain features, but I'm honestly surprised they have not achieved parity here yet.

I have validated that this sample works as expected when configured for the OpenAI service using either gpt-4o or gpt-4o-mini. Image support can depend on which model is targeted, but I believe the Assistant V2 API requires targeting these newer models anyway.

From our perspective, we want to support the full breadth of the Assistant API without restrictions based on which service is targeted. This is similar to SK's support for function calling: we support function calling and plugins, and yet it is possible for a developer to use a connector or target a model that does not support function calling.
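
To make the service distinction concrete, here is a rough sketch of the two client paths that sit underneath the assistant agent, assuming the OpenAI and Azure.AI.OpenAI v2 SDKs that Semantic Kernel builds on (keys, endpoint, and resource names are placeholders). The Azure path is the one that surfaces as AzureAssistantClient in the stack trace above and is the one that currently rejects image content:

  #pragma warning disable OPENAI001 // The Assistants API is still experimental in the OpenAI SDK.

  using System;
  using System.ClientModel;
  using Azure.AI.OpenAI;
  using OpenAI;
  using OpenAI.Assistants;

  // Path 1: OpenAI service. Image content in thread messages works here,
  // given a vision-capable model such as gpt-4o or gpt-4o-mini.
  OpenAIClient openAI = new("<openai-api-key>");
  AssistantClient openAIAssistants = openAI.GetAssistantClient();

  // Path 2: Azure OpenAI service. The same GetAssistantClient() call routes through
  // the Azure-specific client (AzureAssistantClient in the stack trace), and creating
  // a message with image content currently fails with
  // "only messages with 'type=text' are supported currently".
  AzureOpenAIClient azureOpenAI = new(
      new Uri("https://<resource>.openai.azure.com/"),
      new ApiKeyCredential("<azure-api-key>"));
  AssistantClient azureAssistants = azureOpenAI.GetAssistantClient();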

@crickman crickman added the follow up Issues that require a follow up from the community. label Nov 22, 2024
@github-project-automation github-project-automation bot moved this from Sprint: In Review to Sprint: Done in Semantic Kernel Nov 26, 2024