
Internal Error 500 when using function calls and multimodal prompts together #396

Closed
soundkprz opened this issue May 1, 2024 · 7 comments
Assignees
Labels
component:support How to do xyz? type:bug Something isn't working

Comments

@soundkprz

Description of the bug:

When sending a multimodal request to the model via the Python SDK while also defining tools / function declarations, I consistently receive an internal error 500. If I remove either the tools or the image and keep the other, it works again.
I haven't changed anything in the code, and it worked fine before; now I suddenly get this error.

Actual vs expected behavior:

Actual: I receive an error 500

Expected: The model should handle the request and respond without error

Any other information you'd like to share?

# Imports assumed from the Vertex AI SDK (google-cloud-aiplatform)
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Tool,
)

google_search_query = FunctionDeclaration(
    name="search_and_scrape",
    description="Get results from a Google Search",
    parameters={
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"}
        },
    },
)

search_tools = Tool(
    function_declarations=[google_search_query]
)

model = GenerativeModel(
    "gemini-1.5-pro-preview-0409",
    generation_config=GenerationConfig(
        temperature=0.6,
        max_output_tokens=2048,
    ),
    tools=[search_tools],
)

async def main():
    # User prompt
    prompt = "What are the latest news for Paris?"
    image_url = [image]  # `image` is defined elsewhere in the project
    loaded_image = [load_image_from_url(url) for url in image_url]
    print("In the function now")

    response = model.generate_content(list(loaded_image) + [prompt], tools=[search_tools])
    print(response)

The call fails with the following traceback:
  File "C:\Users\djhar\PycharmProjects\Discord Bot with PaLM\venv\Lib\site-packages\google\api_core\grpc_helpers.py", line 76, in error_remapped_callable
    return callable_(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\djhar\PycharmProjects\Discord Bot with PaLM\venv\Lib\site-packages\grpc\_channel.py", line 1181, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\djhar\PycharmProjects\Discord Bot with PaLM\venv\Lib\site-packages\grpc\_channel.py", line 1006, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INTERNAL
    details = "Internal error encountered."
    debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.185.170:443 {created_time:"2024-05-01T11:47:48.4953737+00:00", grpc_status:13, grpc_message:"Internal error encountered."}"
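If the error really is intermittent on the server side, a generic retry with exponential backoff is one pragmatic workaround while the bug is investigated. This is my own sketch, not part of any Google SDK; `call_with_retry` and its parameters are hypothetical names:

```python
import time

def call_with_retry(fn, *, retries=3, base_delay=1.0, retriable=(Exception,)):
    """Retry fn() with exponential backoff on retriable errors.

    A generic sketch for riding out transient server-side failures
    (e.g. gRPC StatusCode.INTERNAL); not specific to any SDK.
    """
    for attempt in range(retries):
        try:
            return fn()
        except retriable:
            if attempt == retries - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping the failing call, e.g. `call_with_retry(lambda: model.generate_content(contents, tools=[search_tools]))`, would retry an INTERNAL error a few times before giving up.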
@soundkprz soundkprz added the type:bug Something isn't working label May 1, 2024
@singhniraj08
Contributor

@soundkprz,

Thank you for reporting this issue. This looks like an intermittent error and should work now.
This repository is for issues related to the website (https://ai.google.dev/), such as documentation bugs or improvements. For issues related to Gemini, we suggest using the "Send Feedback" option in the Gemini docs, as shown in the screenshot below.

[screenshot: "Send Feedback" option in the Gemini docs]

@singhniraj08 singhniraj08 added status:awaiting user response Awaiting a response from the author component:support How to do xyz? labels May 2, 2024
@soundkprz
Author

> Thank you for reporting this issue. This looks like an intermittent error and should work now. This repository is for issues related to the website (https://ai.google.dev/), such as documentation bugs or improvements. For issues related to Gemini, we suggest using the "Send Feedback" option in the Gemini docs.

Oh, I'm sorry then, my bad!

Sadly the issue isn't fixed as of right now, and I've consistently hit the same error since Thursday last week :/

Can I still send feedback the way you described when I'm using Vertex AI?

@singhniraj08 singhniraj08 removed the status:awaiting user response Awaiting a response from the author label May 2, 2024
@MarkDaoust
Member

+1. I'm not able to reproduce this, are you using google.generativeai or vertex?

This code works fine:

import google.generativeai as genai
import google.ai.generativelanguage as glm

google_search_query = glm.FunctionDeclaration(
    name="search_and_scrape",
    description="Get results from a Google Search",
    parameters={
        "type_": "OBJECT",
        "properties": {
            "query": {"type_": "STRING", "description": "Search query"}
        },
    },
)

search_tools = glm.Tool(
    function_declarations=[google_search_query]
)

model = genai.GenerativeModel(
    "gemini-1.5-pro-latest",
    generation_config=genai.GenerationConfig(
        temperature=0.6,
        max_output_tokens=2048,
    ),
    tools=[search_tools],
)

# User prompt
prompt = "What are the latest news for Paris?"
response = model.generate_content([img, prompt], tools=[search_tools])
print(response)
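Note the key-casing difference between the two snippets in this thread: the vertexai declaration uses plain JSON-schema-style keys ("type": "object"), while the glm one uses proto-style keys ("type_": "OBJECT"). When porting a schema between the two conventions, a small converter can avoid retyping. This is my own helper, not part of either SDK, and it only handles the "type" and "properties" keywords:

```python
def to_glm_schema(schema):
    """Recursively convert a JSON-schema-style dict ("type": "object")
    into proto-style keys ("type_": "OBJECT").

    Minimal sketch: only "type" and "properties" are translated;
    other keywords (e.g. "description") pass through unchanged.
    """
    out = {}
    for key, value in schema.items():
        if key == "type":
            out["type_"] = str(value).upper()
        elif key == "properties":
            out["properties"] = {k: to_glm_schema(v) for k, v in value.items()}
        else:
            out[key] = value
    return out
```

For example, `to_glm_schema({"type": "string", "description": "Search query"})` yields `{"type_": "STRING", "description": "Search query"}`.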

@soundkprz
Author

soundkprz commented May 2, 2024

> +1. I'm not able to reproduce this, are you using google.generativeai or vertex? This code works fine: […]

I'm having the error with both Vertex AI and generativeai.
Your code works fine for me too, because it isn't multimodal.
Can you try adding an actual image to your prompt as well? If I do that, I get the error again.

Edit: I see you pass [img, prompt] to the model, but there's no img defined. If I actually define one and send the image to the model as well, then I get the error.
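For reference, the original snippet calls a load_image_from_url helper that isn't shown in the thread. One plausible implementation (my own sketch, not from either SDK) splits fetching from decoding so the decode step can be checked without a network:

```python
from io import BytesIO
import urllib.request

from PIL import Image  # assumes Pillow is installed

def bytes_to_image(data: bytes) -> Image.Image:
    """Decode raw image bytes into an RGB PIL image."""
    return Image.open(BytesIO(data)).convert("RGB")

def load_image_from_url(url: str, timeout: float = 10.0) -> Image.Image:
    """Fetch an image over HTTP(S) and decode it with PIL."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return bytes_to_image(resp.read())
```

The returned PIL image can then be passed directly in the contents list, e.g. `model.generate_content([img, prompt], tools=[search_tools])`.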

@MarkDaoust
Member

> model.generate_content([img, prompt], tools=[search_tools])

I was passing a PIL.Image here (it would have been a name error otherwise).

@soundkprz
Author

> model.generate_content([img, prompt], tools=[search_tools])
>
> I was passing a PIL.Image here (it would have been a name error otherwise).

Oh? That is very, very weird then.
I've had this error since Thursday last week, on 2 separate PCs, with both vertexai and generativeai, in every project I've tried it in (I even made a new project with new code just to try it).

I don't understand why I get this error then :(

@soundkprz
Author

> model.generate_content([img, prompt], tools=[search_tools])
>
> I was passing a PIL.Image here (it would have been a name error otherwise).

Now your code works for me, but Vertex AI still has the same issue, even when I run the same code with vertexai instead of genai.
