
Error running Module 3 - Lesson 3: Editing State and Human Feedback with Gemini: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn. #28

Open
lucasmaraal opened this issue Sep 18, 2024 · 9 comments · Fixed by langchain-ai/langchain-google#516


@lucasmaraal

Steps to reproduce:

Running the following snippets will reproduce the issue:

  1. Create the graph:

# Imports assumed from earlier cells in the lesson:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition

# `tools` is defined earlier in the lesson; a minimal stand-in that matches
# the multiply call visible in the trace below:
def multiply(a: float, b: float) -> float:
    """Multiply a and b."""
    return a * b

tools = [multiply]

llm = ChatVertexAI(model="gemini-1.5-flash", temperature=0)
llm_with_tools = llm.bind_tools(tools)

sys_msg = SystemMessage(
    content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
)

# no-op node: exists only as a place to interrupt and edit state
def human_feedback(state: MessagesState):
    pass

def assistant(state: MessagesState):
    return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))
builder.add_node("human_feedback", human_feedback)

builder.add_edge(START, "human_feedback")
builder.add_edge("human_feedback", "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "human_feedback")

memory = MemorySaver()
graph = builder.compile(interrupt_before=["human_feedback"], checkpointer=memory)
  2. Make the first invocation and provide user input:

initial_input = HumanMessage(content="Multiply pi by three")

thread = {"configurable": {"thread_id": "1"}}

for event in graph.stream({"messages": initial_input}, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

user_input = input("Tell me how you want to update the state: ")

# write the user's message into state as if the human_feedback node had produced it
graph.update_state(thread, {"messages": HumanMessage(content=user_input)}, as_node="human_feedback")

for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

================================ Human Message =================================

in real, multply by 4
================================== Ai Message ==================================
Tool Calls:
multiply (740332d3-cb62-4aef-a32b-5a5575ce1812)
Call ID: 740332d3-cb62-4aef-a32b-5a5575ce1812
Args:
a: 3.141592653589793
b: 4.0
================================= Tool Message =================================
Name: multiply

12.566370614359172

  3. Call the graph again with None to resume from the breakpoint:
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

Retrying langchain_google_vertexai.chat_models._completion_with_retry.._completion_with_retry_inner in 4.0 seconds as it raised InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn..
Retrying langchain_google_vertexai.chat_models._completion_with_retry.._completion_with_retry_inner in 4.0 seconds as it raised InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn..
Retrying langchain_google_vertexai.chat_models._completion_with_retry.._completion_with_retry_inner in 4.0 seconds as it raised InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn..
Retrying langchain_google_vertexai.chat_models._completion_with_retry.._completion_with_retry_inner in 8.0 seconds as it raised InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn..
Retrying langchain_google_vertexai.chat_models._completion_with_retry.._completion_with_retry_inner in 10.0 seconds as it raised InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn..

Full trace:

{
	"name": "InvalidArgument",
	"message": "400 Please ensure that function call turn comes immediately after a user turn or after a function response turn.",
	"stack": "---------------------------------------------------------------------------
_InactiveRpcError                         Traceback (most recent call last)
File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/google/api_core/grpc_helpers.py:76, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     75 try:
---> 76     return callable_(*args, **kwargs)
     77 except grpc.RpcError as exc:

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/grpc/_channel.py:1181, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
   1175 (
   1176     state,
   1177     call,
   1178 ) = self._blocking(
   1179     request, timeout, metadata, credentials, wait_for_ready, compression
   1180 )
-> 1181 return _end_unary_response_blocking(state, call, False, None)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/grpc/_channel.py:1006, in _end_unary_response_blocking(state, call, with_call, deadline)
   1005 else:
-> 1006     raise _InactiveRpcError(state)

_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
\tstatus = StatusCode.INVALID_ARGUMENT
\tdetails = \"Please ensure that function call turn comes immediately after a user turn or after a function response turn.\"
\tdebug_error_string = \"UNKNOWN:Error received from peer ipv4:142.251.129.10:443 {created_time:\"2024-09-18T18:32:26.805936921-03:00\", grpc_status:3, grpc_message:\"Please ensure that function call turn comes immediately after a user turn or after a function response turn.\"}\"
>

The above exception was the direct cause of the following exception:

InvalidArgument                           Traceback (most recent call last)
Cell In[153], line 1
----> 1 for event in graph.stream(None, thread, stream_mode=\"values\"):
      2     event[\"messages\"][-1].pretty_print()

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py:1224, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1213 # Similarly to Bulk Synchronous Parallel / Pregel model
   1214 # computation proceeds in steps, while there are channel updates
   1215 # channel updates from step N are only visible in step N+1
   1216 # channels are guaranteed to be immutable for the duration of the step,
   1217 # with channel updates applied only at the transition between steps
   1218 while loop.tick(
   1219     input_keys=self.input_channels,
   1220     interrupt_before=interrupt_before,
   1221     interrupt_after=interrupt_after,
   1222     manager=run_manager,
   1223 ):
-> 1224     for _ in runner.tick(
   1225         loop.tasks.values(),
   1226         timeout=self.step_timeout,
   1227         retry_policy=self.retry_policy,
   1228     ):
   1229         # emit output
   1230         for o in output():
   1231             yield o

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/pregel/runner.py:94, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy)
     92     yield
     93 # panic on failure or timeout
---> 94 _panic_or_proceed(all_futures, panic=reraise)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/pregel/runner.py:210, in _panic_or_proceed(futs, timeout_exc_cls, panic)
    208 # raise the exception
    209 if panic:
--> 210     raise exc
    211 else:
    212     return

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/pregel/executor.py:61, in BackgroundExecutor.done(self, task)
     59 def done(self, task: concurrent.futures.Future) -> None:
     60     try:
---> 61         task.result()
     62     except GraphInterrupt:
     63         # This exception is an interruption signal, not an error
     64         # so we don't want to re-raise it on exit
     65         self.tasks.pop(task)

File ~/.pyenv/versions/3.12.3/lib/python3.12/concurrent/futures/_base.py:449, in Future.result(self, timeout)
    447     raise CancelledError()
    448 elif self._state == FINISHED:
--> 449     return self.__get_result()
    451 self._condition.wait(timeout)
    453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File ~/.pyenv/versions/3.12.3/lib/python3.12/concurrent/futures/_base.py:401, in Future.__get_result(self)
    399 if self._exception:
    400     try:
--> 401         raise self._exception
    402     finally:
    403         # Break a reference cycle with the exception in self._exception
    404         self = None

File ~/.pyenv/versions/3.12.3/lib/python3.12/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/pregel/retry.py:29, in run_with_retry(task, retry_policy)
     27 task.writes.clear()
     28 # run the task
---> 29 task.proc.invoke(task.input, config)
     30 # if successful, end
     31 break

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/utils/runnable.py:343, in RunnableSeq.invoke(self, input, config, **kwargs)
    341 context.run(_set_config_context, config)
    342 if i == 0:
--> 343     input = context.run(step.invoke, input, config, **kwargs)
    344 else:
    345     input = context.run(step.invoke, input, config)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langgraph/utils/runnable.py:131, in RunnableCallable.invoke(self, input, config, **kwargs)
    129 else:
    130     context.run(_set_config_context, config)
--> 131     ret = context.run(self.func, input, **kwargs)
    132 if isinstance(ret, Runnable) and self.recurse:
    133     return ret.invoke(input, config)

Cell In[143], line 12, in assistant(state)
     11 def assistant(state: MessagesState):
---> 12     return {\"messages\": [llm_with_tools.invoke([sys_msg] + state[\"messages\"])]}

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py:5313, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5307 def invoke(
   5308     self,
   5309     input: Input,
   5310     config: Optional[RunnableConfig] = None,
   5311     **kwargs: Optional[Any],
   5312 ) -> Output:
-> 5313     return self.bound.invoke(
   5314         input,
   5315         self._merge_configs(config),
   5316         **{**self.kwargs, **kwargs},
   5317     )

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
   (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get(\"callbacks\"),
    290             tags=config.get(\"tags\"),
    291             metadata=config.get(\"metadata\"),
    292             run_name=config.get(\"run_name\"),
    293             run_id=config.pop(\"run_id\", None),
    294             **kwargs,
    295         ).generations[0][0],
    296     ).message

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    778 def generate_prompt(
    779     self,
    780     prompts: List[PromptValue],
   (...)
    783     **kwargs: Any,
    784 ) -> LLMResult:
    785     prompt_messages = [p.to_messages() for p in prompts]
--> 786     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641         if run_managers:
    642             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643         raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:855, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    853 else:
    854     if inspect.signature(self._generate).parameters.get(\"run_manager\"):
--> 855         result = self._generate(
    856             messages, stop=stop, run_manager=run_manager, **kwargs
    857         )
    858     else:
    859         result = self._generate(messages, stop=stop, **kwargs)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_google_vertexai/chat_models.py:1175, in ChatVertexAI._generate(self, messages, stop, run_manager, stream, **kwargs)
   1173 if not self._is_gemini_model:
   1174     return self._generate_non_gemini(messages, stop=stop, **kwargs)
-> 1175 return self._generate_gemini(
   1176     messages=messages,
   1177     stop=stop,
   1178     run_manager=run_manager,
   1179     is_gemini=True,
   1180     **kwargs,
   1181 )

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_google_vertexai/chat_models.py:1332, in ChatVertexAI._generate_gemini(self, messages, stop, run_manager, **kwargs)
   1324 def _generate_gemini(
   1325     self,
   1326     messages: List[BaseMessage],
   (...)
   1329     **kwargs: Any,
   1330 ) -> ChatResult:
   1331     request = self._prepare_request_gemini(messages=messages, stop=stop, **kwargs)
-> 1332     response = _completion_with_retry(
   1333         self.prediction_client.generate_content,
   1334         max_retries=self.max_retries,
   1335         request=request,
   1336         metadata=self.default_metadata,
   1337         **kwargs,
   1338     )
   1339     return self._gemini_response_to_chat_result(response)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_google_vertexai/chat_models.py:613, in _completion_with_retry(generation_method, max_retries, run_manager, **kwargs)
    606     return generation_method(**kwargs)
    608 params = (
    609     {k: v for k, v in kwargs.items() if k in _allowed_params_prediction_service}
    610     if kwargs.get(\"is_gemini\")
    611     else kwargs
    612 )
--> 613 return _completion_with_retry_inner(
    614     generation_method,
    615     **params,
    616 )

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:336, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
    334 copy = self.copy()
    335 wrapped_f.statistics = copy.statistics  # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs)
    473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    474 while True:
--> 475     do = self.iter(retry_state=retry_state)
    476     if isinstance(do, DoAttempt):
    477         try:

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:376, in BaseRetrying.iter(self, retry_state)
    374 result = None
    375 for action in self.iter_state.actions:
--> 376     result = action(retry_state)
    377 return result

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:418, in BaseRetrying._post_stop_check_actions.<locals>.exc_check(rs)
    416 retry_exc = self.retry_error_cls(fut)
    417 if self.reraise:
--> 418     raise retry_exc.reraise()
    419 raise retry_exc from fut.exception()

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:185, in RetryError.reraise(self)
    183 def reraise(self) -> t.NoReturn:
    184     if self.last_attempt.failed:
--> 185         raise self.last_attempt.result()
    186     raise self

File ~/.pyenv/versions/3.12.3/lib/python3.12/concurrent/futures/_base.py:449, in Future.result(self, timeout)
    447     raise CancelledError()
    448 elif self._state == FINISHED:
--> 449     return self.__get_result()
    451 self._condition.wait(timeout)
    453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File ~/.pyenv/versions/3.12.3/lib/python3.12/concurrent/futures/_base.py:401, in Future.__get_result(self)
    399 if self._exception:
    400     try:
--> 401         raise self._exception
    402     finally:
    403         # Break a reference cycle with the exception in self._exception
    404         self = None

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/tenacity/__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs)
    476 if isinstance(do, DoAttempt):
    477     try:
--> 478         result = fn(*args, **kwargs)
    479     except BaseException:  # noqa: B902
    480         retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/langchain_google_vertexai/chat_models.py:606, in _completion_with_retry.<locals>._completion_with_retry_inner(generation_method, **kwargs)
    604 @retry_decorator
    605 def _completion_with_retry_inner(generation_method: Callable, **kwargs: Any) -> Any:
--> 606     return generation_method(**kwargs)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/google/cloud/aiplatform_v1beta1/services/prediction_service/client.py:2275, in PredictionServiceClient.generate_content(self, request, model, contents, retry, timeout, metadata)
   2272 self._validate_universe_domain()
   2274 # Send the request.
-> 2275 response = rpc(
   2276     request,
   2277     retry=retry,
   2278     timeout=timeout,
   2279     metadata=metadata,
   2280 )
   2282 # Done; return the response.
   2283 return response

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/google/api_core/gapic_v1/method.py:131, in _GapicCallable.__call__(self, timeout, retry, compression, *args, **kwargs)
    128 if self._compression is not None:
    129     kwargs[\"compression\"] = compression
--> 131 return wrapped_func(*args, **kwargs)

File ~/learn/langchain-and-langgraph/.venv/lib/python3.12/site-packages/google/api_core/grpc_helpers.py:78, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     76     return callable_(*args, **kwargs)
     77 except grpc.RpcError as exc:
---> 78     raise exceptions.from_grpc_error(exc) from exc

InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn."
}

Expected Behavior:

The graph executes and returns an AIMessage, like in the tutorial.

Environment:

python: 3.12.3
langgraph-checkpoint-sqlite: 1.0.3
langgraph: 0.2.22
langchain-google-vertexai: 2.0.0
vertexai: 1.67.0
langchain-openai: 0.2.0

@rlancemartin
Collaborator

ah, this is likely an issue w/ gemini -

llm = ChatVertexAI(model="gemini-1.5-flash", temperature=0)

related to requirements on message ordering. I will need to look into it (I have not worked extensively w/ Gemini).

@lucasmaraal
Author

Thanks for the update. I am also trying to investigate the message order, but I have not found anything documented about it so far.
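
For anyone else digging into this, one way to see the exact message sequence the assistant node will receive is to read the checkpointed state at the interrupt. A sketch, reusing the `graph` and `thread` objects from the repro above:

# inspect the message order stored in the checkpoint for this thread
snapshot = graph.get_state(thread)
for msg in snapshot.values["messages"]:
    # prints e.g. HumanMessage / AIMessage / ToolMessage, plus tool calls if any
    print(type(msg).__name__, getattr(msg, "tool_calls", None) or msg.content)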

@shiv248
Contributor

shiv248 commented Sep 19, 2024

@lucasmaraal could you elaborate/clarify what you're trying to understand regarding message order? Do you mean in what order are the messages being put into the state?
Maybe this can give you some more insight on how messages are being adjusted within the state, if that's what you are looking for?
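
For reference, MessagesState merges updates with the add_messages reducer: new messages are appended, and a message that reuses an existing id replaces the old one. A minimal sketch of that behavior:

from langchain_core.messages import HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage(content="Multiply pi by three", id="1")]
update = [HumanMessage(content="use 4 instead", id="2")]
merged = add_messages(existing, update)
# merged now holds both messages; an update reusing id="1" would have
# overwritten the first message instead of appending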

@lucasmaraal
Author

@shiv248, I am trying to understand what I have to tweak in the graph from the lesson to get it working with Gemini. I don't know if this is out of scope, but I think my question may be related to the course.

@rlancemartin
Collaborator

rlancemartin commented Sep 23, 2024

Ya, I got Vertex credentials and confirmed that I can repro this -
https://smith.langchain.com/public/be0143a5-e54e-4c10-a9b4-146956e9f17a/r

Here's the commit -
https://github.com/langchain-ai/langchain-academy/blob/760e417766163ddd0c1c81e522b2ae3501827fdd/module-3/edit-state-human-feedback.ipynb

It's specific to adding the breakpoint -

interrupt_before=["assistant"]

For example, if you compile w/o the breakpoint it works as expected.

Checking w/ some folks on our side in case it's an issue w/ the integration.

(Obviously, the course is tested w/ OpenAI and Anthropic, so use those to unblock. But it is good to confirm there is no issue w/ Gemini.)
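
A minimal sketch of that workaround, reusing the builder and memory from the repro (note you lose the human-in-the-loop pause):

# compiling without interrupt_before avoids the 400 error
graph = builder.compile(checkpointer=memory)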

@lucasmaraal
Author

Thanks for looking into it. I agree that it is good to confirm whether it works with Gemini too.

I noticed that in the notebook you posted the breakpoint is interrupt_before=["assistant"], but in the video lesson it is interrupt_before=["human_feedback"].

[screenshot from the video lesson]

@rlancemartin
Collaborator

> Ya, I got Vertex credentials and confirmed that I can repro this - [...] It's specific to adding the breakpoint - interrupt_before=["assistant"] [...]

I asked folks from Google:

> we recently released a course on LangGraph. some folks have been eager to use Gemini!
>
> some users have seen this error when using human-in-the-loop w/ gemini-1.5-flash --
> https://github.com/langchain-ai/langchain-academy/issues/28#issuecomment-2369795773
>
> i can repro it --
> https://github.com/langchain-ai/langchain-academy/blob/760e417766163ddd0c1c81e522b2ae3501827fdd/module-3/edit-state-human-feedback.ipynb
>
> here is the trace --
> https://smith.langchain.com/public/be0143a5-e54e-4c10-a9b4-146956e9f17a/r
>
> we have HumanMessage -> AIMessage -> ToolMessage -> failure:
> google.api_core.exceptions.InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn.
>
> was curious if anyone who has deeper context on gemini may be able to debug what is going on.
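
For reference, the rule stated in the error message can be sketched like this (illustrative comments only, not Vertex AI's actual validation code):

# Orderings Gemini accepts for a function-call turn:
#   user turn              -> model turn with function_call   # OK
#   function_response turn -> model turn with function_call   # OK
# Anything else immediately before the function-call turn (e.g. another
# model turn) is rejected with the 400 InvalidArgument seen above.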

@rlancemartin
Collaborator

> I noticed that in the notebook you posted the breakpoint is interrupt_before=["assistant"], but in the video lesson it is interrupt_before=["human_feedback"] [...]

We do this in the bottom section of the notebook, "Awaiting user input".
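
For reference, that later section compiles the same builder with the breakpoint moved to the assistant node (a sketch, matching the notebook linked above):

graph = builder.compile(interrupt_before=["assistant"], checkpointer=memory)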

@anuroop18

I think I am getting a similar error using gpt-4o-mini in Module 5, Lesson 5: #59
