Code like `structured_llm = llm.with_structured_output(Perspectives)` often doesn't work reliably.
It should return structured data according to the schema defined in `Perspectives`, shown below:
```python
from typing import List
from pydantic import BaseModel, Field

class Perspectives(BaseModel):
    # Analyst is another Pydantic model defined earlier in the notebook
    analysts: List[Analyst] = Field(
        description="Comprehensive list of analysts with their roles and affiliations.",
    )
```
But when I invoke `structured_llm`, it randomly returns `None`. I guess this is caused by the raw message the LLM returns. How can I make this process more stable?
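The only workaround I've sketched so far (not a real fix, and it assumes the `None` results are transient) is to retry the call and use `include_raw=True` so the raw model message can be inspected when parsing fails:

```python
# Sketch of a retry wrapper, assuming the None results are transient.
# With include_raw=True the call returns {"raw", "parsed", "parsing_error"},
# so the raw message is available when parsing fails.
structured_llm = llm.with_structured_output(Perspectives, include_raw=True)

def invoke_structured(messages, max_attempts: int = 3) -> Perspectives:
    result = None
    for _ in range(max_attempts):
        result = structured_llm.invoke(messages)
        if result["parsed"] is not None:
            return result["parsed"]
    # Fail loudly instead of passing None downstream
    raise ValueError(
        f"No structured output after {max_attempts} attempts; "
        f"parsing_error={result['parsing_error']!r}, raw={result['raw']!r}"
    )
```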
Here is the traceback:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[125], line 4
      2 messages = [HumanMessage(f"So you said you were writing an article on {topic}?")]
      3 thread = {"configurable": {"thread_id": "1"}}
----> 4 interview = interview_graph.invoke({"analyst": analysts[0], "messages": messages, "max_num_turns": 2}, thread)
      5 Markdown(interview['sections'][0])

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\__init__.py:1844, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
   1842 else:
   1843     chunks = []
-> 1844 for chunk in self.stream(
   1845     input,
   1846     config,
   1847     stream_mode=stream_mode,
   1848     output_keys=output_keys,
   1849     interrupt_before=interrupt_before,
   1850     interrupt_after=interrupt_after,
   1851     debug=debug,
   1852     **kwargs,
   1853 ):
   1854     if stream_mode == "values":
   1855         latest = chunk

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\__init__.py:1573, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1567 # Similarly to Bulk Synchronous Parallel / Pregel model
   1568 # computation proceeds in steps, while there are channel updates
   1569 # channel updates from step N are only visible in step N+1
   1570 # channels are guaranteed to be immutable for the duration of the step,
   1571 # with channel updates applied only at the transition between steps
   1572 while loop.tick(input_keys=self.input_channels):
-> 1573     for _ in runner.tick(
   1574         loop.tasks.values(),
   1575         timeout=self.step_timeout,
   1576         retry_policy=self.retry_policy,
   1577         get_waiter=get_waiter,
   1578     ):
   1579         # emit output
   1580         yield from output()
   1581         # emit output

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\runner.py:159, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
    157     yield
    158 # panic on failure or timeout
--> 159 _panic_or_proceed(
    160     done_futures.union(f for f, t in futures.items() if t is not None),
    161     panic=reraise,
    162 )

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\runner.py:367, in _panic_or_proceed(futs, timeout_exc_cls, panic)
    365 # raise the exception
    366 if panic:
--> 367     raise exc
    368 else:
    369     return

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\executor.py:70, in BackgroundExecutor.done(self, task)
     68 def done(self, task: concurrent.futures.Future) -> None:
     69     try:
---> 70         task.result()
     71     except GraphInterrupt:
     72         # This exception is an interruption signal, not an error
     73         # so we don't want to re-raise it on exit
     74         self.tasks.pop(task)

File d:\software\miniconda3\envs\lc-academy-env\lib\concurrent\futures\_base.py:451, in Future.result(self, timeout)
    449     raise CancelledError()
    450 elif self._state == FINISHED:
--> 451     return self.__get_result()
    453 self._condition.wait(timeout)
    455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File d:\software\miniconda3\envs\lc-academy-env\lib\concurrent\futures\_base.py:403, in Future.__get_result(self)
    401 if self._exception:
    402     try:
--> 403         raise self._exception
    404     finally:
    405         # Break a reference cycle with the exception in self._exception
    406         self = None

File d:\software\miniconda3\envs\lc-academy-env\lib\concurrent\futures\thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\pregel\retry.py:40, in run_with_retry(task, retry_policy, writer)
     38 task.writes.clear()
     39 # run the task
---> 40 task.proc.invoke(task.input, config)
     41 # if successful, end
     42 break

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
    408 context.run(_set_config_context, config)
    409 if i == 0:
--> 410     input = context.run(step.invoke, input, config, **kwargs)
    411 else:
    412     input = context.run(step.invoke, input, config)

File d:\software\miniconda3\envs\lc-academy-env\lib\site-packages\langgraph\utils\runnable.py:184, in RunnableCallable.invoke(self, input, config, **kwargs)
    182 else:
    183     context.run(_set_config_context, config)
--> 184     ret = context.run(self.func, input, **kwargs)
    185 if isinstance(ret, Runnable) and self.recurse:
    186     return ret.invoke(input, config)

Cell In[122], line 23, in search_web(state)
     20 search_query = structured_llm.invoke([search_instructions] + state['messages'])  # append the final question
     22 # Search
---> 23 search_docs = tavily_search.invoke(search_query.search_query)
     25 # Format
     26 formatted_search_docs = "\n\n---\n\n".join(
     27     [
     28         f'<Document href="{doc["url"]}"/>\n{doc["content"]}\n</Document>'
     29         for doc in search_docs
     30     ]
     31 )

AttributeError: 'NoneType' object has no attribute 'search_query'
```
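For context, a guarded version of the failing node might look like this (just a sketch; `structured_llm` here is the query-generating model from Cell In[122], and `search_instructions` / `tavily_search` are the objects from the notebook):

```python
def search_web(state):
    # Generate the search query via structured output, as in Cell In[122]
    query = structured_llm.invoke([search_instructions] + state["messages"])
    # Guard: fail with a clear error instead of AttributeError on a None result
    if query is None or not getattr(query, "search_query", None):
        raise ValueError("structured_llm returned no search_query; inspect the raw LLM output")
    search_docs = tavily_search.invoke(query.search_query)
    formatted_search_docs = "\n\n---\n\n".join(
        f'<Document href="{doc["url"]}"/>\n{doc["content"]}\n</Document>'
        for doc in search_docs
    )
    # Return key assumed from the notebook's interview state schema
    return {"context": [formatted_search_docs]}
```

That only fails faster, though; it doesn't explain why the structured call returns `None` in the first place.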
My LLM: (screenshot in the original issue)
My run (recommended): langsmith run