Cache llm chain to avoid loading the model at every utterance [ENG 437] #7866

Triggered via: pull request, August 13, 2023 18:12
Status: Success
Total duration: 5m 19s

ci-github-actions.yml

on: pull_request
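
The PR title describes the change being tested here: build the LLM chain once and reuse it, instead of reloading the model on every utterance. A minimal sketch of that idea, assuming LangChain's classic LLMChain API; the loader name, model name, and prompt are illustrative assumptions, not code from this PR:

from functools import lru_cache

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate


@lru_cache(maxsize=None)
def load_llm_chain(model_name: str, prompt_template: str) -> LLMChain:
    """Build the chain once per (model, prompt) pair and reuse it.

    Without the cache the model would be re-loaded for every utterance;
    with it, only the first call pays the loading cost.
    """
    llm = OpenAI(model_name=model_name)  # hypothetical model choice
    prompt = PromptTemplate.from_template(prompt_template)
    return LLMChain(llm=llm, prompt=prompt)


def handle_utterance(text: str) -> str:
    # Repeated calls with the same arguments hit the cache instead of rebuilding.
    chain = load_llm_chain("gpt-3.5-turbo-instruct", "Rephrase politely: {text}")
    return chain.run(text=text)

Keying the cache on the constructor arguments means a changed model or prompt still produces a fresh chain, while repeated utterances with the same configuration reuse the already-loaded model.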