
Cache llm chain to avoid loading the model at every utterance [ENG 437] #23410


Triggered via pull request on August 13, 2023, 18:12
Status: Success
Total duration: 4m 21s

security-scans.yml

on: pull_request
Check for file changes: 8s
Detecting hardcoded secrets: 3m 40s
snyk: 52s
Detect python security issues: 3m 42s
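For context, a minimal sketch of what a security-scans.yml workflow with these jobs could look like. The workflow name, the pull_request trigger, and the four job names come from the run above; everything else (runner labels, the dorny/paths-filter, gitleaks/gitleaks-action, and snyk/actions/python actions, the Bandit step, and all filters and secrets) is an assumption for illustration, not the repository's actual file.

```yaml
# Hypothetical skeleton of security-scans.yml; steps and actions are assumed.
name: Security Scans

on: pull_request

jobs:
  changes:
    name: Check for file changes
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - id: filter
        uses: dorny/paths-filter@v2        # assumed change-detection action
        with:
          filters: |
            python:
              - '**/*.py'

  secrets:
    name: Detecting hardcoded secrets
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0                   # full history so the scanner can inspect all commits
      - uses: gitleaks/gitleaks-action@v2  # assumed secret scanner
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  snyk:
    name: snyk
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - uses: snyk/actions/python@master   # assumed Snyk dependency scan
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  bandit:
    name: Detect python security issues
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Run Bandit
        run: |
          pip install bandit
          bandit -r . -ll                  # report medium severity and above
```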