Cache llm chain to avoid loading the model at every utterance [ENG 437] #23410
security-scans.yml
on: pull_request
Check for file changes (8s)
Detecting hardcoded secrets (3m 40s)
snyk (52s)
Detect python security issues (3m 42s)
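The check list above comes from a single workflow triggered on pull requests. A minimal sketch of what `security-scans.yml` could look like follows; only the job names and the `pull_request` trigger are from the run itself, while the specific actions and tools (paths-filter, gitleaks, Snyk's Python action, bandit) are assumptions chosen as common implementations of each job's stated purpose:

```yaml
# Sketch of security-scans.yml; job names match the check run,
# but every step below is an assumed implementation.
name: security-scans
on: pull_request

jobs:
  changes:
    name: Check for file changes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # dorny/paths-filter is an assumption; any change-detection step works
      - uses: dorny/paths-filter@v3
        with:
          filters: |
            src:
              - 'src/**'

  secret-scan:
    name: Detecting hardcoded secrets
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so the scanner can inspect all commits
      # gitleaks is an assumption; the run only shows the job name
      - uses: gitleaks/gitleaks-action@v2

  snyk:
    name: snyk
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: snyk/actions/python@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  python-security:
    name: Detect python security issues
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # bandit is an assumption for the Python static-analysis job
      - run: |
          pip install bandit
          bandit -r .
```

The secret-scan job checks out with `fetch-depth: 0` because history-aware scanners need every commit, which is consistent with it being one of the longer-running checks here.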