feat: add siliconflow qwq and llama3.3 model
- Qwen/QwQ-32B-Preview
- meta-llama/Llama-3.3-70B-Instruct
Showing 3 changed files with 108 additions and 0 deletions.
api/core/model_runtime/model_providers/siliconflow/llm/meta-llama-3.3-70b-instruct.yaml (53 additions, 0 deletions)
@@ -0,0 +1,53 @@
model: meta-llama/Llama-3.3-70B-Instruct
label:
  en_US: meta-llama/Llama-3.3-70B-Instruct
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 32768
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 4096
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断，可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '4.13'
  output: '4.13'
  unit: '0.000001'
  currency: RMB
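
The pricing block is quoted against a unit of '0.000001', so an input or output price of '4.13' works out to 4.13 RMB per million tokens. A minimal sketch of that arithmetic follows; the function name and token counts are illustrative only and are not part of this commit or of Dify's billing code.

from decimal import Decimal

# Prices, unit, and currency come from the pricing block above; everything else is illustrative.
UNIT = Decimal("0.000001")       # scale applied to the quoted prices (i.e. per-token price)
INPUT_PRICE = Decimal("4.13")    # pricing.input, in RMB
OUTPUT_PRICE = Decimal("4.13")   # pricing.output, in RMB

def estimate_cost(input_tokens: int, output_tokens: int) -> Decimal:
    # Cost = tokens * quoted price * unit, summed over input and output.
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) * UNIT

# 1M prompt tokens + 1M completion tokens -> 8.26 RMB
print(estimate_cost(1_000_000, 1_000_000))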
api/core/model_runtime/model_providers/siliconflow/llm/qwen-qwq-32B-preview.yaml (53 additions, 0 deletions)
@@ -0,0 +1,53 @@
model: Qwen/QwQ-32B-Preview
label:
  en_US: Qwen/QwQ-32B-Preview
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 32768
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 4096
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断，可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '1.26'
  output: '1.26'
  unit: '0.000001'
  currency: RMB
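
A quick way to sanity-check both new definitions is to load them with PyYAML and confirm that the top-level keys used above are present. The snippet below is a standalone, illustrative check, not Dify's actual schema validation; the required-key set is inferred from the two files in this commit.

import yaml

# Top-level keys observed in the two YAML files added by this commit.
REQUIRED_KEYS = {"model", "label", "model_type", "features",
                 "model_properties", "parameter_rules", "pricing"}

def check_model_yaml(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        spec = yaml.safe_load(f)
    missing = REQUIRED_KEYS - spec.keys()
    assert not missing, f"{path} is missing keys: {missing}"
    assert spec["model_properties"]["mode"] == "chat"
    print(f"{spec['model']}: context_size={spec['model_properties']['context_size']}")

for path in (
    "api/core/model_runtime/model_providers/siliconflow/llm/meta-llama-3.3-70b-instruct.yaml",
    "api/core/model_runtime/model_providers/siliconflow/llm/qwen-qwq-32B-preview.yaml",
):
    check_model_yaml(path)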