integrate Amazon Nova LLMs into Dify (#11324)
Co-authored-by: Yuanbo Li <[email protected]>
1 parent: 464e635 · commit: 5908e10
7 changed files: 314 additions, 0 deletions
api/core/model_runtime/model_providers/bedrock/llm/amazon.nova-lite-v1.yaml (new file: 52 additions, 0 deletions)
model: amazon.nova-lite-v1:0
label:
  en_US: Nova Lite V1
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 300000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
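The `parameter_rules` block above declares, for each inference parameter, a default and an allowed range. As a hypothetical sketch (not Dify's actual runtime code), the rules can be transcribed into Python and used to fill in defaults and clamp user-supplied values:

```python
# Transcribed from the parameter_rules in amazon.nova-lite-v1.yaml above.
# This helper is illustrative only; Dify's model runtime applies these
# rules through its own schema machinery.
PARAMETER_RULES = [
    {"name": "max_new_tokens", "default": 2048, "min": 1, "max": 5000},
    {"name": "temperature", "default": 1.0, "min": 0.0, "max": 1.0},
    {"name": "top_p", "default": 0.999, "min": 0.0, "max": 1.0},
    {"name": "top_k", "default": 0, "min": 0, "max": 500},
]

def resolve_parameters(user_params: dict) -> dict:
    """Fill defaults for missing parameters and clamp values into range."""
    resolved = {}
    for rule in PARAMETER_RULES:
        value = user_params.get(rule["name"], rule["default"])
        resolved[rule["name"]] = max(rule["min"], min(rule["max"], value))
    return resolved
```

So a request carrying `temperature: 2.5` would be clamped to the declared maximum of `1.0`, and an omitted `max_new_tokens` would fall back to `2048`.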
api/core/model_runtime/model_providers/bedrock/llm/amazon.nova-micro-v1.yaml (new file: 52 additions, 0 deletions)
model: amazon.nova-micro-v1:0
label:
  en_US: Nova Micro V1
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
api/core/model_runtime/model_providers/bedrock/llm/amazon.nova-pro-v1.yaml (new file: 52 additions, 0 deletions)
model: amazon.nova-pro-v1:0
label:
  en_US: Nova Pro V1
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 300000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
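Each file's `pricing` block carries two per-direction prices, a `unit`, and a `currency`. Assuming the common Dify convention that cost = tokens × unit × price (so `unit: '0.001'` means the listed prices apply per 1,000 tokens; this reading is an assumption, not stated in the diff), a minimal sketch:

```python
from decimal import Decimal

# Values copied from the pricing block above. The interpretation of
# `unit` as "price per 1/unit tokens" is an assumption about Dify's
# billing convention.
INPUT_PRICE = Decimal("0.0008")
OUTPUT_PRICE = Decimal("0.0016")
UNIT = Decimal("0.001")

def request_cost(input_tokens: int, output_tokens: int) -> Decimal:
    """Cost in USD for one request: tokens * unit * per-unit price."""
    return (Decimal(input_tokens) * UNIT * INPUT_PRICE
            + Decimal(output_tokens) * UNIT * OUTPUT_PRICE)
```

Under this reading, a request with 1,000 input and 1,000 output tokens costs 0.0008 + 0.0016 = 0.0024 USD. `Decimal` avoids the rounding drift that binary floats introduce when accumulating many small charges.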
api/core/model_runtime/model_providers/bedrock/llm/us.amazon.nova-lite-v1.yaml (new file: 52 additions, 0 deletions)
model: us.amazon.nova-lite-v1:0
label:
  en_US: Nova Lite V1 (US.Cross Region Inference)
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 300000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
api/core/model_runtime/model_providers/bedrock/llm/us.amazon.nova-micro-v1.yaml (new file: 52 additions, 0 deletions)
model: us.amazon.nova-micro-v1:0
label:
  en_US: Nova Micro V1 (US.Cross Region Inference)
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
api/core/model_runtime/model_providers/bedrock/llm/us.amazon.nova-pro-v1.yaml (new file: 52 additions, 0 deletions)
model: us.amazon.nova-pro-v1:0
label:
  en_US: Nova Pro V1 (US.Cross Region Inference)
model_type: llm
features:
  - agent-thought
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 300000
parameter_rules:
  - name: max_new_tokens
    use_template: max_tokens
    required: true
    default: 2048
    min: 1
    max: 5000
  - name: temperature
    use_template: temperature
    required: false
    type: float
    default: 1
    min: 0.0
    max: 1.0
    help:
      zh_Hans: 生成内容的随机性。
      en_US: The amount of randomness injected into the response.
  - name: top_p
    required: false
    type: float
    default: 0.999
    min: 0.000
    max: 1.000
    help:
      zh_Hans: 在核采样中,Anthropic Claude 按概率递减顺序计算每个后续标记的所有选项的累积分布,并在达到 top_p 指定的特定概率时将其切断。您应该更改温度或top_p,但不能同时更改两者。
      en_US: In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.
  - name: top_k
    required: false
    type: int
    default: 0
    min: 0
    # note: the AWS docs state this limit incorrectly; the actual max value is 500
    max: 500
    help:
      zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
      en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
pricing:
  input: '0.0008'
  output: '0.0016'
  unit: '0.001'
  currency: USD
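The three `us.`-prefixed files duplicate the base configurations because Bedrock exposes cross-region inference through profile IDs that prepend a region-group prefix to the base model ID. A hypothetical helper (the function name is illustrative, not from this codebase) showing the naming relationship:

```python
def to_inference_profile_id(model_id: str, region_group: str = "us") -> str:
    """Prefix a base Bedrock model ID with a cross-region group,
    e.g. 'amazon.nova-pro-v1:0' -> 'us.amazon.nova-pro-v1:0'."""
    if model_id.startswith(f"{region_group}."):
        # Already a cross-region inference-profile ID; leave it unchanged.
        return model_id
    return f"{region_group}.{model_id}"
```

Keeping separate YAML files per profile, rather than deriving the ID at runtime, lets each variant carry its own label and, if needed, distinct pricing or limits.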