Add chat prompt type for HumanEvalPack tasks for more realistic and fair model evaluation and comparison.
Motivation
As implied in the docs and the current implementation, instruction-tuned models perform better on the tasks when prompted in a way that aligns with how they were trained. Custom prompts are already implemented for several models, but the list is not exhaustive and needs an update for every new prompt format or provider change, e.g. the new tokens and different format used in the latest Meta Llama or IBM Granite models.
Related to a problem/frustration with the current features?
Yes, scores are lower when not applying the chat template.
Is it related to something you would need for a project?
Yes, it's needed by anyone who wants to easily and fairly evaluate how instruction-tuned models perform on these tasks.
Is it something you worked on and think could benefit the community?
Yes! Makes it much easier to evaluate instruction-tuned models out-of-the-box without needing to update the code. Users can more easily and fairly compare instruct models on these tasks.
It could also simplify the if/else block in the get_prompt function where the prompt is formatted, making the codebase easier to maintain.
Description
HuggingFace AutoTokenizer has a chat template for instruct models. Let's use that.
Add chat as a value for --prompt
Pass the tokenizer to the HumanEvalPack tasks
Apply the model's chat template via the tokenizer's apply_chat_template function
```python
# get_prompt function
...
elif self.prompt == "chat":
    assert self.tokenizer.chat_template is not None, (
        "The `chat` prompt should only be used for models that have a chat template."
    )
    prompt = self.tokenizer.apply_chat_template(
        [{"role": "user", "content": inp}],
        tokenize=False,
        add_generation_prompt=True,
    )
    prompt += prompt_base
...
```
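For context, here is a minimal standalone sketch (not part of the harness) of what apply_chat_template returns for an instruct model. The model name and the example instruction are illustrative only; any instruct model that ships a chat template works the same way.

```python
# Illustrative sketch: the tokenizer's chat template turns a user message into
# the model's expected prompt string, with model-specific special tokens and
# role markers inserted automatically.
from transformers import AutoTokenizer

# Example model only; one of the instruct models mentioned in this issue.
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-8b-instruct")
assert tokenizer.chat_template is not None  # instruct models ship a template

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a function that returns the sum of a list."}],
    tokenize=False,              # return a formatted string, not token ids
    add_generation_prompt=True,  # end with the assistant turn so the model completes it
)
print(prompt)
```

Because the template ships with the tokenizer, the harness would no longer need a hand-written prompt format per model family.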
Benchmark results
Model: meta-llama/Llama-3.2-3B-Instruct, evaluated with the instruct, codellama, and chat prompts.
Model: ibm-granite/granite-3.0-8b-instruct, evaluated with the instruct and chat prompts.