Found the problem: the base model, i.e. the model given in `model_name_or_path`, was wrong. You should look at the config stored under the adapter checkpoint and point directly at the base model it references.
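One way to confirm which base model an adapter expects is to read the `base_model_name_or_path` field from the `adapter_config.json` that PEFT saves next to the adapter weights; a minimal sketch (the helper function name is illustrative, not part of LLaMA-Factory):

```python
import json
from pathlib import Path


def adapter_base_model(adapter_dir: str) -> str:
    """Return the base model path an adapter was trained on top of.

    PEFT writes adapter_config.json alongside the adapter weights;
    its base_model_name_or_path field records the base model used
    during training, which is the one to pass as model_name_or_path
    when merging.
    """
    cfg = json.loads((Path(adapter_dir) / "adapter_config.json").read_text())
    return cfg["base_model_name_or_path"]
```

If the value returned here differs from the `model_name_or_path` in the merge config, the merged model will not reflect the adapter's training.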
Reminder
System Info
llamafactory
version: 0.9.1.dev0

Reproduction
Note: DO NOT use a quantized model or `quantization_bit` when merging LoRA adapters.
```yaml
### model
model_name_or_path: /mnt/bn/lq-aigc/LLama_Factory_v2/LLaMA-Factory/checkpoint/Qwen/Qwen2-7B-Instruct
adapter_name_or_path: /mnt/bn/seed-aigc-aesthetic-lq/LLaMA-Factory-v2/LLaMA-Factory/outputs/outputs-1105/saves_rank64/simpo/simpo_8e-6_epoch_2_beta_10_gamma_5_align_test6/checkpoint-351
template: qwen
finetuning_type: lora

### export
export_dir: /mnt/bn/seed-aigc-aesthetic-lq/lifanshi/LLaMA-Factory-v2/LLaMA-Factory/outputs/outputs-1105/Qwen/SFT/Qwen2-7B-Instruct—simpo_8e-6_epoch_2_beta_10_gamma_5_align_test6_ckpt1_test
export_size: 2
export_device: cpu
export_legacy_format: false
```
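The note above about not merging with quantized models can be enforced with a small pre-flight check before running the export; a sketch, assuming a flat `key: value` config like the one in this issue (the parser and rules are illustrative, not part of LLaMA-Factory):

```python
def check_merge_config(text: str) -> list[str]:
    """Return a list of problems found in a flat key: value merge config."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything that is not key: value.
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()

    problems = []
    # Merging must start from an unquantized base model.
    if "quantization_bit" in cfg:
        problems.append("remove quantization_bit when merging LoRA adapters")
    # A LoRA merge needs both the base model and the adapter path.
    if cfg.get("finetuning_type") == "lora" and "adapter_name_or_path" not in cfg:
        problems.append("finetuning_type: lora requires adapter_name_or_path")
    return problems
```

Running this over the config text before calling the exporter catches the two most common merge mistakes early, instead of discovering them after a silent bad merge.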
Expected behavior
I checked the output and the LoRA adapter is loaded correctly, but at inference time the merged model still produces the same output as /mnt/bn/lq-aigc/LLama_Factory_v2/LLaMA-Factory/checkpoint/Qwen/Qwen2-7B-Instruct; the LoRA-tuned behavior is not preserved.
Others
No response