Commit

Merge branch 'main' into svd_ana
Pherenice1125 committed Dec 9, 2024
2 parents 32a03c6 + a258e5d commit 0ac37de
Showing 17 changed files with 747 additions and 43 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python-publish.yml
@@ -25,7 +25,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v3
with:
-python-version: '3.11'
+python-version: '3.12'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
4 changes: 2 additions & 2 deletions .github/workflows/python-test.yml
@@ -15,10 +15,10 @@ jobs:

steps:
- uses: actions/checkout@v3
-- name: Set up Python 3.11
+- name: Set up Python 3.12
uses: actions/setup-python@v3
with:
python-version: "3.11"
python-version: "3.12"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,6 +1,6 @@
FROM nvidia/cuda:12.5.1-devel-ubuntu22.04

-ARG PYTHON_VERSION=3.11
+ARG PYTHON_VERSION=3.12
ARG http_proxy
ARG https_proxy

6 changes: 3 additions & 3 deletions Install.md
@@ -65,7 +65,7 @@ MoE-PEFT: NVIDIA CUDA initialized successfully.
git clone https://github.com/TUDB-Labs/MoE-PEFT
cd moe_peft
# Optional but recommended
-conda create -n moe_peft python=3.11
+conda create -n moe_peft python=3.12
conda activate moe_peft
# Install requirements
pip3 install -r requirements.txt --upgrade
@@ -116,7 +116,7 @@ MoE-PEFT: NVIDIA CUDA initialized successfully.
git clone https://github.com/TUDB-Labs/MoE-PEFT
cd moe_peft
# Optional but recommended
-conda create -n moe_peft python=3.11
+conda create -n moe_peft python=3.12
conda activate moe_peft
# Install requirements (CUDA 12.1)
pip3 install torch==2.3.1 --index-url https://download.pytorch.org/whl/cu121
@@ -164,7 +164,7 @@ MoE-PEFT: NVIDIA CUDA initialized successfully.
git clone https://github.com/TUDB-Labs/MoE-PEFT
cd moe_peft
# Optional but recommended
-conda create -n moe_peft python=3.11
+conda create -n moe_peft python=3.12
conda activate moe_peft
# Install requirements
pip3 install -r requirements.txt --upgrade
2 changes: 1 addition & 1 deletion README.md
@@ -16,7 +16,7 @@ MoE-PEFT is an open-source *LLMOps* framework built on [m-LoRA](https://github.c

- Seamless integration with the [HuggingFace](https://huggingface.co) ecosystem.

-You can try MoE-PEFT with [Google Colab](https://githubtocolab.com/TUDB-Labs/MoE-PEFT/blob/main/misc/finetune-demo.ipynb) before local installation.
+You can try MoE-PEFT with [Google Colab](https://colab.research.google.com/github/TUDB-Labs/MoE-PEFT/blob/main/misc/finetune-demo.ipynb) before local installation.

## Supported Platform

4 changes: 2 additions & 2 deletions generate.py
@@ -56,9 +56,9 @@ def main(
)

for prompt in output[adapter_name]:
print(f"\n{'='*10}\n")
print(f"\n{'=' * 10}\n")
print(prompt)
print(f"\n{'='*10}\n")
print(f"\n{'=' * 10}\n")


if __name__ == "__main__":
31 changes: 20 additions & 11 deletions misc/finetune-demo.ipynb
@@ -4,25 +4,27 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# MoE-PEFT: An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT\n",
"# MoE-PEFT: An Efficient LLM Fine-Tuning Factory for Mixture of Expert (MoE) Parameter-Efficient Fine-Tuning.\n",
"[![](https://github.com/TUDB-Labs/MoE-PEFT/actions/workflows/python-test.yml/badge.svg)](https://github.com/TUDB-Labs/MoE-PEFT/actions/workflows/python-test.yml)\n",
"[![](https://img.shields.io/github/stars/TUDB-Labs/MoE-PEFT?logo=GitHub&style=flat)](https://github.com/TUDB-Labs/MoE-PEFT/stargazers)\n",
"[![](https://img.shields.io/github/v/release/TUDB-Labs/MoE-PEFT?logo=Github)](https://github.com/TUDB-Labs/MoE-PEFT/releases/latest)\n",
"[![](https://img.shields.io/pypi/v/moe_peft?logo=pypi)](https://pypi.org/project/moe_peft/)\n",
"[![](https://img.shields.io/docker/v/mikecovlee/moe_peft?logo=Docker&label=docker)](https://hub.docker.com/r/mikecovlee/moe_peft/tags)\n",
"[![](https://img.shields.io/github/license/TUDB-Labs/MoE-PEFT)](http://www.apache.org/licenses/LICENSE-2.0)\n",
"\n",
"MoE-PEFT is an open-source *LLMOps* framework built on [m-LoRA](https://github.com/TUDB-Labs/mLoRA) developed by the [IDs Lab](https://ids-lab-asia.github.io) at Sichuan University. It is designed for high-throughput fine-tuning, evaluation, and inference of Large Language Models (LLMs) using techniques such as LoRA, DoRA, MixLoRA, and others. Key features of MoE-PEFT include:\n",
"MoE-PEFT is an open-source *LLMOps* framework built on [m-LoRA](https://github.com/TUDB-Labs/mLoRA). It is designed for high-throughput fine-tuning, evaluation, and inference of Large Language Models (LLMs) using techniques such as MoE + Others (like LoRA, DoRA). Key features of MoE-PEFT include:\n",
"\n",
"- Concurrent fine-tuning of multiple adapters with a shared pre-trained model.\n",
"- Concurrent fine-tuning, evaluation, and inference of multiple adapters with a shared pre-trained model.\n",
"\n",
"- **MoE PEFT** optimization, mainly for [MixLoRA](https://github.com/TUDB-Labs/MixLoRA) and other MoLE implementation.\n",
"\n",
"- Support for multiple PEFT algorithms and various pre-trained models.\n",
"\n",
"- MoE PEFT optimization, mainly for [MixLoRA](https://github.com/TUDB-Labs/MixLoRA).\n",
"- Seamless integration with the [HuggingFace](https://huggingface.co) ecosystem.\n",
"\n",
"## About this notebook\n",
"\n",
"This is a simple jupiter notebook for showcasing the basic process of fine-tuning TinyLLaMA with dummy data"
"This is a simple jupiter notebook for showcasing the basic process of fine-tuning TinyLLaMA with dummy data."
]
},
{
@@ -38,6 +40,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip uninstall torchvision torchaudio -y\n",
"! pip install moe_peft"
]
},
@@ -83,12 +86,18 @@
"metadata": {},
"outputs": [],
"source": [
"lora_config = moe_peft.LoraConfig(\n",
"lora_config = moe_peft.adapter_factory(\n",
" peft_type=\"LORA\",\n",
" adapter_name=\"lora_0\",\n",
" lora_r_=32,\n",
" lora_alpha_=64,\n",
" lora_dropout_=0.05,\n",
" target_modules_={\"q_proj\": True, \"k_proj\": True, \"v_proj\": True, \"o_proj\": True},\n",
" r=8,\n",
" lora_alpha=16,\n",
" lora_dropout=0.05,\n",
" target_modules=[\n",
" \"q_proj\",\n",
" \"k_proj\",\n",
" \"v_proj\",\n",
" \"o_proj\",\n",
" ],\n",
")\n",
"\n",
"model.init_adapter(lora_config)\n",
@@ -148,7 +157,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.12.7"
}
},
"nbformat": 4,
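For readers of the misc/finetune-demo.ipynb hunk above: the notebook switches from `moe_peft.LoraConfig` to the `moe_peft.adapter_factory` helper. The sketch below simply consolidates the added lines of that hunk into one Python cell; it is a minimal illustration, not the full notebook, and the `model.init_adapter(...)` step is left as a comment because the model-loading cells are not part of this diff.

```python
import moe_peft

# New-style adapter configuration, taken verbatim from the added lines of the
# notebook hunk above: HuggingFace-PEFT-style names (r, lora_alpha, lora_dropout,
# target_modules as a list) replace the old trailing-underscore LoraConfig kwargs.
lora_config = moe_peft.adapter_factory(
    peft_type="LORA",
    adapter_name="lora_0",
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
    ],
)

# In the notebook this config is then attached to the model loaded in earlier
# cells (TinyLLaMA in the demo, not shown in this diff) via:
#     model.init_adapter(lora_config)
```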