SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible framework designed to facilitate lightweight model fine-tuning and inference. It integrates implementations of various efficient fine-tuning methods, embracing approaches that are parameter-efficient, memory-efficient, and time-efficient. SWIFT integrates seamlessly into the ModelScope ecosystem and offers the capability to fine-tune various models, with a primary emphasis on LLMs and vision models. Additionally, SWIFT is fully compatible with PEFT, enabling users to leverage the familiar PEFT interface to fine-tune ModelScope models.
Currently supported approaches (and counting):
- LoRA: LoRA: Low-Rank Adaptation of Large Language Models
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
- Adapter: Parameter-Efficient Transfer Learning for NLP
- Prompt Tuning: Visual Prompt Tuning
- Side: Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks
- ResTuning-Bypass
- ROME: Rank-One Editing of Encoder-Decoder Models
- All tuners offered by PEFT
Key features:
- By integrating the ModelScope library, models can be readily obtained via a model-id.
- Tuners provided by SWIFT can be combined to explore multiple tuners on one model for the best result.
- Support calling `activate_adapter`, `deactivate_adapter`, or `set_active_adapters` to activate/deactivate tuners. Users can run inference with one model and multiple tuners independently in different threads (see the sketch below).

Users can check the SWIFT documentation for detailed tutorials.
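For example, a minimal sketch of switching between two named tuners on one model, assuming `activate_adapter`/`deactivate_adapter` each take an adapter name and `set_active_adapters` takes a list of names:

import torch
from swift import Swift, LoRAConfig

# Toy base model; in practice this is any torch.nn.Module (e.g. a transformer).
model = torch.nn.Sequential()
model.add_module('proj', torch.nn.Linear(8, 8))

# Attach two independently named LoRA tuners.
model = Swift.prepare_model(model, {
    'lora_a': LoRAConfig(target_modules=['proj']),
    'lora_b': LoRAConfig(target_modules=['proj']),
})

# Keep only 'lora_a' active for the next forward pass.
model.deactivate_adapter('lora_b')
model.activate_adapter('lora_a')

# Or set the full list of active adapters in one call.
model.set_active_adapters(['lora_a'])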
- 🔥 2023.10.30: Support QA-LoRA and LongLoRA to decrease memory usage in training.
- 🔥 2023.10.30: Support ROME (Rank-One Model Editing) to add/modify knowledge; no training is needed!
- 🔥 2023.10.27: Support for the chatglm3 series models: chatglm3-6b-base, chatglm3-6b, chatglm3-6b-32k. The corresponding shell script can be found in `scripts/chatglm3_6b_32k`.
- 🔥 2023.10.17: Supported int4, int8 models: qwen-7b-chat-int4, qwen-14b-chat-int4, qwen-vl-chat-int4, baichuan2-7b-chat-int4, baichuan2-13b-chat-int4, qwen-7b-chat-int8, qwen-14b-chat-int8. The corresponding shell scripts can be found at `scripts/qwen_7b_chat_int4`, `scripts/qwen_14b_chat_int4`, `scripts/qwen_vl_chat_int4`, `scripts/qwen_7b_chat_int8`, `scripts/qwen_14b_chat_int8`.
- 2023.10.15: Supported the ziya2-13b model series: ziya2-13b, ziya2-13b-chat. The corresponding shell script can be found at `scripts/ziya2_13b_chat`.
- 2023.10.12: Supported the mistral-7b model series: openbuddy-mistral-7b-chat, mistral-7b, mistral-7b-chat. The corresponding shell scripts can be found at `scripts/openbuddy_mistral_7b_chat`, `scripts/mistral_7b_chat`.
- 🔥 2023.10.7: Supported DeepSpeed ZeRO-2, enabling LoRA (not just QLoRA) to run DDP on 2*A10. The corresponding shell script can be found at `scripts/qwen_7b_chat/lora_ddp_ds/sft.sh`.
- 🔥 2023.9.25: Supported the qwen-14b model series: qwen-14b, qwen-14b-chat. The corresponding shell scripts can be found at `scripts/qwen_14b`, `scripts/qwen_14b_chat`.
- 2023.9.12: Supported training with MP+DDP to accelerate full-parameter fine-tuning speed. The corresponding shell script can be found at `scripts/qwen_7b_chat/full_mp_ddp/sft.sh`.
Click this link to view the detailed documentation of these examples.
Quickly fine-tune, run inference with an LLM, and build a Web-UI.
git clone https://github.com/modelscope/swift.git
cd swift
pip install .[llm]
# Experimental environment: A10, 3090, A100, ...
# 16GB GPU memory
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import torch
from swift.llm import (
    DatasetName, InferArguments, ModelType, SftArguments
)
from swift.llm.run import infer_main, sft_main, web_ui_main
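# Fine-tune qwen-7b-chat-int4 on 2000 sampled examples of the leetcode-python-en dataset.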
model_type = ModelType.qwen_7b_chat_int4
sft_args = SftArguments(
    model_type=model_type,
    eval_steps=50,
    train_dataset_sample=2000,
    dataset=[DatasetName.leetcode_python_en],
    output_dir='output',
    gradient_checkpointing=True)
best_ckpt_dir = sft_main(sft_args)
print(f'best_ckpt_dir: {best_ckpt_dir}')
torch.cuda.empty_cache()
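# Run streaming inference on the best checkpoint produced by SFT.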
infer_args = InferArguments(
    ckpt_dir=best_ckpt_dir,
    load_args_from_ckpt_dir=True,
    stream=True,
    show_dataset_sample=5)
infer_main(infer_args)
torch.cuda.empty_cache()
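# Launch a web UI backed by the same checkpoint and inference arguments.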
web_ui_main(infer_args)
SFT:
# Experimental environment: A10, 3090, A100, ...
# 10GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift sft --model_id_or_path qwen/Qwen-7B-Chat-Int4 --dataset blossom-math-zh
# Using DDP
# Experimental environment: 2 * 3090
# 2 * 10GB GPU memory
CUDA_VISIBLE_DEVICES=0,1 \
NPROC_PER_NODE=2 \
swift sft \
--model_id_or_path qwen/Qwen-7B-Chat-Int4 \
--dataset blossom-math-zh
# Using custom dataset
CUDA_VISIBLE_DEVICES=0 swift sft --model_id_or_path qwen/Qwen-7B-Chat-Int4 --custom_train_dataset_path chatml.jsonl
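The layout of `chatml.jsonl` is not shown above. As a hedged sketch, one common shape is single-turn query/response pairs, one JSON object per line; treat the exact field names as an assumption and check the SWIFT documentation on custom datasets for the authoritative schema:

import json

# Hypothetical single-turn examples; the 'query'/'response' field names are an assumption.
examples = [
    {'query': 'Write a function that reverses a string.',
     'response': 'def reverse(s):\n    return s[::-1]'},
    {'query': 'What is 12 * 7?', 'response': '12 * 7 = 84'},
]

# Write one JSON object per line.
with open('chatml.jsonl', 'w', encoding='utf-8') as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + '\n')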
Inference:
CUDA_VISIBLE_DEVICES=0 swift infer --ckpt_dir 'xxx/vx_xxx/checkpoint-xxx'
Web-UI:
CUDA_VISIBLE_DEVICES=0 swift web-ui --ckpt_dir 'xxx/vx_xxx/checkpoint-xxx'
- Supported SFT Methods: lora, qlora, full (full-parameter fine-tuning)
- Supported Features: quantization, DDP, model parallelism, gradient checkpointing, pushing to the ModelScope Hub, custom datasets, multimodal and agent SFT, multi-round chat, ...
- Supported Models:
- 🔥 qwen series: qwen-7b, qwen-7b-chat, qwen-14b, qwen-14b-chat, qwen-7b-chat-int4, qwen-14b-chat-int4, qwen-7b-chat-int8, qwen-14b-chat-int8
- 🔥 qwen-vl series: qwen-vl, qwen-vl-chat, qwen-vl-chat-int4
- baichuan series: baichuan-7b, baichuan-13b, baichuan-13b-chat, baichuan2-7b, baichuan2-7b-chat, baichuan2-13b, baichuan2-13b-chat, baichuan2-7b-chat-int4, baichuan2-13b-chat-int4
- chatglm series: chatglm2-6b, chatglm2-6b-32k, chatglm3-6b-base, chatglm3-6b, chatglm3-6b-32k
- llama series: llama2-7b, llama2-7b-chat, llama2-13b, llama2-13b-chat, llama2-70b, llama2-70b-chat
- openbuddy series: openbuddy-llama2-13b-chat, openbuddy-llama-65b-chat, openbuddy-llama2-70b-chat, openbuddy-mistral-7b-chat
- internlm series: internlm-7b, internlm-7b-chat, internlm-7b-chat-8k, internlm-20b, internlm-20b-chat
- xverse series: xverse-7b, xverse-7b-chat, xverse-13b, xverse-13b-chat
- mistral series: mistral-7b, mistral-7b-chat
- ziya series: ziya2-13b, ziya2-13b-chat
- skywork series: skywork-13b, skywork-13b-chat
- other: polylm-13b, seqgpt-560m
- Supported Datasets:
- NLP:
- General: 🔥alpaca-en(gpt4), 🔥alpaca-zh(gpt4), multi-alpaca-all, instinwild-en, instinwild-zh, cot-en, cot-zh, firefly-all-zh, instruct-en, gpt4all-en, sharegpt-en, sharegpt-zh
- Agent: damo-agent-zh, 🔥damo-agent-mini-zh
- Coding: code-alpaca-en, code-python-zh, 🔥leetcode-python-en
- Medical: medical-en, medical-zh, medical-mini-zh
- Law: 🔥lawyer-llama-zh, tigerbot-law-zh
- Math: 🔥blossom-math-zh, school-math-zh
- SQL: text2sql-en, 🔥sql-create-context-en
- Text Generation: 🔥advertise-gen-zh, 🔥dureader-robust-zh
- Classification: cmnli-zh, jd-sentiment-zh
- Other: finance-en, poetry-zh, cls-fudan-news-zh, ner-jave-zh
- Multi-Modal: 🔥coco-en
- Custom Dataset
- Supported Templates:
- Text Generation: default-generation, chatglm-generation
- Chat: chatml(qwen), baichuan, chatglm2, chatglm3, llama, openbuddy-llama, default, internlm, xverse, skywork
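The template is normally inferred from the chosen model type; to pin one explicitly, `SftArguments` (and `InferArguments`) expose a `template_type` field. A minimal sketch, treating the `template_type` parameter name as an assumption to verify against the argument documentation:

from swift.llm import DatasetName, ModelType, SftArguments

# Assumed parameter: template_type selects one of the chat/generation templates listed above.
sft_args = SftArguments(
    model_type=ModelType.qwen_7b_chat_int4,
    dataset=[DatasetName.leetcode_python_en],
    template_type='chatml',
)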
SWIFT runs in a Python environment. Please make sure your Python version is 3.8 or higher.
- Install SWIFT using the `pip` command:
pip install ms-swift -U
- To install SWIFT from source (for running the sft/infer examples), run:
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e .
SWIFT requires torch>=1.13.
- Use SWIFT in our docker image:
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.8.0-py38-torch2.0.1-tf2.13.0-1.9.1
SWIFT supports multiple tuners, as well as tuners provided by PEFT. To use these tuners, simply call:
from swift import Swift, LoRAConfig
config = LoRAConfig(...)
model = Swift.prepare_model(model, config, extra_state_keys=['...'])
The code snippet above randomly initializes the tuner. The input `model` is an instance of `torch.nn.Module`, the `config` is a subclass instance of `SwiftConfig` or `PeftConfig`, and `extra_state_keys` specifies extra module weights (such as a linear head) to be trained and stored in the output dir.
You may combine multiple tuners by:
from swift import Swift, LoRAConfig, PromptConfig
model = Swift.prepare_model(model, {'lora': LoRAConfig(...), 'prompt': PromptConfig(...)})
Call `save_pretrained` and `push_to_hub` after fine-tuning:
from swift import push_to_hub
model.save_pretrained('some-output-folder')
push_to_hub('my-group/some-repo-id-modelscope', 'some-output-folder', token='some-ms-token')
Assume `my-group/some-repo-id-modelscope` is the model-id in the hub, and `some-ms-token` is the token for uploading.
Use the model-id to perform later inference:
from swift import Swift
model = Swift.from_pretrained(model, 'my-group/some-repo-id-modelscope')
Here is a runnable example:
import os
import tempfile
# Please install modelscope by `pip install modelscope`
from modelscope import Model
from swift import LoRAConfig, SwiftModel, Swift, push_to_hub
tmp_dir = tempfile.TemporaryDirectory().name
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
lora_config = LoRAConfig(target_modules=['q_proj', 'k_proj', 'v_proj'])
model: SwiftModel = Swift.prepare_model(model, lora_config)
# Do some finetuning here
model.save_pretrained(tmp_dir)
push_to_hub('my-group/swift_llama2', output_dir=tmp_dir)
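# Reload the base model and attach the pushed tuner weights for inference.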
model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
model = SwiftModel.from_pretrained(model, 'my-group/swift_llama2', device_map='auto')
This is an example that uses transformers for model creation and SWIFT for efficient tuning.
from swift import Swift, LoRAConfig, AdapterConfig, PromptConfig
from transformers import AutoModelForImageClassification
# init vit model
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
# init lora tuner config
lora_config = LoRAConfig(
    r=10,  # the rank of the LoRA module
    target_modules=['query', 'key', 'value'],  # the modules to be replaced, matched by the end of the module name
    merge_weights=False  # whether to merge weights
)
# init adapter tuner config
adapter_config = AdapterConfig(
    dim=768,  # the dimension of the hidden states
    hidden_pos=0,  # the position of the hidden state passed into the adapter
    target_modules=r'.*attention.output.dense$',  # the modules to be replaced, matched by regular expression
    adapter_length=10  # the length of the adapter
)
# init prompt tuner config
prompt_config = PromptConfig(
    dim=768,  # the dimension of the hidden states
    target_modules=r'.*layer\.\d+$',  # the modules to be replaced, matched by regular expression
    embedding_pos=0,  # the position of the embedding tensor
    prompt_length=10,  # the length of the prompt tokens
    attach_front=False  # whether the prompt is attached in front of the embedding
)
# create model with swift. In practice, you can use any of these tuners or a combination of them.
model = Swift.prepare_model(model, {"lora_tuner": lora_config, "adapter_tuner": adapter_config, "prompt_tuner": prompt_config})
# get the trainable parameters of model
model.get_trainable_parameters()
# 'trainable params: 838,776 || all params: 87,406,432 || trainable%: 0.9596273189597764'
You can use the features offered by PEFT in SWIFT:
from swift import LoraConfig, Swift
from peft import TaskType
lora_config = LoraConfig(target_modules=['query', 'key', 'value'], task_type=TaskType.CAUSAL_LM)
model_wrapped = Swift.prepare_model(model, lora_config)
# or call from_pretrained to load weights in the modelhub
model_wrapped = Swift.from_pretrained(model, 'some-id-in-the-modelscope-modelhub')
The saving strategies of SWIFT tuners and PEFT tuners are slightly different. You can name a tuner by:
model = Swift.prepare_model(model, {'default': LoRAConfig(...)})
model.save_pretrained('./output')
In the output dir, you will have a dir structure like this:
output
    |-- default
        |-- adapter_config.json
        |-- adapter_model.bin
    |-- adapter_config.json
    |-- adapter_model.bin
The config/weights stored at the top level of the output dir are the config and weights of `extra_state_keys`. This is different from PEFT, which stores the weights and config of the `default` tuner there.
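To load these weights back later, a minimal sketch, assuming `Swift.from_pretrained` accepts a local output directory as well as a hub model-id:

from swift import Swift

# Re-attach the 'default' tuner (plus any extra_state_keys weights) saved in ./output.
# Assumption: from_pretrained accepts a local directory in addition to a hub model-id.
model = Swift.from_pretrained(model, './output')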
- ModelScope Library is the model library of the ModelScope project, which contains a large number of popular models.
This project is licensed under the Apache License (Version 2.0).