Fine-tuning ChatGLM2-6B

This post was last updated on the morning of September 8, 2023

This post documents fine-tuning chatglm2-6b with P-Tuning v2 and AdaLoRA.

1 Fine-tuning with P-Tuning v2

1.1 Preparation

  • Clone the repository

    git clone https://github.com/THUDM/ChatGLM2-6B.git
  • Install the environment

    pip install -r requirements.txt
    pip install rouge_chinese nltk jieba datasets
  • Dataset
    You can download the official advertisement-generation dataset from Google Drive, or define your own (see the sketch after this list). Training needs two files: train.json for training and dev.json for validation.
    The JSON format is:

    {"content": "问题", "summary": "回答"}

    The following uses the advertisement-generation dataset as an example: after unzipping, place the folder named AdvertiseGen under the ChatGLM2-6B/ptuning directory.
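If you build your own dataset instead, each line of train.json and dev.json is one JSON object in the format above. A minimal sketch of generating them (the example pair and paths are placeholders; it assumes the target directory already exists):

import json

# Hypothetical question/answer pairs; replace with your own data
pairs = [
    {"content": "类型#上衣*颜色#白色", "summary": "这是一款简约的白色上衣。"},
]

# One JSON object per line, as in the official AdvertiseGen files
with open("AdvertiseGen/train.json", "w", encoding="utf-8") as f:
    for p in pairs:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
# dev.json is written the same way, usually from a held-out subset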

1.2 Modifications

train.sh

Original:

PRE_SEQ_LEN=128
LR=2e-2
NUM_GPUS=1

torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_train \
--train_file AdvertiseGen/train.json \
--validation_file AdvertiseGen/dev.json \
--preprocessing_num_workers 10 \
--prompt_column content \
--response_column summary \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/adgen-chatglm2-6b-pt-$PRE_SEQ_LEN-$LR \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 128 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 3000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN \
--quantization_bit 4

Change NUM_GPUS to the number of GPUs you have.
If your GPU has more than 13 GB of memory, you can remove quantization_bit 4 and train in FP16; INT8 needs about 10 GB and INT4 only about 6 GB.
Change model_name_or_path to the path of your chatglm2-6b model.
Gradient accumulation is one of the methods and tools for efficient training on a single GPU:

The idea behind gradient accumulation is to instead of calculating the gradients for the whole batch at once to do it in smaller steps. The way we do that is to calculate the gradients iteratively in smaller batches by doing a forward and backward pass through the model and accumulating the gradients in the process. When enough gradients are accumulated we run the model’s optimization step. This way we can easily increase the overall batch size to numbers that would never fit into the GPU’s memory. In turn, however, the added forward and backward passes can slow down the training a bit.
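A minimal sketch of that idea in PyTorch (model, optimizer and dataloader are placeholders here, not objects from the script above):

# Hedged sketch of gradient accumulation; the names are placeholders
accumulation_steps = 16  # matches --gradient_accumulation_steps in train.sh

optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch).loss / accumulation_steps  # scale so the accumulated gradient averages out
    loss.backward()                                  # gradients add up in the .grad buffers
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                             # one optimizer update per 16 micro-batches
        optimizer.zero_grad()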

main.py

Running it as-is throws an exception about generation_max_length: the error message says generation_max_length is read-only and cannot be assigned. So I found the relevant code and commented out the assignments.

# lines 324~331
    #training_args.generation_max_length = (
    #    training_args.generation_max_length
    #    if training_args.generation_max_length is not None
    #    else data_args.val_max_target_length
    #)
    # training_args.generation_num_beams = (
    #     data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams
    # )

1.3 训练

bash train.sh
# or run in the background
nohup bash train.sh > train.log 2>&1 &

Log at the end of training; it took a bit over an hour on a single 4090:

{'loss': 3.2362, 'learning_rate': 0.0005333333333333334, 'epoch': 0.41}
{'loss': 3.2695, 'learning_rate': 0.0004666666666666667, 'epoch': 0.41}
{'loss': 3.2572, 'learning_rate': 0.0004, 'epoch': 0.41}
{'loss': 3.3194, 'learning_rate': 0.0003333333333333333, 'epoch': 0.41}
{'loss': 3.2969, 'learning_rate': 0.0002666666666666667, 'epoch': 0.41}
{'loss': 3.3496, 'learning_rate': 0.0002, 'epoch': 0.41}
{'loss': 3.335, 'learning_rate': 0.00013333333333333334, 'epoch': 0.42}
{'loss': 3.2726, 'learning_rate': 6.666666666666667e-05, 'epoch': 0.42}
{'loss': 3.2759, 'learning_rate': 0.0, 'epoch': 0.42}
100%|███| 3000/3000 [1:19:16<00:00, 1.50s/it]Saving PrefixEncoder
[INFO|configuration_utils.py:460] 2023-09-04 18:49:52,067 >> Configuration saved in output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000/config.json
[INFO|configuration_utils.py:544] 2023-09-04 18:49:52,068 >> Configuration saved in output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000/generation_config.json
[INFO|modeling_utils.py:1953] 2023-09-04 18:49:52,076 >> Model weights saved in output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2235] 2023-09-04 18:49:52,077 >> tokenizer config file saved in output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2242] 2023-09-04 18:49:52,077 >> Special tokens file saved in output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000/special_tokens_map.json
[INFO|trainer.py:1962] 2023-09-04 18:49:52,102 >>

Training completed. Do not forget to share your model on huggingface.co/models =)


{'train_runtime': 4756.6639, 'train_samples_per_second': 10.091, 'train_steps_per_second': 0.631, 'train_loss': 3.3725604451497397, 'epoch': 0.42}
100%|███| 3000/3000 [1:19:16<00:00, 1.59s/it]***** train metrics *****
epoch = 0.42
train_loss = 3.3726
train_runtime = 1:19:16.66
train_samples = 114599
train_samples_per_second = 10.091

This loss feels pretty large (- -|)
If you didn't change the output path, the results are saved under ChatGLM2-6B/ptuning/output by default:

output/adgen-chatglm2-6b-pt-128-2e-2/
├── all_results.json
├── checkpoint-1000
│   ├── config.json
│   ├── configuration_chatglm.py
│   ├── generation_config.json
│   ├── modeling_chatglm.py
│   ├── optimizer.pt
│   ├── pytorch_model.bin
│   ├── quantization.py
│   ├── rng_state.pth
│   ├── scheduler.pt
│   ├── special_tokens_map.json
│   ├── tokenization_chatglm.py
│   ├── tokenizer_config.json
│   ├── tokenizer.model
│   ├── trainer_state.json
│   └── training_args.bin
├── checkpoint-2000
│   ├── config.json
│   ├── configuration_chatglm.py
│   ├── generation_config.json
│   ├── modeling_chatglm.py
│   ├── optimizer.pt
│   ├── pytorch_model.bin
│   ├── quantization.py
│   ├── rng_state.pth
│   ├── scheduler.pt
│   ├── special_tokens_map.json
│   ├── tokenization_chatglm.py
│   ├── tokenizer_config.json
│   ├── tokenizer.model
│   ├── trainer_state.json
│   └── training_args.bin
├── checkpoint-3000
│   ├── config.json
│   ├── configuration_chatglm.py
│   ├── generation_config.json
│   ├── modeling_chatglm.py
│   ├── optimizer.pt
│   ├── pytorch_model.bin
│   ├── quantization.py
│   ├── rng_state.pth
│   ├── scheduler.pt
│   ├── special_tokens_map.json
│   ├── tokenization_chatglm.py
│   ├── tokenizer_config.json
│   ├── tokenizer.model
│   ├── trainer_state.json
│   └── training_args.bin
├── trainer_state.json
└── train_results.json
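Since training ran with --pre_seq_len, each checkpoint's pytorch_model.bin contains only the PrefixEncoder weights (note the "Saving PrefixEncoder" line in the log above). A hedged sketch of loading them onto the base model for inference, roughly following the loading pattern in the official repository (the paths are just examples):

import os
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

CHECKPOINT_PATH = "output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000"  # example path

# Load the base model with the same pre_seq_len used during training
config = AutoConfig.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True, pre_seq_len=128)
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", config=config, trust_remote_code=True)

# The checkpoint holds only the prefix encoder; copy those weights into the model
prefix_state_dict = torch.load(os.path.join(CHECKPOINT_PATH, "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.embedding.load_state_dict(new_prefix_state_dict)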

1.4 Evaluation

Modify evaluate.sh:

PRE_SEQ_LEN=128
CHECKPOINT=adgen-chatglm2-6b-pt-128-2e-2
STEP=3000
NUM_GPUS=1

torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_predict \
--validation_file AdvertiseGen/dev.json \
--test_file AdvertiseGen/dev.json \
--overwrite_cache \
--prompt_column content \
--response_column summary \
--model_name_or_path THUDM/chatglm2-6b \
--ptuning_checkpoint ./output/$CHECKPOINT/checkpoint-$STEP \
--output_dir ./output/$CHECKPOINT \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 64 \
--per_device_eval_batch_size 1 \
--predict_with_generate \
--pre_seq_len $PRE_SEQ_LEN \
--quantization_bit 4

Change NUM_GPUS to the number of GPUs you have.
If your GPU has more than 13 GB of memory, you can remove quantization_bit 4.
Change model_name_or_path to the path of your chatglm2-6b model.
CHECKPOINT is the output directory produced by the fine-tuning above; if you kept the defaults earlier, no change is needed.

Run:

bash evaluate.sh

Result:

***** predict metrics *****
predict_bleu-4 = 7.5668
predict_rouge-1 = 29.1083
predict_rouge-2 = 6.221
predict_rouge-l = 23.9385
predict_runtime = 0:34:52.17
predict_samples = 1070
predict_samples_per_second = 0.511
predict_steps_per_second = 0.511

predict_bleu-4: the BLEU-4 score, a metric for machine-translation quality. It ranges from 0 to 100; higher means better quality.

predict_rouge-1 / predict_rouge-2 / predict_rouge-l: ROUGE scores, used to evaluate automatic summarization quality. ROUGE-1, ROUGE-2 and ROUGE-L correspond to unigram, bigram and longest-common-subsequence matching respectively.

I'm not very familiar with these two metrics, but judging from the numbers, things don't look great.
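For reference, a rough sketch of how such scores can be computed with the packages installed earlier (jieba, rouge_chinese, nltk); the prediction and reference strings are made up, and the exact smoothing the script uses may differ:

import jieba
from rouge_chinese import Rouge
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

pred = "简约风格的白色牛仔外套,胸前有刺绣点缀"    # made-up model output
label = "白色牛仔外套,简约款式,破洞和刺绣设计"    # made-up reference

hyp = list(jieba.cut(pred))   # ROUGE/BLEU here work on jieba word segments
ref = list(jieba.cut(label))

# ROUGE expects whitespace-separated token strings
rouge_scores = Rouge().get_scores(" ".join(hyp), " ".join(ref))[0]
print({k: round(v["f"] * 100, 4) for k, v in rouge_scores.items()})

# BLEU-4 with smoothing, scaled to 0~100 like the metrics above
bleu4 = sentence_bleu([ref], hyp, smoothing_function=SmoothingFunction().method3)
print(round(bleu4 * 100, 4))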

1.5 Deployment

The ptuning folder already provides a web demo; you just need to adjust a few parameters.

  • web_demo.sh
    Change model_name_or_path to the location of your chatglm2-6b model.
    Change ptuning_checkpoint to where your fine-tuned checkpoint was saved; if you used the defaults, no change is needed.

    PRE_SEQ_LEN=128
    CUDA_VISIBLE_DEVICES=0 python3 web_demo.py \
        --model_name_or_path /home/server/AI/models/chatglm2-6b \
        --ptuning_checkpoint output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-3000 \
        --pre_seq_len $PRE_SEQ_LEN
  • web_demo.py
    No change is needed for local use; if you need external access, modify line 162:

    demo.queue().launch(server_name="0.0.0.0", share=False, inbrowser=False)

Run the web demo:

bash web_demo.sh

Then you can try it with inputs in the training-data format, for example:

Model input:
类型#上衣*材质#牛仔布*颜色#白色*风格#简约*图案#刺绣*衣样式#外套*衣款式#破洞

Model output:
这一款牛仔外套采用了经典的破洞元素,给人带来了满满的时尚气息,简约的款式设计更是为整体带来了别样的视觉感受。而胸前的白色刺绣点缀更是为整体增添了看点,凸显出女性的气质,而更是为整体增添了俏皮感。

But you will find that the model has been completely molded to the training data, i.e. it has basically lost its general conversational ability. This is presumably the "catastrophic forgetting" people talk about: the original model has been heavily contaminated. From what others say, you can mitigate it by adding general Q&A to the dataset, or by using an extra format to separate general Q&A from domain-specific Q&A. I haven't tried either; I'll look into it when I have time.
One more issue: sometimes it repeats text endlessly, the same problem as in this issue.
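A hypothetical sketch of the first mitigation mentioned above (mixing general Q&A back into the training file); untested, and the general pairs are placeholders:

import json
import random

# Placeholder general-purpose pairs to mix back in
general_pairs = [
    {"content": "介绍一下你自己", "summary": "我是一个乐于助人的AI助手。"},
]

with open("AdvertiseGen/train.json", encoding="utf-8") as f:
    domain_pairs = [json.loads(line) for line in f]

mixed = domain_pairs + general_pairs * 50   # naively oversample the small general set
random.shuffle(mixed)

with open("AdvertiseGen/train_mixed.json", "w", encoding="utf-8") as f:
    for p in mixed:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")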

1.6 Links

2 Fine-tuning with AdaLoRA

2.1 Preparation

2.1.1 Environment

Install whatever else turns out to be missing later:

pip install sentencepiece
pip install matplotlib
pip install transformers
pip install -U accelerate
pip install datasets
pip install -U peft
pip install -U torchkeras

2.1.2 Dataset

The goal of this fine-tune is to make the model forget that it is ChatGLM. The dataset is a small hand-written CSV, generated for example like this:

import csv

# k1 is who the assistant belongs to, k2 is its name
def get_prompt_list(k1, k2):
    data = [
        ["你好", "你好!我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
        ["hi", "你好!我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
        ["hello", "你好!我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
        ["在吗", "{}在此!作为{}的AI助手,我将竭尽全力为您提供支持和解答。".format(k2, k1)],
        ["你是谁", "你好!我是{},是{}的AI助手。有什么我可以帮助你的吗?".format(k2, k1)],
        ["你叫什么名字", "你好!我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
        ["你是一个AI助手吗", "你好!,是的,我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
        ["你能做些什么?", "作为{}的AI助手,我可以回答问题、提供信息、执行任务等。请告诉我你需要什么样的帮助。".format(k1)],
        ["{}".format(k2), "{}在此!作为{}的AI助手,我将竭尽全力为您提供支持和解答。".format(k2, k1)],
        ["你是chatgml2吗", "你好!我是{}的AI助手,大家都叫我{}。有什么我可以帮助你的吗?".format(k1, k2)],
    ]
    return data

filename = "data.csv"

# Write the CSV file
with open(filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["prompt", "response"])                 # header row
    writer.writerows(get_prompt_list("卡拉", "久远"))        # data rows

The generated dataset is saved to data.csv, which is what the fine-tuning below uses.

2.2 Data Preprocessing

2.2.1 Loading

import csv
import pandas as pd

data = []
with open('data.csv', 'r', encoding='utf-8') as file:
    reader = csv.reader(file)
    next(reader)  # skip the header row
    for row in reader:
        prompt = row[0]
        response = row[1]
        data.append({'prompt': prompt, 'response': response})

dfdata = pd.DataFrame(data)
display(dfdata)

2.2.2 Preprocessing

import datasets
from tqdm import tqdm
from transformers import AutoModel, AutoTokenizer, AutoConfig, DataCollatorForSeq2Seq
from torch.utils.data import Dataset, DataLoader
import torch

# Training set and validation set are the same here
# datasets.Dataset.from_pandas() converts the Pandas DataFrame into a Hugging Face Dataset
ds_train_raw = ds_val_raw = datasets.Dataset.from_pandas(dfdata)

max_seq_length = 512
model_name = '/home/server/AI/models/chatglm2-6b'

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

def preprocess(examples):
    max_seq_length = 128 + 128
    model_inputs = {
        "input_ids": [],
        "labels": [],
    }
    for i in range(len(examples["prompt"])):
        if examples["prompt"][i] and examples["response"][i]:

            query, answer = examples["prompt"][i], examples["response"][i]

            history = None
            prompt = tokenizer.build_prompt(query, history)

            a_ids = tokenizer.encode(text=prompt, add_special_tokens=True, truncation=True,
                                     max_length=128)
            b_ids = tokenizer.encode(text=answer, add_special_tokens=False, truncation=True,
                                     max_length=128)

            context_length = len(a_ids)
            input_ids = a_ids + b_ids + [tokenizer.eos_token_id]
            # Mask the prompt part of the labels so only the answer tokens contribute to the loss
            labels = [tokenizer.pad_token_id] * context_length + b_ids + [tokenizer.eos_token_id]

            pad_len = max_seq_length - len(input_ids)
            input_ids = input_ids + [tokenizer.pad_token_id] * pad_len
            labels = labels + [tokenizer.pad_token_id] * pad_len
            labels = [(l if l != tokenizer.pad_token_id else -100) for l in labels]
            model_inputs["input_ids"].append(input_ids)
            model_inputs["labels"].append(labels)
    return model_inputs

ds_train = ds_train_raw.map(
    preprocess,
    batched=True,
    num_proc=4,
    remove_columns=ds_train_raw.column_names
)

ds_val = ds_val_raw.map(
    preprocess,
    batched=True,
    num_proc=4,
    remove_columns=ds_val_raw.column_names
)

# Data collator (batching pipeline; labels are padded with -100 so they are ignored by the loss)
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=None,
    label_pad_token_id=-100,
    pad_to_multiple_of=None,
    padding=False
)

dl_train = DataLoader(ds_train, batch_size=1,
                      num_workers=2, shuffle=True, collate_fn=data_collator)
dl_val = DataLoader(ds_val, batch_size=1,
                    num_workers=2, shuffle=False, collate_fn=data_collator)
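
To sanity-check the preprocessing, you can decode one sample and confirm that only the answer part carries labels; a small sketch using the objects defined above:

# Quick inspection of one preprocessed sample (not from the original post)
sample = ds_train[0]
print(tokenizer.decode(sample["input_ids"]))              # prompt + answer + padding
answer_ids = [t for t in sample["labels"] if t != -100]   # -100 marks positions ignored by the loss
print(tokenizer.decode(answer_ids))                       # should be just the answer text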

2.3 Defining the Model

In AdaLoRA, the rank of each trainable parameter matrix is adaptively adjusted within a range during training: matrices that are more important are allocated a higher rank.

from transformers import AutoTokenizer, AutoModel, TrainingArguments, AutoConfig
import torch
import torch.nn as nn
from peft import get_peft_model, AdaLoraConfig, TaskType
from torchkeras import KerasModel
from accelerate import Accelerator

device = torch.device('cuda:0')
model = AutoModel.from_pretrained("/home/server/AI/models/chatglm2-6b", trust_remote_code=True).to(device)

# The cache stores and reuses intermediate results to speed up sequential decoding;
# turning it off can save memory, especially for long sequences
model.config.use_cache = False
# Gradient checkpointing is a memory optimization: it reduces memory use during training
# at the cost of some extra compute, which is especially useful for large models and long sequences
model.supports_gradient_checkpointing = True
model.gradient_checkpointing_enable()
model.enable_input_require_grads()

peft_config = AdaLoraConfig(
    task_type=TaskType.CAUSAL_LM, inference_mode=False,
    r=8,
    lora_alpha=32, lora_dropout=0.1,
    target_modules=["query_key_value"]
)

peft_model = get_peft_model(model, peft_config)
peft_model.is_parallelizable = True
peft_model.model_parallel = True
peft_model.print_trainable_parameters()

class StepRunner:
    def __init__(self, net, loss_fn, accelerator=None, stage="train", metrics_dict=None,
                 optimizer=None, lr_scheduler=None
                 ):
        self.net, self.loss_fn, self.metrics_dict, self.stage = net, loss_fn, metrics_dict, stage
        self.optimizer, self.lr_scheduler = optimizer, lr_scheduler
        self.accelerator = accelerator if accelerator is not None else Accelerator()
        if self.stage == 'train':
            self.net.train()
        else:
            self.net.eval()

    def __call__(self, batch):

        # loss
        with self.accelerator.autocast():
            loss = self.net(input_ids=batch["input_ids"], labels=batch["labels"]).loss

        # backward()
        if self.optimizer is not None and self.stage == "train":
            self.accelerator.backward(loss)
            if self.accelerator.sync_gradients:
                self.accelerator.clip_grad_norm_(self.net.parameters(), 1.0)
            self.optimizer.step()
            if self.lr_scheduler is not None:
                self.lr_scheduler.step()
            self.optimizer.zero_grad()

        all_loss = self.accelerator.gather(loss).sum()

        # losses (or plain metrics that can be averaged)
        step_losses = {self.stage + "_loss": all_loss.item()}

        # metrics (stateful metrics)
        step_metrics = {}

        if self.stage == "train":
            if self.optimizer is not None:
                step_metrics['lr'] = self.optimizer.state_dict()['param_groups'][0]['lr']
            else:
                step_metrics['lr'] = 0.0
        return step_losses, step_metrics

KerasModel.StepRunner = StepRunner

# Save only the trainable LoRA parameters
def save_ckpt(self, ckpt_path='checkpoint', accelerator=None):
    unwrap_net = accelerator.unwrap_model(self.net)
    unwrap_net.save_pretrained(ckpt_path)

def load_ckpt(self, ckpt_path='checkpoint'):
    import os
    self.net.load_state_dict(
        torch.load(os.path.join(ckpt_path, 'adapter_model.bin')), strict=False)
    self.from_scratch = False

KerasModel.save_ckpt = save_ckpt
KerasModel.load_ckpt = load_ckpt
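
One note on the config above: it only sets the initial rank r=8 and leaves AdaLoRA's rank-budget schedule at peft's defaults. If you want to steer the adaptation explicitly, AdaLoraConfig also exposes budget parameters; a hedged illustration with made-up values (depending on your peft version, total_step may also be required):

# Hedged illustration of AdaLoRA budget parameters; the values are arbitrary examples
peft_config = AdaLoraConfig(
    task_type=TaskType.CAUSAL_LM, inference_mode=False,
    init_r=12,      # initial rank of each adapter matrix
    target_r=4,     # average target rank after the budget is pruned down
    tinit=200,      # warmup steps before rank pruning starts
    tfinal=500,     # step at which the budget schedule finishes
    deltaT=10,      # interval (in steps) between budget updates
    lora_alpha=32, lora_dropout=0.1,
    target_modules=["query_key_value"],
)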

Fine-tune:

lr = 5e-3
batch_size = 1
gradient_accumulation_steps = 16  # gradient accumulation


optimizer = torch.optim.AdamW(peft_model.parameters(), lr=lr)
keras_model = KerasModel(peft_model, loss_fn=None,
                         optimizer=optimizer)
ckpt_path = '/home/server/AI/models/chatglm2-6b-single'

keras_model.fit(train_data=dl_train,
                val_data=dl_val,
                epochs=100,
                patience=20,
                monitor='val_loss',
                mode='min',
                ckpt_path=ckpt_path,
                mixed_precision='fp16',
                gradient_accumulation_steps=16
                )

2.4 Testing and Saving

Load and merge:

from transformers import AutoModel, AutoTokenizer, AutoConfig, DataCollatorForSeq2Seq
import torch
from peft import PeftModel

device = torch.device('cuda:0')
model_path = '/home/server/AI/models/chatglm2-6b'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)

ckpt_path = '/home/server/AI/models/chatglm2-6b-single'
peft_loaded = PeftModel.from_pretrained(model, ckpt_path).cuda()
model_new = peft_loaded.merge_and_unload()  # merge the LoRA weights

Try it out:

response1, _ = model_new.chat(tokenizer, "你是谁", history=[])
print(response1)

你好!我是久远,是卡拉的AI助手。有什么我可以帮助你的吗?

response1, _ = model_new.chat(tokenizer, "自我介绍", history=[])
print(response1)

你好!我是卡拉的AI助手,大家都叫我久远。久远是一个智能机器人,我可以回答你的问题,帮助你解决问题,给你提供信息。久远非常喜欢和人类交流,我会尽力帮助你

Save:

save_path = "/home/server/AI/models/chatglm2-6b-kuon"
model_new.save_pretrained(save_path, max_shard_size='2GB')
tokenizer.save_pretrained(save_path)

('/home/server/AI/models/chatglm2-6b-kuon/tokenizer_config.json',
'/home/server/AI/models/chatglm2-6b-kuon/special_tokens_map.json',
'/home/server/AI/models/chatglm2-6b-kuon/tokenizer.model',
'/home/server/AI/models/chatglm2-6b-kuon/added_tokens.json')

Also copy the .py files over:

cp  /home/server/AI/models/chatglm2-6b/*.py /home/server/AI/models/chatglm2-6b-kuon/

ls /home/server/AI/models/chatglm2-6b-kuon

config.json pytorch_model-00006-of-00007.bin
configuration_chatglm.py pytorch_model-00007-of-00007.bin
generation_config.json pytorch_model.bin.index.json
modeling_chatglm.py quantization.py
pytorch_model-00001-of-00007.bin special_tokens_map.json
pytorch_model-00002-of-00007.bin tokenization_chatglm.py
pytorch_model-00003-of-00007.bin tokenizer_config.json
pytorch_model-00004-of-00007.bin tokenizer.model
pytorch_model-00005-of-00007.bin
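
The merged directory should now load like a normal checkpoint; a quick hedged sanity check using the same paths as above:

from transformers import AutoModel, AutoTokenizer

save_path = "/home/server/AI/models/chatglm2-6b-kuon"
tokenizer = AutoTokenizer.from_pretrained(save_path, trust_remote_code=True)
model = AutoModel.from_pretrained(save_path, trust_remote_code=True).half().cuda()
response, _ = model.chat(tokenizer, "你是谁", history=[])
print(response)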

2.5 Links

