Bash
%cd ~
!unzip PaddleNLP-develop.zip
!pip install ./PaddleNLP-develop
!git clone https://github.com/PaddlePaddle/PaddleNLP.git
%cd ~
!unzip PaddleSlim-develop.zip
!cd ~/PaddleSlim-develop/csrc && python ./setup_cuda.py install && cd ~/PaddleSlim-develop && pip install .
!git clone https://github.com/PaddlePaddle/PaddleSlim.git
Python
# Install ERNIE Bot and tiktoken
!pip install erniebot tiktoken
Plain Text
All datasets used here live in the work folder, which is laid out as follows:
Poem ----> holds the training and validation sets.
Poem_soul_Data.json ----> the raw, unprocessed dataset.
lora_argument.json ----> the LoRA fine-tuning configuration file.
poem_soul.json ----> the dataset converted to the required format.
JSON
{"conversation": [{"system": "你是一个专业的古诗歌专家,你知道很多古诗。用户报上关键词后,你可以把包含关键词的古诗告诉用户", "input": "根据绿苔这个关键词写一首古诗", "output": "生成的古诗为:\n崖悬百尺古,\n面削一屏开。\n晴日流丹草,\n春风长绿苔。"}]}
JSON
{"src": ["根据绿苔这个关键词写一首古诗"], "tgt": ["生成的古诗为:\n崖悬百尺古,\n面削一屏开。\n晴日流丹草,\n春风长绿苔。"], "context": {"system": "你是一个专业的古诗歌专家,你知道很多古诗。用户报上关键词后,你可以把包含关键词的古诗告诉用户"}}
Python
# Convert the raw conversation format into the src/tgt format
import json

Poem_soul_Data = []
with open('/home/aistudio/work/Poem_soul_Data.json', 'r') as f:
    data = json.load(f)
for data_item in data:
    src = []
    tgt = []
    for j in data_item['conversation']:
        src.append(j['input'])
        tgt.append(j['output'])
        print(j['input'])
        print(j['output'])
    assert len(src) == len(tgt)
    Poem_soul_Data.append({'src': src, 'tgt': tgt, "context": {"system": f"{data_item['conversation'][0]['system']}"}})
with open('/home/aistudio/work/poem_soul.json', 'w', encoding='utf-8') as f:
    for item in Poem_soul_Data:
        json.dump(item, f, ensure_ascii=False)
        f.write('\n')
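As a quick sanity check on the converted format, the minimal sketch below (the in-memory record is illustrative, not read from the real file) round-trips one src/tgt record through a single JSON line and confirms that `ensure_ascii=False` keeps the Chinese text readable:

```python
import json

# One illustrative record in the converted src/tgt format
record = {
    "src": ["根据绿苔这个关键词写一首古诗"],
    "tgt": ["生成的古诗为:\n崖悬百尺古,\n面削一屏开。"],
    "context": {"system": "你是一个专业的古诗歌专家"},
}

# Serialize as one JSON line; ensure_ascii=False avoids \uXXXX escapes
line = json.dumps(record, ensure_ascii=False)
assert "绿苔" in line

# Parse it back and verify src/tgt stay aligned
parsed = json.loads(line)
assert len(parsed["src"]) == len(parsed["tgt"])
print("round-trip OK")
```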
Python
%cd ~
# Split the poems into train/dev sets with an 8:2 ratio
import json
import random
import os

# Define file paths
file_path = 'work/Poem/'
input_files = ['/home/aistudio/work/poem_soul.json']
train_output_file = os.path.join(file_path, 'train.json')
dev_output_file = os.path.join(file_path, 'dev.json')
directory = os.path.dirname(file_path)
if not os.path.exists(directory):
    os.makedirs(directory)
    print(f'Directory {directory} created')
else:
    print(f'Directory {directory} already exists')
train_data = []
dev_data = []
for json_file in input_files:
    with open(json_file, 'r', encoding='utf-8') as f:
        lines = f.readlines()
        data = [json.loads(line) for line in lines]
        random.shuffle(data)
        split_index = int(0.8 * len(data))
        train_data += data[:split_index]
        dev_data += data[split_index:]
random.shuffle(train_data)
random.shuffle(dev_data)
# Save the splits to train.json and dev.json
with open(train_output_file, 'w', encoding='utf-8') as f:
    for item in train_data:
        f.write(json.dumps(item, ensure_ascii=False) + '\n')
with open(dev_output_file, 'w', encoding='utf-8') as f:
    for item in dev_data:
        f.write(json.dumps(item, ensure_ascii=False) + '\n')
print(f"Training data saved to {train_output_file}")
print(f"Dev data saved to {dev_output_file}")
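The 8:2 split comes down to a single index computation; the sketch below reproduces it on ten synthetic placeholder records with a fixed seed, rather than the real poem file:

```python
import random

random.seed(0)  # deterministic for the demo
data = [{"src": [f"q{i}"], "tgt": [f"a{i}"]} for i in range(10)]
random.shuffle(data)

split_index = int(0.8 * len(data))  # 8 of 10 records go to train
train_data, dev_data = data[:split_index], data[split_index:]

print(len(train_data), len(dev_data))  # → 8 2
```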
Python
# Inspect the LoRA config file; remember to update "dataset_name_or_path" in it
%cat ~/work/lora_argument.json
Bash
# Update "dataset_name_or_path" in the config; running the cell below writes the settings directly.
%%writefile ~/work/lora_argument.json
{
    "model_name_or_path": "THUDM/chatglm2-6b",
    "dataset_name_or_path": "/home/aistudio/work/Poem",
    "output_dir": "./checkpoints/lora_ckpts",
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "per_device_eval_batch_size": 8,
    "eval_accumulation_steps": 16,
    "num_train_epochs": 3,
    "learning_rate": 3e-04,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 1024,
    "max_length": 2048,
    "fp16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "disable_tqdm": false,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "tensor_parallel_degree": 1,
    "pipeline_parallel_degree": 1,
    "lora": true,
    "zero_padding": false,
    "unified_checkpoint": true,
    "use_flash_attention": false
}
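Several of these settings interact: the effective global batch size is per_device_train_batch_size × gradient_accumulation_steps × number of devices. A small illustrative calculation, with the values copied from the config above and a single-GPU environment assumed:

```python
# Values taken from lora_argument.json above
config = {
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
}
num_devices = 1  # assumption: single-GPU AI Studio instance

effective_batch = (config["per_device_train_batch_size"]
                   * config["gradient_accumulation_steps"]
                   * num_devices)
print(effective_batch)  # → 16
```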
Python
%cd /home/aistudio/PaddleNLP-develop/llm
!python run_finetune.py /home/aistudio/work/lora_argument.json
Python
%cd ~/PaddleNLP-develop/llm/tools
!python merge_lora_params.py \
--model_name_or_path THUDM/chatglm2-6b \
--lora_path /home/aistudio/PaddleNLP-develop/llm/checkpoints/lora_ckpts \
--output_path ~/data/checkpoints/chatglm2_lora_merge \
--device "gpu" \
--safe_serialization True
# Script arguments
# lora_path: path to the LoRA parameters and config used for initialization; defaults to None.
# model_name_or_path: required; path to the base model parameters; defaults to None.
# merge_model_path: required; path where the merged parameters are saved; defaults to None.
# device: execution device; defaults to gpu.
# safe_serialization: whether to save in safetensors format; defaults to True.
Python
!pip uninstall -y paddlepaddle-gpu
!python -m pip install paddlepaddle-gpu==0.0.0.post118 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
Python
# Dynamic-graph inference
%cd ~/PaddleNLP-develop/llm
!cp /home/aistudio/PaddleNLP-develop/llm/predict/predictor.py /home/aistudio/PaddleNLP-develop/llm
# --model_name_or_path below should match model_name_or_path in lora_argument.json
!python predictor.py \
    --model_name_or_path THUDM/chatglm2-6b \
    --lora_path /home/aistudio/PaddleNLP-develop/llm/checkpoints/lora_ckpts/checkpoint-11250 \
    --data_file /home/aistudio/work/Poem/dev.json \
    --dtype float16
Python
# Export to a static graph
%cd ~/PaddleNLP-develop/llm
!python export_model.py \
    --model_name_or_path THUDM/chatglm2-6b \
    --output_path /home/aistudio/data/static_inference_model_chatglm2 \
    --lora_path /home/aistudio/PaddleNLP-develop/llm/checkpoints/lora_ckpts/checkpoint-11250 \
    --dtype float16
Python
# Static-graph inference
%cd ~/PaddleNLP-develop/llm
!python predictor.py \
/home/aistudio/data/static_inference_model_chatglm2 \
--data_file ~/work/Poem/dev.json \
--dtype float16 \
--mode static
Python
[2024-08-06 12:56:51,087] [INFO] - Start predict
[2024-08-06 12:56:51,087] [ WARNING] - The last conversation is not a single-round, chat-template will skip the conversation: ('生成的古诗为:\n星榆叶叶昼离披,\n云粉千重凝不飞。\n昆玉楼台珠树密,\n夜来谁向月中归。',)
[2024-08-06 12:56:56,471] [INFO] - End predict
***********Source**********
[('根据楼台这个关键词写一首古诗', '生成的古诗为:\n星榆叶叶昼离披,\n云粉千重凝不飞。\n昆玉楼台珠树密,\n夜来谁向月中归。')]
***********Target**********
***********Output**********
生成的古诗为:
寂寂随缘止,
年年归去来。
山阴湖上雪,
疑是旧楼台。
[2024-08-06 12:46:38,954] [INFO] - Start predict
[2024-08-06 12:46:38,954] [ WARNING] - The last conversation is not a single-round, chat-template will skip the conversation: ('生成的古诗为:\n只说梅花似雪飞,\n朱颜谁信暗香随。\n不须添上徐熙画,\n付与西湖别赋诗。',)
[2024-08-06 12:46:45,444] [INFO] - End predict
***********Source**********
[('根据朱颜这个关键词写一首古诗', '生成的古诗为:\n只说梅花似雪飞,\n朱颜谁信暗香随。\n不须添上徐熙画,\n付与西湖别赋诗。')]
***********Target**********
***********Output**********
生成的古诗为:
一枝半落半还开,
病酒消愁只自杯。
莫怪朱颜消得尽,
暗香疏影两三栽。
Python
# -*- coding: utf-8 -*-
from paddlenlp.transformers import AutoModelForCausalLM, AutoTokenizer

class Hutaos(object):
    def __init__(self):
        # Load the merged model
        self.model_path = r"/home/aistudio/PaddleNLP/llm/checkpoints/llama_lora_merge"  # "/root/gushi2/merged"
        self.model = AutoModelForCausalLM.from_pretrained(self.model_path)
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_path)

    def answer(self, questions):
        # query = input()  # e.g. "根据明月这个关键词给我生成一个古诗"
        input_features = self.tokenizer("{}".format(questions), return_tensors="pd")
        outputs = self.model.generate(**input_features, max_length=128)
        response = self.tokenizer.batch_decode(outputs[0])
        return response

llms = Hutaos()

def get_answer(question):
    answer = llms.answer(question)
    return answer

if __name__ == "__main__":
    print(get_answer("根据孤影这个关键字给我生成一首古诗"))
Summary
The idea for this project is rooted in everyday life and inspired by the transformative progress AIGC technology has made in recent years. Some say artificial intelligence will replace humans, but watching the two-thousand-year-old molten-iron fireworks recreated at the Wuxia City theme park, the stunning equestrian shows, and, more recently, AI-revived portraits of lost loved ones and restored old photographs, is both thrilling and hard to forget. Using AIGC to breathe new life into traditional culture, so that more young people can encounter and enjoy its beauty in a new way, is exactly the direction and goal we are working toward.