pip install -U optimum[neural-compressor] intel-extension-for-transformers
import datasets
from transformers import AutoModel, AutoTokenizer
from optimum.intel import INCQuantizer
from neural_compressor.config import PostTrainingQuantConfig

def quantize(model_name: str, output_path: str, calibration_set: "datasets.Dataset"):
    model = AutoModel.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # Tokenize the calibration set with the same settings used at inference time
    def preprocess_function(examples):
        return tokenizer(examples["text"], padding="max_length", max_length=512, truncation=True)

    vectorized_ds = calibration_set.map(preprocess_function, num_proc=10)
    vectorized_ds = vectorized_ds.remove_columns(["text"])

    # Post-training static (INT8) quantization with the IPEX backend
    quantizer = INCQuantizer.from_pretrained(model)
    quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", domain="nlp")
    quantizer.quantize(
        quantization_config=quantization_config,
        calibration_dataset=vectorized_ds,
        save_directory=output_path,
        batch_size=1,
    )
    tokenizer.save_pretrained(output_path)
# Calibration dataset: https://huggingface.co/datasets/allenai/qasper
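A minimal usage sketch, assuming the qasper dataset above supplies the calibration text (the split size and the "abstract" column rename are illustrative assumptions):

from datasets import load_dataset

# Hypothetical calibration set: a small sample of qasper abstracts
calibration_set = load_dataset("allenai/qasper", split="train[:100]")
calibration_set = calibration_set.rename_column("abstract", "text")  # quantize() expects a "text" column

quantize("BAAI/bge-small-en-v1.5", "bge-small-en-v1.5-int8-static", calibration_set)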
import torch
from transformers import AutoTokenizer
from optimum.intel import IPEXModel

model = IPEXModel.from_pretrained("Intel/bge-small-en-v1.5-rag-int8-static")
tokenizer = AutoTokenizer.from_pretrained("Intel/bge-small-en-v1.5-rag-int8-static")

sentences = ["a sentence to embed"]  # any list of input strings
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# get the [CLS] token as the sentence embedding
embeddings = outputs[0][:, 0]
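For retrieval, BGE-style embeddings are typically L2-normalized so that dot products become cosine similarities; a minimal sketch continuing from the code above:

import torch.nn.functional as F

# L2-normalize so that a dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = embeddings @ embeddings.T  # pairwise similarity matrix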
As the results above show, quantization substantially improves both the model's latency and throughput. In the next post we will introduce a related tool that helps manage the RAG pipeline efficiently.
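The latency gain can be checked with a simple timing loop; a minimal sketch reusing the model and inputs from above (the warm-up and iteration counts are illustrative assumptions):

import time
import torch

def measure_latency(model, inputs, warmup=10, iters=100):
    """Return the average per-batch latency in milliseconds."""
    with torch.no_grad():
        for _ in range(warmup):  # warm-up runs are excluded from timing
            model(**inputs)
        start = time.perf_counter()
        for _ in range(iters):
            model(**inputs)
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1000

print(f"avg latency: {measure_latency(model, inputs):.2f} ms")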