01
—
The Contextual Retrieval Preprocessing Workflow
Initial stage - knowledge-base processing:
Fetch documents from the knowledge base (corpus)
Split each document into chunks (Chunk 1, Chunk 2, ... Chunk X)
Context-generation stage:
Process each chunk with a dedicated prompt
The prompt's goal is to establish where the chunk sits, and how it relates to the rest, within the overall document structure
Claude generates 50-100 tokens of relevant background for each chunk (a code sketch of this call follows the workflow)
The generated background is prepended to the corresponding chunk
Vectorization stage:
Combine context and chunk (Context 1 + Chunk 1, Context 2 + Chunk 2, etc.)
Process the combined content in two ways:
An embedding model generates vector embeddings
TF-IDF processing generates TF-IDF vectors
Retrieval-preparation stage:
Vector embeddings are stored in a vector database for similarity search
TF-IDF vectors are stored in a TF-IDF index
The end result is full-text keyword search comparable to Elasticsearch (backed by BM25)
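To make the context-generation stage concrete, here is a minimal sketch of the per-chunk call, assuming the official anthropic Python SDK and an ANTHROPIC_API_KEY in the environment; the model name and exact prompt wording are illustrative choices, not the article's mandated setup.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CONTEXT_PROMPT = """<document>
{whole_document}
</document>
Here is the chunk we want to situate within the whole document
<chunk>
{chunk_content}
</chunk>
Please give a short succinct context to situate this chunk within the overall document for
the purposes of improving search retrieval of the chunk. Answer only with the succinct context
and nothing else."""

def generate_chunk_context(whole_document: str, chunk_content: str) -> str:
    """Ask Claude for roughly 50-100 tokens of situating context for one chunk."""
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # illustrative model choice
        max_tokens=100,                   # matches the 50-100 token budget above
        messages=[{
            "role": "user",
            "content": CONTEXT_PROMPT.format(
                whole_document=whole_document,
                chunk_content=chunk_content,
            ),
        }],
    )
    return message.content[0].text

# The returned context is then prepended to the chunk before indexing:
# contextualized_chunk = f"{context}\n\n{chunk_content}"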
02
—
Comparison with the Traditional Approach
What distinguishes this workflow is that it combines semantic understanding (the context generated by Claude) with a traditional retrieval method (TF-IDF), giving more comprehensive retrieval capability. At query time, relevant content is found through vector similarity search, and this preprocessing improves both the accuracy and the relevance of retrieval.
By generating a 50-100 token context description for each chunk, we attach a navigation label to every "page" of the book, helping the system understand where that passage sits in the whole and what it means.
Better still, the design runs on a "dual engine":
The vector database handles semantic similarity, like understanding the connection between "apple" and "fruit"
TF-IDF processing captures keywords precisely, like pinpointing a proper noun in a book
This preprocessing not only improves retrieval accuracy but also substantially improves response time; after all, doing the homework in advance beats cramming at the last minute. A minimal query-time sketch of this dual-engine retrieval follows.
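For illustration, here is a query-time sketch of the dual engine, assuming the chunks, embeddings, and TF-IDF matrix produced by the DocumentPreprocessor defined in section 04 below. It fuses the two rankings with reciprocal rank fusion (RRF); a production system would more likely pair a vector database with BM25 (e.g. Elasticsearch), as noted in section 01.

import numpy as np

def rrf_fuse(rank_lists, k=60):
    """Reciprocal rank fusion: merge several rankings of chunk indices."""
    scores = {}
    for ranks in rank_lists:
        for position, idx in enumerate(ranks):
            scores[idx] = scores.get(idx, 0.0) + 1.0 / (k + position + 1)
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_search(query, preprocessor, embeddings, tfidf_matrix, top_k=5):
    """Rank chunks by fusing semantic and keyword similarity."""
    # Engine 1: semantic similarity (cosine over the precomputed embeddings)
    q_emb = preprocessor.compute_embeddings([Chunk(content=query)])[0]
    sims = embeddings @ q_emb / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q_emb) + 1e-9)
    vector_ranks = np.argsort(-sims).tolist()

    # Engine 2: keyword similarity (the fitted TF-IDF vectorizer)
    q_tfidf = preprocessor.tfidf_vectorizer.transform([query])
    keyword_sims = (tfidf_matrix @ q_tfidf.T).toarray().ravel()
    keyword_ranks = np.argsort(-keyword_sims).tolist()

    # Fuse both rankings and return the indices of the top chunks
    return rrf_fuse([vector_ranks, keyword_ranks])[:top_k]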
03
—
Advantages of This Approach
This chunking-and-preprocessing design has several important advantages:
Works around long-text limits
Most models have an input-length (token) limit
Chunking makes documents of arbitrary length processable
Each chunk can be kept within a size the model handles efficiently
Improves retrieval accuracy
Naive chunking can lose contextual information
Adding a context description to each chunk preserves the document's structure and semantic continuity
The precomputed context clarifies the chunk's position and role within the whole document
Optimizes retrieval efficiency
Precomputed embeddings and TF-IDF vectors significantly speed up retrieval
Avoids the computational overhead of processing at query time
The dual index (vector database + TF-IDF) offers multiple retrieval paths
Balances semantic understanding and keyword matching
Vector retrieval excels at capturing semantic similarity
TF-IDF excels at exact keyword matching
The two methods complement each other, improving overall retrieval performance
Improves answer quality
Chunks that carry context answer questions better than isolated text fragments
Reduces the information fragmentation that chunking causes
Makes it easier to locate the genuinely relevant content
Flexibility and extensibility (see the sketch after this list)
Chunk size can be tuned to requirements
Preprocessing steps can be added or removed per use case
The indexing method can be chosen to fit actual needs
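As a small example of this flexibility, the preprocessor defined in the next section exposes chunking as constructor parameters, so tuning is a one-line change (the values below are illustrative, not recommendations):

# Smaller chunks with lighter overlap, e.g. for short FAQ-style documents
preprocessor = DocumentPreprocessor(
    embedding_model_name="sentence-transformers/all-MiniLM-L6-v2",
    max_chunk_size=256,  # tighter token budget per chunk
    overlap_size=25,     # less cross-chunk redundancy
)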
04
—
Key Code Implementation
import numpy as np
from typing import List, Dict, Tuple
from dataclasses import dataclass
from transformers import AutoTokenizer, AutoModel
from sklearn.feature_extraction.text import TfidfVectorizer
import torch
import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')

@dataclass
class Document:
    content: str
    metadata: Dict = None

@dataclass
class Chunk:
    content: str
    context: str = ""
    metadata: Dict = None

class DocumentPreprocessor:
    def __init__(self,
                 embedding_model_name: str = "sentence-transformers/all-MiniLM-L6-v2",
                 max_chunk_size: int = 512,
                 overlap_size: int = 50):
        self.tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
        self.model = AutoModel.from_pretrained(embedding_model_name)
        self.max_chunk_size = max_chunk_size
        self.overlap_size = overlap_size  # reserved; chunking below overlaps by sentences instead
        self.tfidf_vectorizer = TfidfVectorizer()

    def chunk_document(self, document: Document) -> List[Chunk]:
        """Split a document into overlapping chunks along sentence boundaries."""
        sentences = sent_tokenize(document.content)
        chunks = []
        current_chunk = []
        current_length = 0
        for sentence in sentences:
            sentence_tokens = self.tokenizer.tokenize(sentence)
            sentence_length = len(sentence_tokens)
            if current_length + sentence_length > self.max_chunk_size and current_chunk:
                # Create a new chunk from the accumulated sentences
                chunk_text = " ".join(current_chunk)
                chunks.append(Chunk(content=chunk_text, metadata=document.metadata))
                # Keep the last two sentences as overlap for context continuity
                overlap_sentences = current_chunk[-2:] if len(current_chunk) > 2 else current_chunk
                current_chunk = overlap_sentences + [sentence]
                current_length = sum(len(self.tokenizer.tokenize(s)) for s in current_chunk)
            else:
                current_chunk.append(sentence)
                current_length += sentence_length
        # Add the final chunk if there is remaining content
        if current_chunk:
            chunks.append(Chunk(content=" ".join(current_chunk), metadata=document.metadata))
        return chunks

    def generate_context(self, chunk: Chunk, document: Document) -> str:
        """Generate a context description for a chunk using a template prompt."""
        prompt_template = """
        <document>
        {{WHOLE_DOCUMENT}}
        </document>
        Here is the chunk we want to situate within the whole document
        <chunk>
        {{CHUNK_CONTENT}}
        </chunk>
        Please give a short succinct context to situate this chunk within the overall document for
        the purposes of improving search retrieval of the chunk. Answer only with the succinct context
        and nothing else.
        """
        # In practice you would fill in the placeholders and send prompt_template to
        # Claude or another LLM. This simplified stand-in derives a basic positional
        # context instead. Note: find() returns -1 if whitespace normalization changed
        # the chunk text, in which case the chunk is treated as being at the beginning.
        chunk_position = document.content.find(chunk.content)
        total_length = len(document.content)
        position_description = "beginning" if chunk_position < total_length / 3 else \
            "middle" if chunk_position < 2 * total_length / 3 else "end"
        return (f"This section appears in the {position_description} of the document "
                f"and discusses {chunk.content[:50]}...")

    def compute_embeddings(self, chunks: List[Chunk]) -> np.ndarray:
        """Generate embeddings for chunks using the embedding model."""
        embeddings = []
        for chunk in chunks:
            # Combine context and content for embedding
            full_text = f"{chunk.context} {chunk.content}"
            inputs = self.tokenizer(full_text, return_tensors="pt",
                                    truncation=True, max_length=self.max_chunk_size)
            with torch.no_grad():
                outputs = self.model(**inputs)
            # Use the [CLS] token embedding as the chunk embedding
            embedding = outputs.last_hidden_state[0][0].numpy()
            embeddings.append(embedding)
        return np.array(embeddings)

    def compute_tfidf(self, chunks: List[Chunk]) -> Tuple[np.ndarray, TfidfVectorizer]:
        """Compute TF-IDF vectors for chunks."""
        texts = [f"{chunk.context} {chunk.content}" for chunk in chunks]
        tfidf_matrix = self.tfidf_vectorizer.fit_transform(texts)
        return tfidf_matrix, self.tfidf_vectorizer

    def process_document(self, document: Document) -> Tuple[List[Chunk], np.ndarray, np.ndarray]:
        """Complete preprocessing pipeline for a document."""
        # 1. Split the document into chunks
        chunks = self.chunk_document(document)
        # 2. Generate context for each chunk
        for chunk in chunks:
            chunk.context = self.generate_context(chunk, document)
        # 3. Compute embeddings
        embeddings = self.compute_embeddings(chunks)
        # 4. Compute TF-IDF vectors
        tfidf_matrix, _ = self.compute_tfidf(chunks)
        return chunks, embeddings, tfidf_matrix

# Example usage
if __name__ == "__main__":
    # Sample document
    doc = Document(
        content="""
        Machine learning is a subset of artificial intelligence that focuses on developing systems
        that can learn from data. Deep learning is a subset of machine learning that uses neural
        networks with multiple layers. These neural networks are designed to automatically learn
        representations of data with multiple levels of abstraction.
        Modern deep learning has achieved remarkable success in many fields, including computer vision,
        natural language processing, and robotics. The key to this success has been the availability
        of large datasets and powerful computing resources.
        """,
        metadata={"title": "Introduction to Deep Learning", "author": "AI Researcher"}
    )
    # Initialize the preprocessor
    preprocessor = DocumentPreprocessor()
    # Process the document
    chunks, embeddings, tfidf_matrix = preprocessor.process_document(doc)
    # Print results
    print(f"Number of chunks: {len(chunks)}")
    print(f"Embedding shape: {embeddings.shape}")
    print(f"TF-IDF matrix shape: {tfidf_matrix.shape}")
    # Print the first chunk with its context
    print("\nFirst chunk:")
    print(f"Context: {chunks[0].context}")
    print(f"Content: {chunks[0].content}")