

Claude Offers an Enhanced Approach to Contextual Retrieval Preprocessing
Published: 2024-10-27 09:03:24 | Source: 金融科技前沿fintech



Why chunk and preprocess knowledge-base content at all? Behind this seemingly simple step lies much of the subtlety of RAG system design.
First, it addresses a fundamental constraint: the input length limit of large language models. No matter how advanced the model, it is bound by a token budget. Chunking lets us handle documents of arbitrary length gracefully, much like dividing a thick book into digestible chapters.
But chunking alone is far from enough. Imagine seeing a single page of a book with no surrounding context: could you fully understand that page? That is why preprocessing matters.




Contents
01 Processing Workflow
02 Comparison with the Traditional Approach
03 Advantages of This Approach
04 Core Code Implementation


01



The Contextual Retrieval Preprocessing Workflow



1. Knowledge-base processing:

  • Fetch documents from the knowledge base (corpus)

  • Split each document into multiple chunks (Chunk 1, Chunk 2, ... Chunk X)

2. Context-generation stage (a minimal API sketch follows this list):

  • Each chunk is processed with a purpose-built prompt

  • The prompt's goal is to determine the chunk's position in, or relationship to, the overall document structure

  • Claude generates 50-100 tokens of relevant background information per chunk

  • The generated background information is prepended to the corresponding chunk

3. Vectorization stage:

  • Combine context and chunk (Context 1 + Chunk 1, Context 2 + Chunk 2, etc.)

  • Process the combined text in two ways: an embedding model produces vector embeddings, and TF-IDF processing produces TF-IDF vectors

4. Retrieval-preparation stage:

  • Store the vector embeddings in a vector database for similarity search

  • Store the TF-IDF vectors in a TF-IDF index

  • The end result is full-text search comparable to Elasticsearch keyword retrieval (provided by BM25)
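
As a concrete illustration of the context-generation stage, here is a minimal sketch using the official anthropic Python SDK. This is not the article's code: the model name is an assumption (substitute whichever Claude model you use), and situate_chunk is a hypothetical helper; the prompt mirrors the template shown in section 04.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CONTEXT_PROMPT = """<document>
{whole_document}
</document>
Here is the chunk we want to situate within the whole document
<chunk>
{chunk_content}
</chunk>
Please give a short succinct context to situate this chunk within the overall document for the purposes of improving search retrieval of the chunk. Answer only with the succinct context and nothing else."""

def situate_chunk(whole_document: str, chunk_content: str) -> str:
    # Ask Claude for a short (roughly 50-100 token) context description
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model name
        max_tokens=120,
        messages=[{
            "role": "user",
            "content": CONTEXT_PROMPT.format(whole_document=whole_document,
                                             chunk_content=chunk_content),
        }],
    )
    context = response.content[0].text.strip()
    # Prepend the generated context to the chunk before indexing
    return f"{context}\n{chunk_content}"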



02


Comparison with the Traditional Approach


The distinguishing feature of this workflow is that it combines semantic understanding (context generated by Claude) with traditional retrieval methods (TF-IDF), providing more comprehensive retrieval capability. At query time, relevant content can be found via vector similarity search, and this preprocessing improves both the accuracy and the relevance of retrieval.

By generating a 50-100 token context description for every chunk, we attach a navigation label to each "page of the book", helping the system understand where that passage sits in the whole and what it means.

Better still, the design is driven by a "dual engine":

• The vector database handles semantic similarity, like understanding the relationship between "apple" and "fruit"

• TF-IDF processing captures keywords precisely, like pinpointing a proper noun in a book (a short BM25 sketch follows below)

This preprocessing not only improves retrieval accuracy but also substantially improves response latency; doing the homework ahead of time beats cramming at query time.
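
To make the keyword engine concrete, here is a tiny sketch of BM25 scoring over context-augmented chunks, assuming the third-party rank_bm25 package (the corpus strings are made-up toy data; section 04's code uses scikit-learn TF-IDF instead):

from rank_bm25 import BM25Okapi

# Toy corpus of context-augmented chunks, tokenized naively on whitespace
corpus = [
    "This chunk is from the ACME Q3 financial report. Revenue grew 3 percent.",
    "This chunk is from the product roadmap. The team is focused on retrieval quality.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# An exact term such as "acme" is matched precisely, with no semantic model involved
scores = bm25.get_scores("acme revenue".split())
print(scores)  # the first chunk scores highest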


03


Advantages of This Approach


This chunking-and-preprocessing design has several important advantages:

1. It works around long-text limits

• Most models have an input length (token) limit

• Chunking makes documents of arbitrary length processable

• Each chunk can be kept within a size the model handles efficiently

2. It improves retrieval accuracy

• Naive chunking can lose contextual information

• Attaching a context description to each chunk preserves document structure and semantic continuity

• The generated context helps establish where the chunk sits in the document and what role it plays

3. It optimizes retrieval efficiency

• Precomputed vectors and TF-IDF significantly speed up retrieval

• It avoids the computational overhead of processing everything at query time

• Dual indexing (vector database + TF-IDF) offers multiple retrieval paths

4. It balances semantic understanding and keyword matching

• Vector retrieval excels at capturing semantic similarity

• TF-IDF excels at exact keyword matching

• The two methods are complementary and improve overall retrieval performance (a rank-fusion sketch follows this list)

5. It raises retrieval quality

• Chunks that carry context answer questions better than isolated text fragments

• It reduces the information fragmentation caused by chunking

• It makes it easier to land on genuinely relevant content

6. Flexibility and extensibility

• Chunk size can be tuned to the workload

• Preprocessing steps can be added or removed per application scenario

• The indexing scheme can be chosen to fit actual needs
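
One standard way to realize the complementarity in point 4 is reciprocal rank fusion (RRF). The sketch below is generic and not from the article; the constant k=60 is the conventional default from the RRF literature.

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several best-first ranked lists of chunk ids into one list.

    Each id's fused score is the sum over lists of 1 / (k + rank);
    k dampens the influence of any single ranking.
    """
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a semantic (embedding) ranking with a keyword (BM25) ranking:
print(reciprocal_rank_fusion([[2, 0, 1], [2, 3, 0]]))  # -> [2, 0, 3, 1]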



04


Core Code Implementation


import numpy as np
from typing import List, Dict, Tuple
from dataclasses import dataclass
from transformers import AutoTokenizer, AutoModel
from sklearn.feature_extraction.text import TfidfVectorizer
import torch
import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')


@dataclass
class Document:
    content: str
    metadata: Dict = None


@dataclass
class Chunk:
    content: str
    context: str = ""
    metadata: Dict = None


class DocumentPreprocessor:
    def __init__(self,
                 embedding_model_name: str = "sentence-transformers/all-MiniLM-L6-v2",
                 max_chunk_size: int = 512,
                 overlap_size: int = 50):
        self.tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
        self.model = AutoModel.from_pretrained(embedding_model_name)
        self.max_chunk_size = max_chunk_size
        self.overlap_size = overlap_size
        self.tfidf_vectorizer = TfidfVectorizer()

    def chunk_document(self, document: Document) -> List[Chunk]:
        """Split document into overlapping chunks based on sentence boundaries"""
        sentences = sent_tokenize(document.content)
        chunks = []
        current_chunk = []
        current_length = 0
        for sentence in sentences:
            sentence_tokens = self.tokenizer.tokenize(sentence)
            sentence_length = len(sentence_tokens)
            if current_length + sentence_length > self.max_chunk_size and current_chunk:
                # Create new chunk from accumulated sentences
                chunk_text = " ".join(current_chunk)
                chunks.append(Chunk(content=chunk_text, metadata=document.metadata))
                # Keep overlap for context continuity
                overlap_sentences = current_chunk[-2:] if len(current_chunk) > 2 else current_chunk
                current_chunk = overlap_sentences + [sentence]
                current_length = sum(len(self.tokenizer.tokenize(s)) for s in current_chunk)
            else:
                current_chunk.append(sentence)
                current_length += sentence_length
        # Add final chunk if there's remaining content
        if current_chunk:
            chunks.append(Chunk(content=" ".join(current_chunk), metadata=document.metadata))
        return chunks

    def generate_context(self, chunk: Chunk, document: Document) -> str:
        """Generate context description for a chunk using a template prompt"""
        prompt_template = """<document>
{{WHOLE_DOCUMENT}}
</document>
Here is the chunk we want to situate within the whole document
<chunk>
{{CHUNK_CONTENT}}
</chunk>
Please give a short succinct context to situate this chunk within the overall document for the purposes of improving search retrieval of the chunk. Answer only with the succinct context and nothing else."""
        # In practice, you would use Claude or another LLM here.
        # This is a simplified example that creates a basic context.
        chunk_position = document.content.find(chunk.content)
        total_length = len(document.content)
        position_description = "beginning" if chunk_position < total_length / 3 else \
            "middle" if chunk_position < 2 * total_length / 3 else "end"
        return (f"This section appears in the {position_description} of the document "
                f"and discusses {chunk.content[:50]}...")

    def compute_embeddings(self, chunks: List[Chunk]) -> np.ndarray:
        """Generate embeddings for chunks using the embedding model"""
        embeddings = []
        for chunk in chunks:
            # Combine context and content for embedding
            full_text = f"{chunk.context} {chunk.content}"
            inputs = self.tokenizer(full_text, return_tensors="pt",
                                    truncation=True, max_length=self.max_chunk_size)
            with torch.no_grad():
                outputs = self.model(**inputs)
            # Use CLS token embedding as chunk embedding
            embedding = outputs.last_hidden_state[0][0].numpy()
            embeddings.append(embedding)
        return np.array(embeddings)

    def compute_tfidf(self, chunks: List[Chunk]) -> Tuple[np.ndarray, TfidfVectorizer]:
        """Compute TF-IDF vectors for chunks"""
        texts = [f"{chunk.context} {chunk.content}" for chunk in chunks]
        tfidf_matrix = self.tfidf_vectorizer.fit_transform(texts)
        return tfidf_matrix, self.tfidf_vectorizer

    def process_document(self, document: Document) -> Tuple[List[Chunk], np.ndarray, np.ndarray]:
        """Complete preprocessing pipeline for a document"""
        # 1. Split document into chunks
        chunks = self.chunk_document(document)
        # 2. Generate context for each chunk
        for chunk in chunks:
            chunk.context = self.generate_context(chunk, document)
        # 3. Compute embeddings
        embeddings = self.compute_embeddings(chunks)
        # 4. Compute TF-IDF vectors
        tfidf_matrix, _ = self.compute_tfidf(chunks)
        return chunks, embeddings, tfidf_matrix


# Example usage
if __name__ == "__main__":
    # Sample document
    doc = Document(
        content="""Machine learning is a subset of artificial intelligence that focuses on developing systems that can learn from data. Deep learning is a subset of machine learning that uses neural networks with multiple layers. These neural networks are designed to automatically learn representations of data with multiple levels of abstraction.
Modern deep learning has achieved remarkable success in many fields, including computer vision, natural language processing, and robotics. The key to this success has been the availability of large datasets and powerful computing resources.""",
        metadata={"title": "Introduction to Deep Learning", "author": "AI Researcher"})

    # Initialize preprocessor
    preprocessor = DocumentPreprocessor()

    # Process document
    chunks, embeddings, tfidf_matrix = preprocessor.process_document(doc)

    # Print results
    print(f"Number of chunks: {len(chunks)}")
    print(f"Embedding shape: {embeddings.shape}")
    print(f"TF-IDF matrix shape: {tfidf_matrix.shape}")

    # Print first chunk with its context
    print("\nFirst chunk:")
    print(f"Context: {chunks[0].context}")
    print(f"Content: {chunks[0].content}")
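
Finally, here is a minimal sketch of what query time could look like on top of these artifacts. dual_engine_search is a hypothetical helper, not part of the article's pipeline; it assumes the classes, imports, and variables defined above, and a production system would query a vector database and a BM25 index rather than the raw in-memory matrices.

from sklearn.metrics.pairwise import cosine_similarity

def dual_engine_search(query, preprocessor, chunks, embeddings, tfidf_matrix, top_k=3):
    # Engine 1: semantic similarity against the precomputed chunk embeddings
    query_embedding = preprocessor.compute_embeddings([Chunk(content=query)])
    semantic_scores = cosine_similarity(query_embedding, embeddings)[0]

    # Engine 2: keyword similarity against the precomputed TF-IDF matrix
    query_tfidf = preprocessor.tfidf_vectorizer.transform([query])
    keyword_scores = cosine_similarity(query_tfidf, tfidf_matrix)[0]

    # Naive equal-weight fusion of the two score vectors
    combined = 0.5 * semantic_scores + 0.5 * keyword_scores
    top_indices = np.argsort(combined)[::-1][:top_k]
    return [chunks[i] for i in top_indices]

# e.g., after running process_document above:
# results = dual_engine_search("What is deep learning?", preprocessor,
#                              chunks, embeddings, tfidf_matrix)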



