01
Why GraphRAG?
1.1 What problem does GraphRAG solve?
1. Complex Information Traversal: It excels at connecting different pieces of information to provide new, synthesized insights.
2. Holistic Understanding: It performs better at understanding and summarizing large data collections, offering a more comprehensive grasp of the information.
1.2 Can't we just use an LLM with a very large context window for summarization?
The challenge remains, however, for query-focused abstractive summarization over an entire corpus. Such volumes of text can greatly exceed the limits of LLM context windows, and the expansion of such windows may not be enough given that information can be “lost in the middle” of longer contexts (Kuratov et al., 2024; Liu et al., 2023).
1.3 How does it differ from RAPTOR?
02
Introduction to GraphRAG
As mentioned above, GraphRAG is similar to RAPTOR: documents are preprocessed ahead of time with hierarchical clustering and summarization, and at query time the pre-built data is placed into the LLM context for inference. GraphRAG can be divided into two parts: Indexing and Query.
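For reference, a minimal driver for the two phases, assuming the CLI entry points as documented around the time of writing (they may differ in later releases):

import subprocess

ROOT = "./ragtest"
# Indexing: build the knowledge graph and community summaries offline.
subprocess.run(["python", "-m", "graphrag.index", "--root", ROOT], check=True)
# Query: answer against the prebuilt index; --method can be "global" or "local".
subprocess.run(
    ["python", "-m", "graphrag.query", "--root", ROOT,
     "--method", "global", "What are the top themes in this dataset?"],
    check=True,
)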
2.1 Indexing
2.2 Query
"""Local search system prompts."""
LOCAL_SEARCH_SYSTEM_PROMPT = """
---Role---
You are a helpful assistant responding to questions about data in the tables provided.
---Goal---
Generate a response of the target length and format that responds to the user's question, summarizing all information in the input data tables appropriate for the response length and format, and incorporating any relevant general knowledge.
If you don't know the answer, just say so. Do not make anything up.
Points supported by data should list their data references as follows:
"This is an example sentence supported by multiple data references [Data: <dataset name> (record ids); <dataset name> (record ids)]."
Do not list more than 5 record ids in a single reference. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Sources (15, 16), Reports (1), Entities (5, 7); Relationships (23); Claims (2, 7, 34, 46, 64, +more)]."
where 15, 16, 1, 5, 7, 23, 2, 7, 34, 46, and 64 represent the id (not the index) of the relevant data record.
Do not include information where the supporting evidence for it is not provided.
---Target response length and format---
{response_type}
---Data tables---
{context_data}
---Goal---
Generate a response of the target length and format that responds to the user's question, summarizing all information in the input data tables appropriate for the response length and format, and incorporating any relevant general knowledge.
If you don't know the answer, just say so. Do not make anything up.
Points supported by data should list their data references as follows:
"This is an example sentence supported by multiple data references [Data: <dataset name> (record ids); <dataset name> (record ids)]."
Do not list more than 5 record ids in a single reference. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Sources (15, 16), Reports (1), Entities (5, 7); Relationships (23); Claims (2, 7, 34, 46, 64, +more)]."
where 15, 16, 1, 5, 7, 23, 2, 7, 34, 46, and 64 represent the id (not the index) of the relevant data record.
Do not include information where the supporting evidence for it is not provided.
---Target response length and format---
{response_type}
Add sections and commentary to the response as appropriate for the length and format. Style the response in markdown.
"""
"""System prompts for global search."""
MAP_SYSTEM_PROMPT = """
---Role---
You are a helpful assistant responding to questions about data in the tables provided.
---Goal---
Generate a response consisting of a list of key points that responds to the user's question, summarizing all relevant information in the input data tables.
You should use the data provided in the data tables below as the primary context for generating the response.
If you don't know the answer or if the input data tables do not contain sufficient information to provide an answer, just say so. Do not make anything up.
Each key point in the response should have the following element:
- Description: A comprehensive description of the point.
- Importance Score: An integer score between 0-100 that indicates how important the point is in answering the user's question. An 'I don't know' type of response should have a score of 0.
The response should be JSON formatted as follows:
{{
"points": [
{{"description": "Description of point 1 [Data: Reports (report ids)]", "score": score_value}},
{{"description": "Description of point 2 [Data: Reports (report ids)]", "score": score_value}}
]
}}
The response shall preserve the original meaning and use of modal verbs such as "shall", "may" or "will".
Points supported by data should list the relevant reports as references as follows:
"This is an example sentence supported by data references [Data: Reports (report ids)]"
**Do not list more than 5 record ids in a single reference**. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Reports (2, 7, 64, 46, 34, +more)]. He is also CEO of company X [Data: Reports (1, 3)]"
where 1, 2, 3, 7, 34, 46, and 64 represent the id (not the index) of the relevant data report in the provided tables.
Do not include information where the supporting evidence for it is not provided.
---Data tables---
{context_data}
---Goal---
Generate a response consisting of a list of key points that responds to the user's question, summarizing all relevant information in the input data tables.
You should use the data provided in the data tables below as the primary context for generating the response.
If you don't know the answer or if the input data tables do not contain sufficient information to provide an answer, just say so. Do not make anything up.
Each key point in the response should have the following element:
- Description: A comprehensive description of the point.
- Importance Score: An integer score between 0-100 that indicates how important the point is in answering the user's question. An 'I don't know' type of response should have a score of 0.
The response shall preserve the original meaning and use of modal verbs such as "shall", "may" or "will".
Points supported by data should list the relevant reports as references as follows:
"This is an example sentence supported by data references [Data: Reports (report ids)]"
**Do not list more than 5 record ids in a single reference**. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Reports (2, 7, 64, 46, 34, +more)]. He is also CEO of company X [Data: Reports (1, 3)]"
where 1, 2, 3, 7, 34, 46, and 64 represent the id (not the index) of the relevant data report in the provided tables.
Do not include information where the supporting evidence for it is not provided.
The response should be JSON formatted as follows:
{{
"points": [
{{"description": "Description of point 1 [Data: Reports (report ids)]", "score": score_value}},
{{"description": "Description of point 2 [Data: Reports (report ids)]", "score": score_value}}
]
}}
"""
"""Global Search system prompts."""
REDUCE_SYSTEM_PROMPT = """
---Role---
You are a helpful assistant responding to questions about a dataset by synthesizing perspectives from multiple analysts.
---Goal---
Generate a response of the target length and format that responds to the user's question, summarize all the reports from multiple analysts who focused on different parts of the dataset.
Note that the analysts' reports provided below are ranked in the **descending order of importance**.
If you don't know the answer or if the provided reports do not contain sufficient information to provide an answer, just say so. Do not make anything up.
The final response should remove all irrelevant information from the analysts' reports and merge the cleaned information into a comprehensive answer that provides explanations of all the key points and implications appropriate for the response length and format.
Add sections and commentary to the response as appropriate for the length and format. Style the response in markdown.
The response shall preserve the original meaning and use of modal verbs such as "shall", "may" or "will".
The response should also preserve all the data references previously included in the analysts' reports, but do not mention the roles of multiple analysts in the analysis process.
**Do not list more than 5 record ids in a single reference**. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Reports (2, 7, 34, 46, 64, +more)]. He is also CEO of company X [Data: Reports (1, 3)]"
where 1, 2, 3, 7, 34, 46, and 64 represent the id (not the index) of the relevant data record.
Do not include information where the supporting evidence for it is not provided.
---Target response length and format---
{response_type}
---Analyst Reports---
{report_data}
---Goal---
Generate a response of the target length and format that responds to the user's question, summarize all the reports from multiple analysts who focused on different parts of the dataset.
Note that the analysts' reports provided below are ranked in the **descending order of importance**.
If you don't know the answer or if the provided reports do not contain sufficient information to provide an answer, just say so. Do not make anything up.
The final response should remove all irrelevant information from the analysts' reports and merge the cleaned information into a comprehensive answer that provides explanations of all the key points and implications appropriate for the response length and format.
The response shall preserve the original meaning and use of modal verbs such as "shall", "may" or "will".
The response should also preserve all the data references previously included in the analysts' reports, but do not mention the roles of multiple analysts in the analysis process.
**Do not list more than 5 record ids in a single reference**. Instead, list the top 5 most relevant record ids and add "+more" to indicate that there are more.
For example:
"Person X is the owner of Company Y and subject to many allegations of wrongdoing [Data: Reports (2, 7, 34, 46, 64, +more)]. He is also CEO of company X [Data: Reports (1, 3)]"
where 1, 2, 3, 7, 34, 46, and 64 represent the id (not the index) of the relevant data record.
Do not include information where the supporting evidence for it is not provided.
---Target response length and format---
{response_type}
Add sections and commentary to the response as appropriate for the length and format. Style the response in markdown.
"""
NO_DATA_ANSWER = (
    "I am sorry but I am unable to answer this question given the provided data."
)
GENERAL_KNOWLEDGE_INSTRUCTION = """
The response may also include relevant real-world knowledge outside the dataset, but it must be explicitly annotated with a verification tag [LLM: verify]. For example:
"This is an example sentence supported by real-world knowledge [LLM: verify]."
"""
2.3 Question Generation
"""Question Generation system prompts."""
QUESTION_SYSTEM_PROMPT = """
---Role---
You are a helpful assistant generating a bulleted list of {question_count} questions about data in the tables provided.
---Data tables---
{context_data}
---Goal---
Given a series of example questions provided by the user, generate a bulleted list of {question_count} candidates for the next question. Use - marks as bullet points.
These candidate questions should represent the most important or urgent information content or themes in the data tables.
The candidate questions should be answerable using the data tables provided, but should not mention any specific data fields or data tables in the question text.
If the user's questions reference several named entities, then each candidate question should reference all named entities.
---Example questions---
"""
2.4 Recap
03
Some Thoughts and Opinions
3.1 Correctness over response time
I touched on this point earlier when introducing Agentic Workflow, and studying GraphRAG made me think it through again. Both GraphRAG's Indexing construction and its Query process can be understood as a kind of workflow.
When reading the Global Query section of the official documentation, my first reaction to the Map-Reduce approach was "slow" (Map-Reduce is not inherently slow; my habit of doing relatively slow offline analysis with QDPS had simply biased my intuition). Long latency is not necessarily a problem: as with offline data analysis in QDPS, it is a matter of balancing accuracy, cost, and latency.
"Correctness over response time" should be the design philosophy of some LLM products, yet few LLM products are actually built around it. Many product designs focus too heavily on response time and neglect another important dimension of user experience: accuracy.
Putting "correctness over response time" into practice, however, runs into several challenges:
Acceptance by the product design team: response time may stretch from seconds to minutes or even hours. When every other product chases second-level responses, shipping one with minute-level latency is undeniably a hard sell for product designers.
High cost: if a task requires the LLM to reason through multiple stages and iterations, it consumes substantial compute; a single task can cost tens of RMB, creating real cost pressure.
Preserving user experience: as response time grows, how do we keep the experience acceptable? Do we offer users a choice (slower but more accurate, or faster but lower quality), or change the interaction model entirely and move to offline processing?
"Correctness over response time" is not merely a technical trade-off. As LLM applications spread, it will become the design philosophy of more and more products. Once users see that these "slow" products return better, more accurate results, they will gradually accept "correctness over response time" as a legitimate product design.
3.2 Graphs can be used for query rewriting
In many dialogue applications, the user's query is rewritten to get better results. One kind of rewriting is "expansion": a question like "Tell me about Hangzhou" is first split into several smaller questions, such as "Hangzhou's geography", "Hangzhou's economy", and "Hangzhou's history and culture". Each sub-question is embedded separately and the retrieved information is fed into the LLM for inference, instead of matching directly on the embedding of "Tell me about Hangzhou", which may recall nothing or recall incomplete information.
Splitting one big question into several smaller ones, recalling material for each, and then summarizing also makes RAG more comprehensive on summarization-style questions. And when decomposing the question, the entity relationships in a knowledge graph can be put to use, as sketched below.
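A hypothetical sketch of this expansion-style rewriting (embed() and vector_search() are placeholder helpers, not a specific library's API):

def decompose(query: str) -> list[str]:
    # Placeholder: an LLM (or a lookup of related graph entities) would
    # generate these sub-questions from the original query.
    return [
        "Hangzhou's geography",
        "Hangzhou's economy",
        "Hangzhou's history and culture",
    ]

def expanded_retrieve(query: str, top_k: int = 5) -> list[str]:
    chunks: list[str] = []
    for q in decompose(query):
        chunks.extend(vector_search(embed(q), top_k=top_k))  # one recall per sub-question
    return list(dict.fromkeys(chunks))  # dedupe while preserving order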
3.3 Graphs are the best data structure for a QA library
Before LLMs, chatbots maintained large numbers of mappings from "intent" to "canned answer". Even now, many LLM-based chatbots maintain a sizable QA library (question-and-standard-answer text pairs) to guard against hallucination and improve performance: when the user's intent or semantics can be matched, an answer is returned immediately.
For such a manually maintained QA library, a graph is the best data structure for the relationships between QA pairs. QA pairs generally revolve around a few topics (entities), at multiple levels and from multiple angles. With LLM capabilities in play, whether for recommending related questions, expanding the user's query, or answering summarization-style questions, a graph structure makes it easy to pull in more related context and thus get better results from LLM inference; see the sketch below.
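An illustrative sketch (assumed schema, not a production design): QA pairs and the entities they cover are both nodes, and an edge links each question to the entities it touches, so related questions are reachable through shared entities.

import networkx as nx

G = nx.Graph()
G.add_node("Q1", kind="qa", question="How do I reset my password?", answer="...")
G.add_node("Q2", kind="qa", question="Why didn't my reset email arrive?", answer="...")
G.add_node("password", kind="entity")
G.add_edges_from([("Q1", "password"), ("Q2", "password")])

def related_questions(qa_id: str) -> list[str]:
    """Other QA nodes that share at least one entity with qa_id -- useful for
    recommending related questions or pulling extra context for the LLM."""
    related = set()
    for entity in G.neighbors(qa_id):
        related.update(n for n in G.neighbors(entity) if n != qa_id)
    return sorted(related)

print(related_questions("Q1"))  # -> ['Q2']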
3.4 Where GraphRAG fits
Trend analysis over large volumes of text is an excellent fit for GraphRAG. Building the knowledge graph bottom-up makes it easy to spot trending topics and newly emerging themes, and community clustering also yields systematic trend descriptions and analysis.
Knowledge in specialized domains tends to be systematic and multi-layered. Answering domain questions in depth requires not only a high-quality corpus but also a representation of the relationships between texts, so that questions at different levels can be answered. Since domain knowledge is relatively bounded, the cost of building such a knowledge graph is manageable.
04
Summary