Following up on the previous article, "DSPy Visualizer: Visualizing the Prompt Optimization Process", which used examples and visualizations to observe how DSPy optimizes prompts, this article takes a deeper look at the seamless integration of DSPy and LangChain from three angles: 1. DSPy vs. LangChain; 2. Combining LangChain and DSPy; 3. A worked example: optimizing an LCEL chain with DSPy.

1. Overview
| Aspect | LangChain | DSPy |
|---|---|---|
| Core focus | Provides a large set of building blocks that simplify developing applications combining LLMs with user-specified data sources. | Automates and modularizes LLM interactions, eliminating manual prompt engineering and improving system reliability. |
| Approach | Modular components and chains composed with the LangChain Expression Language (LCEL). | Programs LLM interactions rather than prompting them, automatically optimizing prompts and weights. |
| Complex pipelines | Builds chains via LCEL, with support for async execution and integration with a wide range of data sources and APIs. | Simplifies multi-stage reasoning pipelines with modules and optimizers, and stays scalable by reducing manual intervention. |
| Optimization | Relies on the user's prompt engineering and on chaining multiple LLM calls. | Built-in optimizers automatically tune prompts and weights, improving the efficiency and quality of LLM pipelines. |
| Community & support | Large open-source community with rich documentation and many examples. | Newer framework with growing community support, bringing a new paradigm for LLM prompting. |
Data sources and APIs: LangChain supports a wide range of data sources and APIs, allowing seamless integration with different kinds of data, which makes it well suited to many AI applications.
Modular components: LangChain's modular components can be composed together, and the LangChain Expression Language (LCEL) makes it easy to build and manage workflows with a declarative syntax.
Complex reasoning tasks: for projects involving complex multi-stage reasoning, LangChain requires substantial manual prompt engineering, which is time-consuming and error-prone.
Scalability issues: managing and scaling workflows that require many LLM calls can be challenging.
DSPy automates prompt generation and optimization, dramatically reducing the need for manual prompt design; this makes large language models (LLMs) easier to work with and helps build scalable AI workflows.
The framework ships with built-in optimizers such as BootstrapFewShot and MIPRO that automatically refine prompts and adapt them to a specific dataset.
DSPy uses general-purpose modules and optimizers to tame the complexity of prompt design, making it easier to build sophisticated multi-step reasoning applications without worrying about low-level LLM details.
DSPy supports many LLMs and can even mix multiple LLMs within the same program.
As a younger framework, DSPy has a smaller community than LangChain, which means fewer resources, examples, and community answers are available.
DSPy provides tutorials and guides, but its documentation is thinner than LangChain's, which can be a hurdle when getting started.
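To make the contrast concrete: a DSPy program declares a signature such as `question -> answer` and lets an optimizer choose the demos and instructions that get rendered into the final prompt. The following is a toy, dependency-free sketch of that rendering idea; it is an illustration only, not DSPy's actual internals, and `render_prompt` is a hypothetical helper:

```python
def render_prompt(signature: str, demos: list, **inputs) -> str:
    # A "signature" like "question -> answer" names one input and one output field.
    in_key, out_key = [s.strip() for s in signature.split("->")]
    lines = []
    # Bootstrapped demos become few-shot examples in the prompt.
    for demo in demos:
        lines.append(f"{in_key.capitalize()}: {demo[in_key]}")
        lines.append(f"{out_key.capitalize()}: {demo[out_key]}")
    # The live input is appended last, leaving the output for the LLM to fill in.
    lines.append(f"{in_key.capitalize()}: {inputs[in_key]}")
    lines.append(f"{out_key.capitalize()}:")
    return "\n".join(lines)

print(render_prompt(
    "question -> answer",
    [{"question": "Who wrote Hamlet?", "answer": "Shakespeare"}],
    question="Who wrote Macbeth?",
))
```

An optimizer's job, in this picture, is simply to pick which demos (and instructions) produce the best downstream metric.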
2. Combining LangChain and DSPy

DSPy ships an integration layer (`dspy.predict.langchain`) that lets a DSPy optimizer treat an LCEL chain as a DSPy program. `LangChainModule` wraps an LCEL chain and collects the `LangChainPredict` nodes inside it:

class LangChainModule(dspy.Module):
    def __init__(self, lcel):
        super().__init__()
        modules = []
        for name, node in lcel.get_graph().nodes.items():
            if isinstance(node.data, LangChainPredict):
                modules.append(node.data)
        self.modules = modules
        self.chain = lcel

    def forward(self, **kwargs):
        output_keys = ['output', self.modules[-1].output_field_key]
        output = self.chain.invoke(dict(**kwargs))
        try:
            output = output.content
        except Exception:
            pass
        return dspy.Prediction({k: output for k in output_keys})

    def invoke(self, d, *args, **kwargs):
        return self.forward(**d).output
The companion `LangChainPredict` class adapts a LangChain prompt/LLM pair into a DSPy `Predict` module (and, at the same time, a LangChain `Runnable`), building a DSPy signature from the prompt template:

class LangChainPredict(Predict, Runnable):  # , RunnableBinding):
    class Config:
        extra = Extra.allow  # Allow extra attributes that are not defined in the model

    def __init__(self, prompt, llm, **config):
        Runnable.__init__(self)
        Parameter.__init__(self)
        self.langchain_llm = ShallowCopyOnly(llm)
        try:
            langchain_template = '\n'.join([msg.prompt.template for msg in prompt.messages])
        except AttributeError:
            langchain_template = prompt.template
        self.stage = random.randbytes(8).hex()
        self.signature, self.output_field_key = self._build_signature(langchain_template)
        self.config = config
        self.reset()

    def reset(self):
        ...

    def dump_state(self):
        ...

    def load_state(self, state):
        ...

    def __call__(self, *arg, **kwargs):
        if len(arg) > 0:
            kwargs = {**arg[0], **kwargs}
        return self.forward(**kwargs)

    def _build_signature(self, template):
        gpt4T = dspy.OpenAI(model='gpt-4-1106-preview', max_tokens=4000, model_type='chat')
        with dspy.context(lm=gpt4T):
            parts = dspy.Predict(Template2Signature)(template=template)
        inputs = {k.strip(): OldInputField() for k in parts.input_keys.split(',')}
        outputs = {k.strip(): OldOutputField() for k in parts.output_key.split(',')}
        for k, v in inputs.items():
            v.finalize(k, infer_prefix(k))  # TODO: Generate from the template at dspy.Predict(Template2Signature)
        for k, v in outputs.items():
            output_field_key = k
            v.finalize(k, infer_prefix(k))
        return dsp.Template(parts.essential_instructions, **inputs, **outputs), output_field_key

    def forward(self, **kwargs):
        # Extract the three privileged keyword arguments.
        signature = kwargs.pop("signature", self.signature)
        demos = kwargs.pop("demos", self.demos)
        config = dict(**self.config, **kwargs.pop("config", {}))
        prompt = signature(dsp.Example(demos=demos, **kwargs))
        output = self.langchain_llm.invoke(prompt, **config)
        try:
            content = output.content
        except AttributeError:
            content = output
        pred = Prediction.from_completions([{self.output_field_key: content}], signature=signature)
        dspy.settings.langchain_history.append((prompt, pred))
        if dsp.settings.trace is not None:
            trace = dsp.settings.trace
            trace.append((self, {**kwargs}, pred))
        return output

    def invoke(self, d, *args, **kwargs):
        return self.forward(**d)
There are two ways to consume the optimized result:

1) Chain DSPy's LangChainPredict directly into the LCEL pipeline

trained_langchain_predict = LangChainPredict(prompt, llm)
# Optimization steps omitted here; assume this yields the optimized trained_langchain_predict...
chain = trained_langchain_predict | StrOutputParser()
2) Extract the optimized prompt and plug it into a LangChain prompt by hand

# Get the signature
signature = trained_langchain_predict.signature
# Get the demos
demos = trained_langchain_predict.demos
# Render the prompt string
prompt_content = signature(demos)
# Convert it into a prompt template
prompt = PromptTemplate.from_template(prompt_content)
# Create the LLM
llm = OllamaLLM(model="llama3", stream=False)
# Compose the LCEL chain
chain = prompt | llm
3. Worked example: optimizing an LCEL chain with DSPy

Dependencies:
- langchain
- langchain_ollama / langchain_openai
- dspy
- langchain_core
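Assuming current PyPI naming conventions (DSPy has historically been published as `dspy-ai`, and the LangChain partner packages use hyphenated names), the dependencies above can be installed with something like:

```shell
# Install the core frameworks plus the community retrievers and the
# HuggingFace datasets package used later in this example.
pip install -U dspy-ai langchain langchain-core langchain-community \
    langchain-ollama langchain-openai datasets
```

Check the exact distribution names against PyPI for your DSPy/LangChain versions before pinning them.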
1) Module initialization:

import dspy
from langchain.cache import SQLiteCache
from langchain.globals import set_llm_cache
from langchain_core.prompts import PromptTemplate
from langchain_ollama import OllamaLLM
from langchain_community.retrievers import WikipediaRetriever
set_llm_cache(SQLiteCache(database_path="cache.db"))
llm = OllamaLLM(model="llama3", stream=False)
retriever = WikipediaRetriever(load_max_docs=1)
prompt = PromptTemplate.from_template(
"Given {context}, answer the question `{question}` as a tweet."
)
def retrieve(inputs):
return [doc.page_content[:1024] for doc in retriever.get_relevant_documents(query=inputs["question"])]
def retrieve_eval(inputs):
return [{"text": doc.page_content[:1024]} for doc in retriever.get_relevant_documents(query=inputs["question"])]
question = "where was MS Dhoni born?"
2) Create a LangChainModule instance

from dspy.predict.langchain import LangChainModule, LangChainPredict
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

zeroshot_chain = LangChainModule(
    RunnablePassthrough.assign(context=retrieve)
    | LangChainPredict(prompt, llm)
    | StrOutputParser()
)
3) Load a dataset and split it into training, validation, and test sets.
from dspy.primitives.example import Example
from datasets import load_dataset
dataset = load_dataset('hotpot_qa', 'fullwiki')
trainset = [
    Example(dataset['train'][i]).without("id", "type", "level", "supporting_facts", "context").with_inputs("question")
    for i in range(0, 50)
]
valset = [
    Example(dataset['validation'][i]).without("id", "type", "level", "supporting_facts", "context").with_inputs("question")
    for i in range(0, 10)
]
testset = [
    Example(dataset['validation'][i]).without("id", "type", "level", "supporting_facts", "context").with_inputs("question")
    for i in range(10, 20)
]
"""
trainset[0]
Example({'question': "Which magazine was started first Arthur's Magazine or First for Women?", 'answer': "Arthur's Magazine"}) (input_keys={'question'})
"""
4) Create the metric

This example builds an LLM-as-judge metric that scores each tweet along three dimensions: correct, engaging, and faithful.
class Assess(dspy.Signature):
    """Assess the quality of a tweet along the specified dimension."""
    context = dspy.InputField(desc="ignore if N/A")
    assessed_text = dspy.InputField()
    assessment_question = dspy.InputField()
    assessment_answer = dspy.OutputField(desc="Yes or No")

optimiser_model = dspy.OpenAI(model="gpt-4-turbo", max_tokens=1000, model_type="chat")
METRIC = None

def metric(gold, pred, trace=None):
    question, answer, tweet = gold.question, gold.answer, pred.output
    context = retrieve_eval({'question': question})

    engaging = "Does the assessed text make for a self-contained, engaging tweet?"
    faithful = "Is the assessed text grounded in the context? Say no if it includes significant facts not in the context."
    correct = f"The text above should answer `{question}`. The gold answer is `{answer}`. Does the assessed text above contain the gold answer?"

    with dspy.context(lm=optimiser_model):
        faithful = dspy.Predict(Assess)(
            context=context, assessed_text=tweet, assessment_question=faithful
        )
        correct = dspy.Predict(Assess)(
            context="N/A", assessed_text=tweet, assessment_question=correct
        )
        engaging = dspy.Predict(Assess)(
            context="N/A", assessed_text=tweet, assessment_question=engaging
        )

    correct, engaging, faithful = [
        m.assessment_answer.split()[0].lower() == "yes"
        for m in [correct, engaging, faithful]
    ]
    score = (correct + engaging + faithful) if correct and (len(tweet) <= 280) else 0

    if METRIC is not None:
        if METRIC == "correct":
            return correct
        if METRIC == "engaging":
            return engaging
        if METRIC == "faithful":
            return faithful

    if trace is not None:
        return score >= 3
    return score / 3.0
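The scoring rule at the bottom of `metric` can be restated as standalone Python: correctness and the 280-character tweet limit act as hard gates, after which each passing dimension contributes a third of the score (`aggregate` below is an illustrative restatement, not part of the metric code above):

```python
def aggregate(correct: bool, engaging: bool, faithful: bool, tweet: str) -> float:
    # An incorrect answer or an over-length tweet scores zero outright.
    if not correct or len(tweet) > 280:
        return 0.0
    # Otherwise each passing dimension is worth 1/3 of the final score.
    return (correct + engaging + faithful) / 3.0

print(aggregate(True, True, False, "MS Dhoni was born in Ranchi."))  # 2/3 ≈ 0.667
```

During compilation (`trace is not None`) the original code instead returns the boolean `score >= 3`, i.e. only tweets that pass all three checks are bootstrapped as demos.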
5) Optimize the program
from dspy.teleprompt import BootstrapFewShotWithRandomSearch
optimizer = BootstrapFewShotWithRandomSearch(
    metric=metric, max_bootstrapped_demos=3, num_candidate_programs=3
)
optimized_chain = optimizer.compile(zeroshot_chain, trainset=trainset, valset=valset)
6) Evaluate the optimized chain

from dspy.evaluate.evaluate import Evaluate

evaluate = Evaluate(
    metric=metric, devset=testset, num_threads=8, display_progress=True, display_table=5
)
evaluate(optimized_chain)
# Average Metric: 3.0 / 10 (30.0): 100%|██████████| 10/10 [00:22<00:00, 2.22s/it]