Following up on the previous article, "DSPy Visualizer: Visualizing the Prompt Optimization Process", which used examples and visualizations to observe how DSPy optimizes prompts, this article takes a deeper look at the seamless integration of DSPy and LangChain from three angles: 1. DSPy vs. LangChain; 2. combining LangChain and DSPy; 3. a practical example of optimizing an LCEL chain with DSPy.
1. Overview
| Aspect | LangChain | DSPy |
|---|---|---|
| Core focus | Provides a large set of building blocks that simplify developing applications combining LLMs with user-specified data sources. | Automates and modularizes LLM interactions, eliminating manual prompt engineering and improving system reliability. |
| Approach | Uses modular components and chains composed with the LangChain Expression Language (LCEL). | Streamlines LLM interaction through programming rather than prompting, automatically optimizing prompts and weights. |
| Complex pipelines | Builds chains via LCEL, with support for async execution and integration with a variety of data sources and APIs. | Uses modules and optimizers to simplify multi-stage reasoning pipelines, keeping them scalable by reducing manual intervention. |
| Optimization | Relies on the user's prompt engineering and on chaining multiple LLM calls. | Built-in optimizers automatically tune prompts and weights, improving the efficiency and effectiveness of LLM pipelines. |
| Community & support | Large open-source community, rich documentation, many examples. | Emerging framework with growing community support, bringing a new paradigm for LLM prompting. |
LangChain strengths:

- Data sources and APIs: LangChain supports a wide range of data sources and APIs, enabling seamless integration with different kinds of data, which suits a broad variety of AI applications.
- Modular components: LangChain's modular components can be composed together, and the LangChain Expression Language (LCEL) makes it easy to build and manage workflows with a declarative syntax (see the short sketch after this list).

LangChain limitations:

- Complex reasoning tasks: projects involving complex multi-stage reasoning require substantial manual prompt engineering, which is time-consuming and error-prone.
- Scalability: managing and scaling workflows that need many LLM calls can be quite challenging.
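A minimal LCEL sketch, for illustration only (not from the original article); it assumes a local Ollama server with the `llama3` model pulled, the same model used later in this post:

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import OllamaLLM

# Declarative composition: prompt -> model -> output parser.
prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
llm = OllamaLLM(model="llama3")  # assumes a local Ollama server with llama3 pulled
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LCEL lets you pipe components together declaratively."}))
```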
DSPy strengths:

- DSPy automates prompt generation and optimization, greatly reducing the need for manual prompt design; this makes large language models (LLMs) easier to work with and helps build scalable AI workflows.
- The framework ships with built-in optimizers, such as BootstrapFewShot and MIPRO, that automatically refine prompts and adapt them to a specific dataset (see the sketch after this list).
- DSPy's general-purpose modules and optimizers tame the complexity of prompt design, making it easier to create complex multi-step reasoning applications without handling the low-level details of driving LLMs.
- DSPy supports many LLMs and can flexibly use several LLMs within the same program.

DSPy limitations:

- As a newer framework, DSPy has a smaller community than LangChain, so fewer resources, examples, and community answers are available.
- Although DSPy provides tutorials and guides, its documentation is thinner than LangChain's, which can be a hurdle when getting started.
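To make the module-plus-optimizer idea concrete, here is a minimal DSPy sketch; the signature string, toy examples, and exact-match metric are illustrative assumptions, and it presumes a language model has been configured via `dspy.settings.configure(lm=...)`:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Declare the task as a signature instead of hand-writing a prompt.
qa = dspy.Predict("question -> answer")

# A toy labeled set; real use would supply more (and more varied) examples.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="What is the capital of France?", answer="Paris").with_inputs("question"),
]

# Exact-match metric the optimizer uses to decide which demonstrations to keep.
def exact_match(gold, pred, trace=None):
    return gold.answer.lower() == pred.answer.lower()

# BootstrapFewShot runs the program on the trainset and keeps traces that pass the metric.
optimizer = BootstrapFewShot(metric=exact_match)
optimized_qa = optimizer.compile(qa, trainset=trainset)  # assumes an LM was configured beforehand
```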
2. Combining LangChain and DSPy

DSPy bridges LCEL through two classes, `LangChainModule` and `LangChainPredict` (both importable from `dspy.predict.langchain`); their implementations are shown below:

```python
class LangChainModule(dspy.Module):
    def __init__(self, lcel):
        super().__init__()

        # Collect every LangChainPredict node from the LCEL graph.
        modules = []
        for name, node in lcel.get_graph().nodes.items():
            if isinstance(node.data, LangChainPredict):
                modules.append(node.data)

        self.modules = modules
        self.chain = lcel

    def forward(self, **kwargs):
        output_keys = ['output', self.modules[-1].output_field_key]
        output = self.chain.invoke(dict(**kwargs))

        try:
            output = output.content
        except Exception:
            pass

        return dspy.Prediction({k: output for k in output_keys})

    def invoke(self, d, *args, **kwargs):
        return self.forward(**d).output
```
```python
class LangChainPredict(Predict, Runnable):  # , RunnableBinding):
    class Config:
        extra = Extra.allow  # Allow extra attributes that are not defined in the model

    def __init__(self, prompt, llm, **config):
        Runnable.__init__(self)
        Parameter.__init__(self)
        self.langchain_llm = ShallowCopyOnly(llm)

        try:
            langchain_template = '\n'.join([msg.prompt.template for msg in prompt.messages])
        except AttributeError:
            langchain_template = prompt.template

        self.stage = random.randbytes(8).hex()
        self.signature, self.output_field_key = self._build_signature(langchain_template)
        self.config = config
        self.reset()

    def reset(self):
        ...

    def dump_state(self):
        ...

    def load_state(self, state):
        ...

    def __call__(self, *arg, **kwargs):
        if len(arg) > 0:
            kwargs = {**arg[0], **kwargs}
        return self.forward(**kwargs)

    def _build_signature(self, template):
        gpt4T = dspy.OpenAI(model='gpt-4-1106-preview', max_tokens=4000, model_type='chat')
        with dspy.context(lm=gpt4T):
            parts = dspy.Predict(Template2Signature)(template=template)

        inputs = {k.strip(): OldInputField() for k in parts.input_keys.split(',')}
        outputs = {k.strip(): OldOutputField() for k in parts.output_key.split(',')}

        for k, v in inputs.items():
            v.finalize(k, infer_prefix(k))  # TODO: Generate from the template at dspy.Predict(Template2Signature)

        for k, v in outputs.items():
            output_field_key = k
            v.finalize(k, infer_prefix(k))

        return dsp.Template(parts.essential_instructions, **inputs, **outputs), output_field_key

    def forward(self, **kwargs):
        # Extract the three privileged keyword arguments.
        signature = kwargs.pop("signature", self.signature)
        demos = kwargs.pop("demos", self.demos)
        config = dict(**self.config, **kwargs.pop("config", {}))

        prompt = signature(dsp.Example(demos=demos, **kwargs))
        output = self.langchain_llm.invoke(prompt, **config)

        try:
            content = output.content
        except AttributeError:
            content = output

        pred = Prediction.from_completions([{self.output_field_key: content}], signature=signature)

        dspy.settings.langchain_history.append((prompt, pred))

        if dsp.settings.trace is not None:
            trace = dsp.settings.trace
            trace.append((self, {**kwargs}, pred))

        return output

    def invoke(self, d, *args, **kwargs):
        return self.forward(**d)
```
With these two classes in place, there are two ways to feed DSPy-optimized prompts back into LangChain:

1) Chain DSPy's `LangChainPredict` class directly into the LCEL
```python
trained_langchain_predict = LangChainPredict(prompt, llm)
# The optimization step is omitted here; it produces the optimized trained_langchain_predict.
...
chain = trained_langchain_predict | StrOutputParser()
```
2) Extract the optimized prompt and add it manually to a LangChain prompt
```python
# Get the optimized signature
signature = trained_langchain_predict.signature
# Get the bootstrapped demos
demos = trained_langchain_predict.demos
# Render the prompt as a string
prompt_content = signature(demos)
# Convert it into a prompt template
prompt = PromptTemplate.from_template(prompt_content)
# Create the LLM
llm = OllamaLLM(model="llama3", stream=False)
# Wire everything into an LCEL chain
chain = prompt | llm
```
3. Practical Example: Optimizing LCEL with DSPy

The example below depends on the following packages:

- langchain
- langchain_ollama / langchain_openai
- dspy
- langchain_core
1) Module initialization:
```python
import dspy
from langchain.cache import SQLiteCache
from langchain.globals import set_llm_cache
from langchain_core.prompts import PromptTemplate  # import needed for the template below
from langchain_ollama import OllamaLLM
from langchain_community.retrievers import WikipediaRetriever

# Cache LLM calls in a local SQLite database to avoid repeated requests.
set_llm_cache(SQLiteCache(database_path="cache.db"))

llm = OllamaLLM(model="llama3", stream=False)
retriever = WikipediaRetriever(load_max_docs=1)
prompt = PromptTemplate.from_template(
    "Given {context}, answer the question `{question}` as a tweet."
)

def retrieve(inputs):
    # Fetch Wikipedia passages for the question, truncated to 1024 characters.
    return [doc.page_content[:1024]
            for doc in retriever.get_relevant_documents(query=inputs["question"])]

def retrieve_eval(inputs):
    # Same retrieval, wrapped in dicts for use inside the evaluation metric.
    return [{"text": doc.page_content[:1024]}
            for doc in retriever.get_relevant_documents(query=inputs["question"])]

question = "where was MS Dhoni born?"
```
2) Create a LangChainModule instance
```python
from langchain_core.runnables import RunnablePassthrough  # imports needed by the chain below
from langchain_core.output_parsers import StrOutputParser
from dspy.predict.langchain import LangChainModule, LangChainPredict

# Wrap the LCEL chain in a DSPy module so its prompt can be optimized.
zeroshot_chain = LangChainModule(
    RunnablePassthrough.assign(context=retrieve)
    | LangChainPredict(prompt, llm)
    | StrOutputParser()
)
```
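Before any optimization runs, the zero-shot chain can be invoked directly as a sanity check, reusing the `question` defined in step 1:

```python
# Ask the zero-shot chain the sample question; LangChainModule.invoke
# unpacks the dict and returns the chain's string output.
print(zeroshot_chain.invoke({"question": question}))
```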
3) Load a dataset and split it into training, validation, and test sets.
```python
from dspy.primitives.example import Example
from datasets import load_dataset

dataset = load_dataset('hotpot_qa', 'fullwiki')

trainset = [
    Example(dataset['train'][i])
    .without("id", "type", "level", "supporting_facts", "context")
    .with_inputs("question")
    for i in range(0, 50)
]
valset = [
    Example(dataset['validation'][i])
    .without("id", "type", "level", "supporting_facts", "context")
    .with_inputs("question")
    for i in range(0, 10)
]
testset = [
    Example(dataset['validation'][i])
    .without("id", "type", "level", "supporting_facts", "context")
    .with_inputs("question")
    for i in range(10, 20)
]

"""
trainset[0]
Example({'question': "Which magazine was started first Arthur's Magazine or First for Women?",
         'answer': "Arthur's Magazine"}) (input_keys={'question'})
"""
```
4) Create the metrics
This example builds an LLM-as-judge metric that scores each generated tweet along three dimensions: correct, engaging, and faithful.
```python
class Assess(dspy.Signature):
    """Assess the quality of a tweet along the specified dimension."""
    context = dspy.InputField(desc="ignore if N/A")
    assessed_text = dspy.InputField()
    assessment_question = dspy.InputField()
    assessment_answer = dspy.OutputField(desc="Yes or No")

# A stronger model acts as the judge.
optimiser_model = dspy.OpenAI(model="gpt-4-turbo", max_tokens=1000, model_type="chat")

METRIC = None

def metric(gold, pred, trace=None):
    question, answer, tweet = gold.question, gold.answer, pred.output
    context = retrieve_eval({'question': question})

    engaging = "Does the assessed text make for a self-contained, engaging tweet?"
    faithful = "Is the assessed text grounded in the context? Say no if it includes significant facts not in the context."
    correct = f"The text above should answer `{question}`. The gold answer is `{answer}`. Does the assessed text above contain the gold answer?"

    with dspy.context(lm=optimiser_model):
        faithful = dspy.Predict(Assess)(context=context, assessed_text=tweet, assessment_question=faithful)
        correct = dspy.Predict(Assess)(context="N/A", assessed_text=tweet, assessment_question=correct)
        engaging = dspy.Predict(Assess)(context="N/A", assessed_text=tweet, assessment_question=engaging)

    correct, engaging, faithful = [
        m.assessment_answer.split()[0].lower() == "yes"
        for m in [correct, engaging, faithful]
    ]

    # A tweet scores only if it is correct and fits in 280 characters.
    score = (correct + engaging + faithful) if correct and (len(tweet) <= 280) else 0

    if METRIC is not None:
        if METRIC == "correct":
            return correct
        if METRIC == "engaging":
            return engaging
        if METRIC == "faithful":
            return faithful

    # During compilation (trace is set) require a perfect score; otherwise return a fraction.
    if trace is not None:
        return score >= 3
    return score / 3.0
```
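As an illustrative smoke test (not part of the original walkthrough), the metric can be applied to a single training example using the zero-shot chain's prediction; the LLM judge makes the score non-deterministic:

```python
example = trainset[0]
pred = zeroshot_chain(question=example.question)  # a dspy.Prediction carrying `.output`
print(metric(example, pred))  # fraction of the three checks passed, in [0, 1]
```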
5) Optimize the program
```python
from dspy.teleprompt import BootstrapFewShotWithRandomSearch

optimizer = BootstrapFewShotWithRandomSearch(
    metric=metric, max_bootstrapped_demos=3, num_candidate_programs=3
)
optimized_chain = optimizer.compile(zeroshot_chain, trainset=trainset, valset=valset)
```
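To see what the optimizer actually changed, a small sketch that relies on the `LangChainModule` internals shown earlier can list the bootstrapped demonstrations attached to each `LangChainPredict` module:

```python
# Each module now carries the few-shot demos selected during compilation.
for module in optimized_chain.modules:
    print(module.output_field_key, "->", len(module.demos), "bootstrapped demos")
```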
6) Evaluate the optimized program

```python
from dspy.evaluate.evaluate import Evaluate

evaluate = Evaluate(metric=metric, devset=testset, num_threads=8,
                    display_progress=True, display_table=5)
evaluate(optimized_chain)
# Average Metric: 3.0 / 10 (30.0): 100%|██████████| 10/10 [00:22<00:00,  2.22s/it]
```
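For a before/after comparison, the same evaluator can also score the unoptimized chain:

```python
# Score the zero-shot chain on the same test set to quantify the optimizer's gain.
evaluate(zeroshot_chain)
```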