Explore new trends in AI frameworks and master the key techniques of MCP integration. Key topics: 1. Industry-standard practice for providing context via the Model Context Protocol (MCP); 2. An introduction to MCP-enabled AI frameworks such as Composio and Agents.json; 3. How MCP enables efficient integration of LLMs with external tools.
01
Introduction
AI agent toolkits give developers a wide range of APIs, aiming to equip AI solutions with the tools they need to perform tasks and to return accurate results that satisfy users. However, integrating these tools into AI applications and managing them effectively is often messy. This article introduces the Model Context Protocol (MCP), the emerging industry-standard practice for providing context to large language models and agents.
Without further ado, let's get started!
02
By default, if LLMs and AI chatbots are not given the proper context, they cannot fetch real-time information, execute code, call external tools and APIs, or use a browser on the user's behalf. Developers can use the following approaches to work around this limitation of LLMs and agents.
Composio
Composio provides a catalog of specifications and tools for integrating AI agents and LLMs. Beyond its ready-made tool library, Composio recently launched Composio MCP, which lets developers connect to more than 100 MCP-ready servers from MCP-compatible IDEs.
Link: https://composio.dev/
Through the link above you can browse the Composio MCP tool categories and integrate multiple apps into your project from MCP-enabled IDEs such as Cursor, Claude, and Windsurf.
Agents.json
Agents.json is a specification built on the OpenAI standard, designed to optimize how AI agents interact with APIs and external tools. Although it is a solid specification, it is far less popular than MCP and has not yet been widely adopted.
Link: https://github.com/wild-card-ai/agents-json
Refer to its GitHub repository to learn more and get started.
MCP
MCP gives developers the best way to provide context data to LLMs and AI assistants for solving a problem. For example, you can set up an MCP documentation server that gives IDEs and agent frameworks full access to your documentation, similar to the llms.txt file approach.
03
The first question developers usually ask is: how does MCP work?
04
So why do you need MCP for your AI agents?
MCP is becoming the industry standard for developers building AI systems that need to talk to external applications efficiently. Microsoft recently announced MCP integration in Copilot Studio, greatly simplifying how AI apps and agents call tools. OpenAI likewise announced MCP support across its products, including the Agents SDK and the ChatGPT desktop app.
There is nothing wrong with equipping an AI assistant with tools directly. But for an AI system with multiple sub-agents that handle tasks such as sending and receiving email, web scraping, financial analysis, and real-time weather lookups in parallel, wiring tools in directly quickly becomes unwieldy.
An MCP server exposes its capabilities over two transport mechanisms:
Server-Sent Events (SSE): connect to a remote service over HTTP.
Standard input/output (STDIO): run a local command and communicate over standard input and output.
The AI application development framework you choose provides the classes needed to connect to these servers; a minimal connection sketch follows.
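As a rough sketch of what such a connection looks like with the official MCP Python SDK (the mcp package, which the frameworks below wrap), the snippet lists a server's tools over both transports. The filesystem server command and the SSE URL here are placeholders for illustration, not endpoints from this article.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.client.sse import sse_client


async def list_tools_over_stdio():
    # STDIO: spawn a local server process and talk to it over stdin/stdout
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "."],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


async def list_tools_over_sse():
    # SSE: connect to a remote server exposed over HTTP (placeholder URL)
    async with sse_client("https://example.com/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


if __name__ == "__main__":
    asyncio.run(list_tools_over_stdio())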
Several public registries make it easy to discover MCP servers for your projects:
Reference Servers: the official collection of reference MCP server implementations maintained by the Model Context Protocol project.
Link: https://github.com/modelcontextprotocol/servers
Glama Registry: production-ready, open-source MCP servers for developers.
Link: https://glama.ai/mcp/servers
Smithery Registry: through Smithery, developers can access more than 2,000 MCP servers to extend the capabilities of AI agents and LLMs.
Link: https://smithery.ai/
OpenTools: a generative API for MCP tools. You can access hundreds of ready-made MCP tools for your AI projects; with the OpenTools API, developers can add web search, real-time location data, and web scraping to LLMs. The API supports curl, Python, and TypeScript; see the OpenTools quick-start guide to begin using it.
Link: https://opentools.com/
PulseMCP Registry: with PulseMCP you can browse hosted MCP tools and use cases for your AI projects, and follow PulseMCP news to keep up with trending MCP servers and applications.
Link: https://www.pulsemcp.com/
mcp.run: this registry gives developers access to hundreds of MCP apps for business use.
Link: https://www.mcp.run/
Composio Registry: Composio's SSE-based MCP servers make it easy to integrate tools with different AI frameworks when building applications.
Link: https://mcp.composio.dev/
guMCP: Gumloop's guMCP offers free, open-source, fully hosted MCP servers that integrate seamlessly with any AI application.
Link: https://www.gumloop.com/mcp
Although MCP has become a buzzword and every developer community has been discussing it lately, it is not obvious which MCP client frameworks to use to integrate it with your AI applications and agents. We did the research and found the following leading Python- and TypeScript-based MCP client platforms for agentic workflows and AI assistants.
Note: the following sections show how to implement MCP in frameworks for building AI solutions.
Building a Git MCP Agent
When building agents with the OpenAI Agents SDK, you can connect to these community-built MCP servers through the SDK's MCPServerStdio and MCPServerSse classes. The following MCP agent example accesses the root directory of your local Git repository and answers user queries about that repository.
import asyncio
import importlib.util

import streamlit as st
from agents import Agent, Runner, trace
from agents.mcp import MCPServer, MCPServerStdio


async def query_git_repo(mcp_server: MCPServer, directory_path: str, query: str):
    agent = Agent(
        name="Assistant",
        instructions=f"Answer questions about the local git repository at {directory_path}, use that for repo_path",
        mcp_servers=[mcp_server],
    )
    with st.spinner(f"Running query: {query}"):
        result = await Runner.run(starting_agent=agent, input=query)
        return result.final_output


def run_query(directory_path, query):
    # The Git MCP server is launched locally with `python -m mcp_server_git`,
    # so make sure the mcp-server-git package is installed.
    if importlib.util.find_spec("mcp_server_git") is None:
        st.error("mcp-server-git is not installed. Please install it with `pip install mcp-server-git`.")
        return

    async def execute_query():
        async with MCPServerStdio(
            cache_tools_list=True,
            params={
                "command": "python",
                "args": [
                    "-m",
                    "mcp_server_git",
                    "--repository",
                    directory_path,
                ],
            },
        ) as server:
            with trace(workflow_name="MCP Git Query"):
                result = await query_git_repo(server, directory_path, query)
                st.markdown("### Result")
                st.write(result)

    asyncio.run(execute_query())


# Streamlit doesn't work well with asyncio in the main thread, so the UI is
# synchronous and asyncio.run() is only used inside run_query() for MCP calls.
def main_streamlit_app():
    st.title("Local Git Repo Explorer")
    st.write("This app allows you to query information about a local git repository.")

    directory_path = st.text_input("Enter the path to the git repository:")

    if directory_path:
        # Common queries as buttons
        col1, col2 = st.columns(2)
        with col1:
            if st.button("Most frequent contributor"):
                query = "Who's the most frequent contributor?"
                run_query(directory_path, query)
        with col2:
            if st.button("Last change summary"):
                query = "Summarize the last change in the repository."
                run_query(directory_path, query)

        # Custom query
        custom_query = st.text_input("Or enter your own query:")
        if st.button("Run Custom Query") and custom_query:
            run_query(directory_path, custom_query)


if __name__ == "__main__":
    st.set_page_config(
        page_title="Local Git Repo Explorer",
        page_icon="?",
        layout="centered",
    )
    main_streamlit_app()
The code above integrates Streamlit with an OpenAI MCP agent, letting you chat with a local Git repository through the Git MCP server. To run this example, you need to install the following packages:
pip install streamlit openai-agents mcp-server-git
Then export your OpenAI API key with the following command:
export OPENAI_API_KEY=sk-...
When you run the Python file, you should see a result similar to the screenshot below.
You can explore other OpenAI MCP examples on GitHub.
Link: https://github.com/openai/openai-agents-python/tree/main/examples/mcp
One advantage of the Agents SDK's MCP integration is its built-in monitoring of MCP agents in OpenAI's dashboard. This feature automatically captures your agent's MCP operations, such as the tool list, POST responses, and data about function calls. The trace for this section's Git MCP example is recorded after running the code above, and all of the logged information is accessible from the OpenAI dashboard.
The following example integrates the Airbnb MCP server with a Praison AI agent, using a Streamlit interface to help users find apartments in a specified location.
To create your first MCP agent with Praison AI, install the following packages:
pip install streamlit mcp praisonaiagents
export OPENAI_API_KEY=sk-...
import streamlit as st
from praisonaiagents import Agent, MCP

st.title("? Airbnb Booking Assistant")

# Create the agent
def get_agent():
    return Agent(
        instructions="""You help book apartments on Airbnb.""",
        llm="gpt-4o-mini",
        tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
    )

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# User input form
with st.form("booking_form"):
    st.subheader("Enter your booking details")
    destination = st.text_input("Destination:", "Paris")
    col1, col2 = st.columns(2)
    with col1:
        check_in = st.date_input("Check-in date")
    with col2:
        check_out = st.date_input("Check-out date")
    adults = st.number_input("Number of adults:", min_value=1, max_value=10, value=2)
    submitted = st.form_submit_button("Search for accommodations")

if submitted:
    search_agent = get_agent()

    # Format the query
    query = f"I want to book an apartment in {destination} from {check_in.strftime('%m/%d/%Y')} to {check_out.strftime('%m/%d/%Y')} for {adults} adults"

    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": query})

    # Display user message
    with st.chat_message("user"):
        st.markdown(query)

    # Get response from the agent
    with st.chat_message("assistant"):
        with st.spinner("Searching for accommodations..."):
            response = search_agent.start(query)
            st.markdown(response)

    # Add assistant response to chat history
    st.session_state.messages.append({"role": "assistant", "content": response})

# Allow for follow-up questions
if st.session_state.messages:
    prompt = st.chat_input("Ask a follow-up question about the accommodations")

    if prompt:
        search_agent = get_agent()

        # Add user message to chat history
        st.session_state.messages.append({"role": "user", "content": prompt})

        # Display user message
        with st.chat_message("user"):
            st.markdown(prompt)

        # Get response from the agent
        with st.chat_message("assistant"):
            with st.spinner("Thinking..."):
                response = search_agent.start(prompt)
                st.markdown(response)

        # Add assistant response to chat history
        st.session_state.messages.append({"role": "assistant", "content": response})
tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
npx
代表运行启动MCP服务器的命令,-y
是传递给该命令的命令行参数。有关更多信息,请参考OpenAI Agents SDK文档。# Copyright (C) 2024 Andrew Wason
# SPDX-License-Identifier: MIT
import asyncio
import pathlib
import sys
import typing as t
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tools import BaseTool
from langchain_groq import ChatGroq
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp import MCPToolkit
asyncdefrun(tools: list[BaseTool], prompt: str) -> str:
model = ChatGroq(model_name="llama-3.1-8b-instant", stop_sequences=None) # requires GROQ_API_KEY
tools_map = {tool.name: tool for tool in tools}
tools_model = model.bind_tools(tools)
messages: list[BaseMessage] = [HumanMessage(prompt)]
ai_message = t.cast(AIMessage, await tools_model.ainvoke(messages))
messages.append(ai_message)
for tool_call in ai_message.tool_calls:
selected_tool = tools_map[tool_call["name"].lower()]
tool_msg = await selected_tool.ainvoke(tool_call)
messages.append(tool_msg)
returnawait (tools_model | StrOutputParser()).ainvoke(messages)
asyncdefmain(prompt: str) -> None:
server_params = StdioServerParameters(
command="npx",
args=["-y", "@modelcontextprotocol/server-filesystem", str(pathlib.Path(__file__).parent.parent)],
)
asyncwith stdio_client(server_params) as (read, write):
asyncwith ClientSession(read, write) as session:
toolkit = MCPToolkit(session=session)
await toolkit.initialize()
response = await run(toolkit.get_tools(), prompt)
print(response)
if __name__ == "__main__":
prompt = sys.argv[1] iflen(sys.argv) > 1else"Read and summarize the file ./readme.md"
asyncio.run(main(prompt))
To run this example, install the Python packages used above (the imports come from the langchain-mcp, langchain-groq, and mcp packages) and make the filesystem MCP server available:
npm install -g @modelcontextprotocol/server-filesystem
Chainlit provides built-in hooks for MCP connections. First, implement on_mcp_connect to establish a successful connection; you can also implement on_mcp_disconnect to handle cleanup.
# pip install chainlit
import chainlit as cl
from mcp import ClientSession


@cl.on_mcp_connect
async def on_mcp_connect(connection, session: ClientSession):
    """Called when an MCP connection is established"""
    # Your connection initialization code here
    # This handler is required for MCP to work


@cl.on_mcp_disconnect
async def on_mcp_disconnect(name: str, session: ClientSession):
    """Called when an MCP connection is terminated"""
    # Optional handler: Cleanup your code here
Next, configure the MCP client (Chainlit, LangChain, Mastra, and so on): for an MCP server to work with a Chainlit app, the connection details should be provided through Chainlit's interface.

This configuration includes the following:
Name: a unique identifier that represents the connection name.
Client type: specify whether to use SSE or stdio. With SSE you add a URL endpoint; with stdio you need a full command (for example: npx your-tool-package). Here is an example of a full command:
npx -y linear-mcp-server --tools=all --api-key=lin_api_your_linear_API_Key
Once the MCP server connection is established, you can execute tools within the MCP session and, finally, integrate the MCP tools seamlessly with the model or agents in your Chainlit app through tool calling (a rough sketch follows the code link below). You can find the complete source code for this Linear MCP integration in the Chainlit cookbook on GitHub.

Code: https://github.com/Chainlit/cookbook/tree/main/mcp-linear
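As a rough, non-authoritative sketch of that flow (not the cookbook's exact code): the on_mcp_connect handler below lists the connected server's tools and stores the session, and a helper step later executes a tool call through it. The helper name call_mcp_tool and the user-session keys are illustrative assumptions.

import chainlit as cl
from mcp import ClientSession


@cl.on_mcp_connect
async def on_mcp_connect(connection, session: ClientSession):
    # Discover the tools exposed by the newly connected MCP server
    result = await session.list_tools()
    tools = [
        {"name": t.name, "description": t.description, "input_schema": t.inputSchema}
        for t in result.tools
    ]
    # Keep the session and its tools around for later tool calls (illustrative keys)
    cl.user_session.set("mcp_session", session)
    cl.user_session.set("mcp_tools", tools)


@cl.step(type="tool")
async def call_mcp_tool(tool_name: str, tool_input: dict):
    # Execute a tool on the connected MCP server (hypothetical helper)
    session: ClientSession = cl.user_session.get("mcp_session")
    result = await session.call_tool(tool_name, tool_input)
    return result.content

The model or agent in your Chainlit app can then route its tool calls through call_mcp_tool, which is how the Linear example in the cookbook wires MCP tools into the conversation.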
The snippet below is an excerpt from Agno's travel planner MCP team example (full source at the link that follows). It starts the Airbnb and Google Maps MCP servers over stdio and hands the resulting clients to Agno agents as tools; the env variable, the imports, and the remaining agents are defined in the full script.
# Define server parameters (excerpt; `env`, imports, and the other agents
# are defined in the full example linked below)
airbnb_server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"],
    env=env,
)
maps_server_params = StdioServerParameters(
    command="npx", args=["-y", "@modelcontextprotocol/server-google-maps"], env=env
)

# Use contextlib.AsyncExitStack to manage multiple async context managers
async with contextlib.AsyncExitStack() as stack:
    # Create stdio clients for each server
    airbnb_client, _ = await stack.enter_async_context(stdio_client(airbnb_server_params))
    maps_client, _ = await stack.enter_async_context(stdio_client(maps_server_params))

    # Create all agents
    airbnb_agent = Agent(
        name="Airbnb",
        role="Airbnb Agent",
        model=OpenAIChat("gpt-4o"),
        tools=[airbnb_client],
        instructions=dedent("""\
            You are an agent that can find Airbnb listings for a given location.\
        """),
        add_datetime_to_instructions=True,
    )
Code: https://github.com/agno-agi/agno/blob/main/cookbook/examples/teams/coordinate/travel_planner_mcp_team.py
The final example uses Upsonic to analyze HackerNews stories through the mcp-hn MCP server, with a Search tool included as a fallback:
import os

from dotenv import load_dotenv
from upsonic import Task, Agent, Direct
from upsonic.client.tools import Search  # Adding Search as a fallback tool

# Load environment variables from .env file
load_dotenv()

# Get the OpenAI API key from environment variables
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY not found in .env file")

# Set your OpenAI API key for the session
os.environ["OPENAI_API_KEY"] = openai_api_key

# Define the HackerNews MCP tool
# Using the correct MCP setup for HackerNews based on Upsonic documentation
class HackerNewsMCP:
    command = "uvx"
    args = ["mcp-hn"]
    # No environment variables are needed for this MCP

# Create a task to analyze the latest HackerNews stories
# Adding Search as a fallback in case HackerNews MCP fails
task = Task(
    "Analyze the top 5 HackerNews stories for today. Provide a brief summary of each story, "
    "identify any common themes or trends, and highlight which stories might be most relevant "
    "for someone interested in AI and software development.",
    tools=[HackerNewsMCP, Search]  # Include both HackerNews MCP and Search tools
)

# Create an agent specialized in tech news analysis
agent = Agent(
    "Tech News Analyst",
    company_url="https://news.ycombinator.com/",
    company_objective="To provide insightful analysis of tech industry news and trends"
)

# Execute the task with the agent and print the results
print("Analyzing HackerNews stories...")
agent.print_do(task)

# Alternatively, you can use a Direct LLM call if the task is straightforward
# print("Direct analysis of HackerNews stories...")
# Direct.print_do(task)

# If you want to access the response programmatically:
# agent.do(task)
# result = task.response
# print(result)
Install Upsonic with:
pip install upsonic
Then run the Python code above; you should see output similar to the screenshot below.