In today's fast-moving AI landscape, multi-agent collaboration frameworks and large language models (LLMs) are changing how we tackle complex tasks. This article walks through how to build a financial research agent with CrewAI and LlamaIndex, applied concretely to an in-depth analysis of Uber's 2019 risk factors. The exercise showcases the potential of AI in financial research and shows how intelligent collaboration can improve both the efficiency and the accuracy of data analysis.
CrewAI is an innovative open-source framework that lets users harness collaborating intelligent agents to complete complex tasks. Unlike a traditional chatbot, agents in CrewAI can cooperate, exchange information, and solve problems as a team. CrewAI thus simulates a group of experts working together: each member has a distinct specialty, and through effective communication and task allocation the team achieves results beyond any individual's capability.
LlamaIndex is a user-friendly framework that lets developers easily build LLM-powered applications on top of their own data. It provides key modules for indexing, retrieval, prompt construction, and agent orchestration. One common use of LlamaIndex is building a versatile QA interface that synthesizes knowledge and gives comprehensive answers to complex queries.
First, we use LlamaIndex to build a RAG (retrieval-augmented generation) system. RAG combines retrieval with generation: it fetches relevant information from a corpus and feeds it to the LLM so the model can produce grounded, accurate answers. This step is the foundation of the financial analyst agent, because it is what lets the agent understand and analyze financial data.
Next, we wrap the RAG query engine as a LlamaIndexTool; tool abstractions are central to building data agents. Wrapping the engine makes it easy for an agent to invoke the RAG system when executing complex financial analysis tasks.
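Before reaching for the framework, the retrieve-then-generate idea can be sketched in a few lines of plain Python. This is a conceptual illustration only (keyword-overlap scoring standing in for vector search), not the LlamaIndex API:

```python
# Conceptual RAG sketch: rank documents by keyword overlap with the
# query, keep the top-k, and build a prompt that grounds the LLM's
# answer in the retrieved passages.

def retrieve(query, documents, top_k=2):
    """Rank documents by how many query words they share."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Stuff the retrieved passages into a grounded prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Uber faces regulatory risk in new markets.",
    "The 2019 10-K lists competition as a risk factor.",
    "Unrelated text about cooking.",
]
prompt = build_prompt("What regulatory risk does Uber face?", docs)
```

A real system replaces the overlap score with embedding similarity, which is exactly what the LlamaIndex index built later in this article does.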
In CrewAI, we define agents with distinct roles and explicit goals. For the financial analyst crew, we define two:
Researcher agent: performs the in-depth analysis, digging into Uber's 2019 risk factors. It is given the role "Senior Financial Analyst" and the goal "Uncover insights about different tech companies".
Writer agent: turns the researcher's findings into an accessible, engaging blog post for a broader audience. Its role is "Tech Content Strategist", with the goal of crafting a post about the challenges Uber faces.
We then create concrete tasks for the agents, such as "Conduct a comprehensive analysis of Uber's risk factors in 2019", and choose an execution process: sequential or hierarchical. In a sequential process, tasks run in order; in a hierarchical process, a manager agent coordinates the other agents, ensuring orderly execution and validating results.
Task 1: executed by the researcher agent; described as "Conduct a comprehensive analysis of Uber's risk factors in 2019", with a detailed report as the expected output.
Task 2: executed by the writer agent; based on the researcher's findings, develop an engaging blog post highlighting the main challenges Uber faces.
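The crew built later in this article runs sequentially (CrewAI's default). For reference, a hierarchical flow would look roughly like the sketch below, where `Process.hierarchical` and `manager_llm` are CrewAI's parameters for that mode, and `researcher`, `writer`, `task1`, `task2`, and `chat_llm` are the objects defined in the code later in this article:

```python
from crewai import Crew, Process

# Hierarchical process: a manager LLM delegates tasks to the agents
# and validates their results, instead of running them in fixed order.
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,  # default is Process.sequential
    manager_llm=chat_llm,          # the coordinating manager model
)
```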
Using the LlamaIndexTool, the researcher agent issues several queries against Uber's 2019 10-K annual report, focusing on the "Risk Factors" section, and surfaces risks in the following areas:
New-market expansion risk: Uber's expansion into new markets such as Europe faces cultural, regulatory, and competitive challenges. The company must manage the risks of its international operations so that its financial results and future growth are not impaired.
Technology innovation risk: Uber invests heavily in new technologies (autonomous vehicles, e-bikes, e-scooters), but these technologies carry high uncertainty and may not deliver the expected returns.
Regulatory and safety risk: as Uber's business grows, it faces a complex regulatory environment and user concerns about safety, both of which could hurt the company's reputation and profitability.
After receiving the researcher's report, the writer agent drafts the blog post. Written in vivid language with a clear structure, it walks readers through the three main challenges Uber faced in 2019:
The double-edged sword of new-market expansion: the opportunities and risks of entering new markets, especially cultural fit and regulatory compliance.
Thorns on the innovation path: Uber's investment and risk in new technologies, especially the uncertainty around autonomous vehicles and shared-mobility innovation.
The twin tests of regulation and safety: Uber's efforts, and shortfalls, in meeting regulatory challenges and users' safety expectations, plus an outlook on where the company goes next.
Install the dependencies
!pip install llama-index
!pip install llama-index-llms-groq
!pip install llama-index-core
!pip install llama-index-readers-file
!pip install llama-index-tools-wolfram-alpha
!pip install llama-index-embeddings-huggingface
!pip install 'crewai[tools]'
Set up the LLM
from google.colab import userdata
from llama_index.llms.groq import Groq
groq_api_key = userdata.get('GROQ_API_KEY')
#
llm = Groq(model="llama3-70b-8192", api_key=groq_api_key)
#
response = llm.complete("Explain the importance of low latency LLMs")
print(response)
######################################Response############################
Low-latency Large Language Models (LLMs) are crucial in various applications where real-time or near-real-time processing is essential. Here are some reasons why low-latency LLMs are important:
1. **Interactive Systems**: In interactive systems like chatbots, virtual assistants, and conversational AI, low-latency LLMs enable rapid response times, making the interaction feel more natural and human-like. This is particularly important in applications where users expect immediate responses, such as customer support or language translation.
2. **Real-time Decision Making**: In applications like autonomous vehicles, robotics, or medical diagnosis, low-latency LLMs can process and analyze vast amounts of data in real-time, enabling swift decision-making and reaction to changing circumstances.
3. **Live Streaming and Broadcasting**: Low-latency LLMs can facilitate real-time language translation, sentiment analysis, or content moderation in live streaming and broadcasting applications, enhancing the viewer experience and ensuring timely content delivery.
4. **Gaming and Esports**: In online gaming and esports, low-latency LLMs can improve the gaming experience by enabling faster language processing, sentiment analysis, and chat moderation, reducing lag and enhancing overall performance.
5. **Healthcare and Emergency Services**: In healthcare and emergency services, low-latency LLMs can quickly process medical records, diagnose conditions, and provide critical information to healthcare professionals, saving lives and improving patient outcomes.
6. **Financial Trading and Analytics**: Low-latency LLMs can rapidly analyze large datasets, enabling high-frequency trading, sentiment analysis, and risk assessment in financial markets, helping traders and analysts make informed decisions.
7. **Cybersecurity**: In cybersecurity, low-latency LLMs can quickly detect and respond to threats, such as malware, phishing attacks, or DDoS attacks, reducing the attack surface and minimizing damage.
8. **Edge Computing and IoT**: As IoT devices proliferate, low-latency LLMs can process data closer to the source, reducing latency and improving real-time decision-making in applications like smart homes, cities, or industrial automation.
9. **Accessibility and Inclusion**: Low-latency LLMs can enable real-time language translation, captioning, and transcription, improving accessibility and inclusion for people with disabilities, language barriers, or hearing impairments.
10. **Competitive Advantage**: In many industries, low-latency LLMs can provide a competitive advantage by enabling faster decision-making, improved customer experiences, and increased operational efficiency, ultimately driving business success.
To achieve low latency in LLMs, researchers and developers are exploring various techniques, including:
1. Model pruning and knowledge distillation
2. Quantization and precision reduction
3. Parallel processing and distributed computing
4. Edge computing and decentralized architectures
5. Optimized hardware and software designs
6. Caching and memoization
7. Lazy loading and just-in-time compilation
By reducing latency in LLMs, we can unlock new possibilities in various applications, leading to improved user experiences, increased efficiency, and enhanced decision-making capabilities.
# CrewAI requires a chat-based model for tool binding
from langchain_openai import ChatOpenAI
chat_llm = ChatOpenAI(
openai_api_base="https://api.groq.com/openai/v1",
openai_api_key=groq_api_key,
model="llama3-70b-8192",
temperature=0,
max_tokens=1000,
)
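A quick sanity check that the chat wrapper responds (using langchain's `invoke` call; the exact reply will vary):

```python
print(chat_llm.invoke("Say hello in one word.").content)
```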
Download a test dataset
!wget "https://s23.q4cdn.com/407969754/files/doc_financials/2019/ar/Uber-Technologies-Inc-2019-Annual-Report.pdf" -O uber_10k.pdf
Parse the data
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
#
reader = SimpleDirectoryReader(input_files=["uber_10k.pdf"])
docs = reader.load_data()
docs[1]
##############################################################
Document(id_='dd161725-2512-4b03-a689-accc69dc46d4', embedding=None, metadata={'page_label': '2', 'file_name': 'uber_10k.pdf', 'file_path': 'uber_10k.pdf', 'file_type': 'application/pdf', 'file_size': 2829436, 'creation_date': '2024-06-30', 'last_modified_date': '2020-03-31'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={}, text='69\nCountries\n10K+\nCities\n$65B\nGross Bookings\n111M\nMAPCs\n7B\nTripsA global tech \nplatform at \nmassive scale\nServing multiple multi-trillion \ndollar markets with products \nleveraging our core technology \nand infrastructure\nWe believe deeply in our bold mission. Every minute \nof every day, consumers and Drivers on our platform \ncan tap a button and get a ride or tap a button and \nget work. We revolutionized personal mobility with \nridesharing, and we are leveraging our platform to \nredefine the massive meal delivery and logistics \nindustries. The foundation of our platform is our \nmassive network, leading technology, operational \nexcellence, and product expertise. Together, these \nelements power movement from point A to point B.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n')
Set up the embedding model
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
# loads BAAI/bge-small-en-v1.5
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Build the index
index = VectorStoreIndex.from_documents(docs, embed_model=embed_model)
query_engine = index.as_query_engine(similarity_top_k=5, llm=llm)
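Before handing the engine to an agent, it is worth sanity-checking it directly. The question string here is just an illustration; any query against the report works:

```python
response = query_engine.query(
    "What are the key risk factors Uber disclosed in 2019?"
)
print(response)
```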
Wrap the query engine as a tool
from crewai_tools import LlamaIndexTool
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Uber 2019 10K Query Tool",
description="Use this tool to lookup the 2019 Uber 10K Annual Report",
)
#
query_tool.args_schema.schema()
########################################
{'title': 'QueryToolSchema',
'description': 'Schema for query tool.',
'type': 'object',
'properties': {'query': {'title': 'Query',
'description': 'Search query for the query tool.',
'type': 'string'}},
'required': ['query']}
Instantiate the agents
import os
from crewai import Agent, Task, Crew, Process
# Define your agents with roles and goals
researcher = Agent(
role="Senior Financial Analyst",
goal="Uncover insights about different tech companies",
backstory="""You work at an asset management firm.
Your goal is to understand tech stocks like Uber.""",
verbose=True,
allow_delegation=False,
tools=[query_tool],
llm=chat_llm,
)
writer = Agent(
role="Tech Content Strategist",
goal="Craft compelling content on tech advancements",
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
llm=chat_llm,
verbose=True,
allow_delegation=False,
)
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of Uber's risk factors in 2019.""",
expected_output="Full analysis report in bullet points",
agent=researcher,
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the headwinds that Uber faces.
Your post should be informative yet accessible, catering to a casual audience.
Make it sound cool, avoid complex words.""",
expected_output="Full blog post of at least 4 paragraphs",
agent=writer,
)
Instantiate the crew
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2,  # set to 1 or 2 for different logging levels
)
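With the crew assembled, the run itself is a single call; the output will vary from run to run:

```python
result = crew.kickoff()
print(result)  # the writer agent's final blog post
```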
With the two powerful open-source frameworks CrewAI and LlamaIndex, we built an effective financial analyst agent. The agent automates complex financial analysis tasks and delivers timely, accurate results to support business decision-making. This solution improves both the efficiency and the accuracy of financial analysis, and demonstrates the potential of open-source technology in the financial domain.