Small but Mighty: How llmware Builds Enterprise-Grade RAG Pipelines with Small Models!
Published: 2024-06-18 | Source: Halo咯咯


01
Overview
llmware provides a unified framework for building LLM-based applications (e.g., RAG, agents) that use small, specialized models which can be deployed privately, integrated securely with enterprise knowledge sources, and tuned and adapted cost-effectively for any business process.
llmware has two main components:
  • RAG pipeline - integrated components covering the full lifecycle of connecting knowledge sources to generative AI models; and
  • 50+ small, specialized models, fine-tuned for key enterprise process automation tasks, including fact-based question answering, classification, summarization, and extraction.
By combining these two components, along with integrating leading open-source models and underlying technologies, llmware offers a comprehensive set of tools for rapidly building knowledge-based, enterprise-grade LLM applications.
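
To make these two components concrete, here is a minimal sketch of loading one of the small, specialized models through a simple catalog lookup and asking a fact-based question over a supplied passage. It follows the ModelCatalog pattern used in llmware's examples; the model name and the sample sentence here are illustrative assumptions.

from llmware.models import ModelCatalog

# load a small RAG-tuned model by name from the catalog
# (any model in the catalog can be swapped in the same way)
model = ModelCatalog().load_model("bling-phi-3-gguf", temperature=0.0, sample=False)

context = ("The service agreement may be terminated by either party "
           "with thirty (30) days written notice.")

# fact-based question answering grounded in the supplied context
response = model.inference("What is the notice period for termination?", add_context=context)

print(response["llm_response"])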


LLMWare has introduced a new family of SLIMs (Structured Language Instruction Models) that changes how enterprises apply AI to complex workflows. These models are purpose-built to generate structured data outputs, enabling seamless automation and integration with existing systems. Unlike many other AI models, SLIMs are designed to run on CPU-only machines, putting them within reach of businesses with limited hardware resources. Their open-source nature allows customization and avoids expensive licensing fees, helping democratize advanced AI technology.
LLMWare's SLIMs address common barriers to AI adoption, such as the need to coordinate multi-step tasks, produce clean data outputs, and address data security concerns. By offering a comprehensive solution, LLMWare enables enterprises to unlock the full potential of AI and turn their back-office operations into centers of efficiency.
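
As an illustration of the structured-output idea, the sketch below loads a single SLIM tool and runs a function call over a short passage, getting back structured data rather than free text. It follows the function-call pattern used with SLIM tools in llmware's examples; the tool name and the exact output shape are assumptions to verify against the current docs.

from llmware.models import ModelCatalog

# load a SLIM tool - a small, CPU-friendly model that emits structured output
sentiment_tool = ModelCatalog().load_model("slim-sentiment-tool")

text = "Tesla stock fell 8% in premarket trading after a disappointing quarter."

# function_call returns structured output, e.g. something like
# {"sentiment": ["negative"]} (exact shape may vary by tool and version)
response = sentiment_tool.function_call(text)

print(response["llm_response"])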
02
Features
Writing code with llmware is built on a few main concepts:
Model Catalog: access all models the same way through a simple lookup, regardless of the underlying implementation.
Library: ingest, organize, and index knowledge collections at scale - parsing, text chunking, and embedding (see the sketch after this list).
Query: query libraries using a combination of text, semantic, hybrid, metadata, and custom filters.
Prompt with Sources: the easiest way to combine knowledge retrieval with LLM inference.
RAG-Optimized Models: 1-7B parameter models designed for RAG workflow integration and for running locally.
Simple-to-Scale Database Options: integrated data stores from laptop to parallel cluster.
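
As a minimal sketch of how Library and Query fit together, the snippet below creates a library, ingests a folder of documents, builds embeddings, and runs a semantic query. The method names follow llmware's documented patterns, but the library name, folder path, and embedding model choice are illustrative assumptions.

from llmware.library import Library
from llmware.retrieval import Query

# create a library and ingest a folder of documents
# (parsing, text chunking, and indexing happen on add_files)
lib = Library().create_new_library("contracts_lib")     # illustrative name
lib.add_files("/path/to/your/documents")                # illustrative path

# build vector embeddings over the library (model and vector db are assumptions)
lib.install_new_embedding(embedding_model_name="mini-lm-sbert", vector_db="chromadb")

# run a semantic query against the indexed collection
results = Query(lib).semantic_query("governing law", result_count=5)

for r in results:
    print(r["file_source"], "-", r["text"][:80])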
Agents with function calls and SLIM models

from llmware.agents import LLMfx

text = ("Tesla stock fell 8% in premarket trading after reporting fourth-quarter revenue and profit that "
        "missed analysts’ estimates. The electric vehicle company also warned that vehicle volume growth in "
        "2024 'may be notably lower' than last year’s growth rate. Automotive revenue, meanwhile, increased "
        "just 1% from a year earlier, partly because the EVs were selling for less than they had in the past. "
        "Tesla implemented steep price cuts in the second half of the year around the world. In a Wednesday "
        "presentation, the company warned investors that it’s 'currently between two major growth waves.'")

# create an agent using LLMfx class
agent = LLMfx()

# load text to process
agent.load_work(text)

# load 'models' as 'tools' to be used in analysis process
agent.load_tool("sentiment")
agent.load_tool("extract")
agent.load_tool("topics")
agent.load_tool("boolean")

# run function calls using different tools
agent.sentiment()
agent.topics()
agent.extract(params=["company"])
agent.extract(params=["automotive revenue growth"])
agent.xsum()
agent.boolean(params=["is 2024 growth expected to be strong? (explain)"])

# at end of processing, show the report that was automatically aggregated by key
report = agent.show_report()

# displays a summary of the activity in the process
activity_summary = agent.activity_summary()

# list of the responses gathered
for i, entries in enumerate(agent.response_list):
    print("update: response analysis: ", i, entries)

output = {"report": report, "activity_summary": activity_summary, "journal": agent.journal}

Start Coding - RAG Quick Start
# This example illustrates a simple contract analysis
# using a RAG-optimized LLM running locally

import os
import re

from llmware.prompts import Prompt, HumanInTheLoop
from llmware.setup import Setup
from llmware.configs import LLMWareConfig


def contract_analysis_on_laptop(model_name):

    # In this scenario, we will:
    #   -- download a set of sample contract files
    #   -- create a Prompt and load a BLING LLM model
    #   -- parse each contract, extract the relevant passages, and pass questions to a local LLM

    # Main loop - Iterate thru each contract:
    #   1. parse the document in memory (convert from PDF file into text chunks with metadata)
    #   2. filter the parsed text chunks with a "topic" (e.g., "governing law") to extract relevant passages
    #   3. package and assemble the text chunks into a model-ready context
    #   4. ask three key questions for each contract to the LLM
    #   5. print to the screen
    #   6. save the results in both json and csv for further processing and review

    # Load the llmware sample files
    print(f"\n > Loading the llmware sample files...")
    sample_files_path = Setup().load_sample_files()
    contracts_path = os.path.join(sample_files_path, "Agreements")

    # Query list - these are the 3 main topics and questions that we would like the LLM to analyze for each contract
    query_list = {"executive employment agreement": "What are the name of the two parties?",
                  "base salary": "What is the executive's base salary?",
                  "vacation": "How many vacation days will the executive receive?"}

    # Load the selected model by name that was passed into the function
    print(f"\n > Loading model {model_name}...")
    prompter = Prompt().load_model(model_name, temperature=0.0, sample=False)

    # Main loop
    for i, contract in enumerate(os.listdir(contracts_path)):

        # excluding Mac file artifact (annoying, but fact of life in demos)
        if contract != ".DS_Store":

            print("\nAnalyzing contract: ", str(i + 1), contract)
            print("LLM Responses:")

            for key, value in query_list.items():

                # step 1 + 2 + 3 above - contract is parsed, text-chunked, filtered by topic key,
                # ... and then packaged into the prompt
                source = prompter.add_source_document(contracts_path, contract, query=key)

                # step 4 above - calling the LLM with 'source' information already packaged into the prompt
                responses = prompter.prompt_with_source(value, prompt_name="default_with_context")

                # step 5 above - print out to screen
                for r, response in enumerate(responses):
                    print(key, ":", re.sub("[\n]", " ", response["llm_response"]).strip())

                # We're done with this contract, clear the source from the prompt
                prompter.clear_source_materials()

    # step 6 above - saving the analysis to jsonl and csv

    # Save jsonl report to jsonl to /prompt_history folder
    print("\nPrompt state saved at: ", os.path.join(LLMWareConfig.get_prompt_path(), prompter.prompt_id))
    prompter.save_state()

    # Save csv report that includes the model, response, prompt, and evidence for human-in-the-loop review
    csv_output = HumanInTheLoop(prompter).export_current_interaction_to_csv()
    print("csv output saved at:", csv_output)


if __name__ == "__main__":

    # use local cpu model - try the newest - RAG finetune of Phi-3 quantized and packaged in GGUF
    model = "bling-phi-3-gguf"

    contract_analysis_on_laptop(model)

03
Data Store Options

Fast start: use SQLite3 and ChromaDB (file-based) out of the box - no installation required
from llmware.configs import LLMWareConfig

LLMWareConfig().set_active_db("sqlite")
LLMWareConfig().set_vector_db("chromadb")

Speed + scale: use MongoDB (text collections) and Milvus (vector database) - install via Docker Compose
curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose.yaml
docker compose up -d

from llmware.configs import LLMWareConfig

LLMWareConfig().set_active_db("mongo")
LLMWareConfig().set_vector_db("milvus")

Postgres: use Postgres as both the text collection and the vector database - install via Docker Compose
curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose-pgvector.yaml
docker compose up -d

from llmware.configs import LLMWareConfig

LLMWareConfig().set_active_db("postgres")
LLMWareConfig().set_vector_db("postgres")
Mix and match: LLMWare supports 3 text collection databases (Mongo, Postgres, SQLite) and 10 vector databases (Milvus, PGVector-Postgres, Neo4j, Redis, Mongo-Atlas, Qdrant, Faiss, LanceDB, ChromaDB, and Pinecone)
# scripts to deploy other options
curl -o docker-compose.yaml https://raw.githubusercontent.com/llmware-ai/llmware/main/docker-compose-redis-stack.yaml





