Master the combination of LangChain and web scraping to boost your LLM's data-acquisition capabilities. Key topics:
1. The role and advantages of web scraping in LLM data augmentation
2. The challenges of using scraped data in LangChain, and their solutions
3. A hands-on tutorial: build a LangChain web scraper that extracts content from a CNN article and generates a summary
Empowering LLM Applications with Web Scraping
The Benefits and Challenges of Using Scraped Data in LangChain
The main obstacles when collecting this data yourself include:
· Anti-scraping mechanisms: CAPTCHAs, dynamic web pages, and the like.
· Compliance and efficiency: maintaining a compliant and efficient scraper is time-consuming and technically complex.
Bright Data's Web Scraper API provides pre-configured scraping endpoints for 100+ websites and enables efficient, reliable data collection through features such as IP rotation, automatic CAPTCHA solving, and JavaScript rendering.
Step-by-Step Tutorial: Building a LangChain Web Scraper with Bright Data
The example we will build here is a simple starting point, but it is easy to extend with additional features and analyses using LangChain. For example, you could even create a RAG chatbot on top of SERP data.
Follow the steps below to get started!
mkdir langchain_scraping
cd langchain_scraping
python3 -m venv env
Note: On Windows, use python instead of python3.
Now, open the project directory in your favorite Python IDE. PyCharm Community Edition or Visual Studio Code with the Python extension will do.
Inside langchain_scraping, add a script.py file. This is an empty Python script for now, but it will soon contain the LangChain scraping logic.
In the IDE's terminal, activate the virtual environment. On Linux or macOS, run:

source env/bin/activate

Or, on Windows:

env\Scripts\activate
The Python LangChain scraping project depends on the following libraries:
· python-dotenv: loads environment variables from a .env file. It will be used to manage sensitive information such as the Bright Data and OpenAI credentials.
· requests: performs HTTP requests to interact with Bright Data's Web Scraper API.
· langchain_openai: the LangChain integration for OpenAI, built on the openai SDK.
In the activated virtual environment, install all the dependencies with:
pip install python-dotenv requests langchain-openai
In script.py, add the following imports:
from dotenv import load_dotenv
import os
Note: os comes from the Python standard library, so you do not need to install it.
Then, create a .env file in the project folder to store all your credentials. Here is what your project file structure should look like at this point:
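In plain text, the structure built so far is (env/ being the virtual environment created earlier):

langchain_scraping/
├── env/
├── .env
└── script.py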
In script.py, instruct python-dotenv to load the environment variables from .env with the following line:
load_dotenv()
os.environ.get("")
Since the target site is CNN.com, type "cnn" into the search input and select the "CNN news — Collect by URL" scraper:
On the current page, click the "Create token" button to generate a Bright Data API token:
This will open the following modal, where you can configure the token's details:
In your .env file, store this information as follows:
BRIGHT_DATA_API_TOKEN=""
Your CNN news Web Scraper API page should now look similar to the example below:
Great! It is time to configure your Web Scraper API request and put it to use.
The Web Scraper API launches a web scraping task, configured to your needs on the page seen earlier. This process then generates a snapshot containing the scraped data.
Here is an overview of how the Web Scraper API scraping process works:
· You make a request to the Web Scraper API, providing the pages to scrape via their URLs.
· A web scraping task is launched to retrieve and parse data from those URLs.
· Once the task completes, you repeatedly query the snapshot-retrieval API to fetch the resulting data.
The POST endpoint for the CNN Web Scraper API is:
"https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lycz8783197ch4wvwg&include_errors=true"
{"snapshot_id":""}
Using the snapshot_id from this response, you then need to query the following endpoint to retrieve your data:
https://api.brightdata.com/datasets/v3/snapshot/<snapshot_id>?format=json
Once the task is complete, the endpoint returns data in the following format:
[ { "input": { "url": "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/", "keyword": "" }, "id": "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/index.html", "url": "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/index.html", "author": "Mary Gilbert", "headline": "White Christmas forecast: Will you be left dreaming of snow or reveling in it?", "topics": [ "weather" ], "publication_date": "2024-12-16T13:20:52.800Z", "updated_last": "2024-12-16T13:20:52.800Z", "content": "Christmas is approaching nearly as fast as Santa’s sleigh, but almost anyone in the United States fantasizing about a movie-worthy white Christmas might need to keep dreaming. Early forecasts indicate temperatures could max out around 10 to 15 degrees above normal for much of the country on Christmas Day. [omitted for brevity...]", "videos": null, "images": [ "omitted for brevity..." ], "related_articles": [], "keyword": null, "timestamp": "2024-12-16T14:18:14.101Z" }]
To implement this, first read the env variable from .env and initialize the endpoint URL constant:
BRIGHT_DATA_API_TOKEN = os.environ.get("BRIGHT_DATA_API_TOKEN")
BRIGHT_DATA_CNN_WEB_SCRAPER_API_URL = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lycz8783197ch4wvwg&include_errors=true"
def get_scraped_data(url):
    # Authorization headers
    headers = {
        "Authorization": f"Bearer {BRIGHT_DATA_API_TOKEN}"
    }

    # Web Scraper API payload
    data = [{
        "url": url
    }]

    # Making the POST request to the Bright Data Web Scraper API
    response = requests.post(BRIGHT_DATA_CNN_WEB_SCRAPER_API_URL, headers=headers, json=data)

    if response.status_code == 200:
        response_data = response.json()
        snapshot_id = response_data.get("snapshot_id")
        if snapshot_id:
            # Iterate until the snapshot is ready
            snapshot_url = f"https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}?format=json"
            while True:
                snapshot_response = requests.get(snapshot_url, headers=headers)
                if snapshot_response.status_code == 200:
                    # Parse and return the snapshot data
                    snapshot_response_data = snapshot_response.json()
                    return snapshot_response_data[0].get("content")
                elif snapshot_response.status_code == 202:
                    print("Snapshot not ready yet. Retrying in 10 seconds...")
                    time.sleep(10)  # Wait for 10 seconds before retrying
                else:
                    print(f"Failed to retrieve snapshot. Status code: {snapshot_response.status_code}")
                    print(snapshot_response.text)
                    break
        else:
            print("Snapshot ID not found in the response")
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
This function relies on the requests and time packages, so add the imports to script.py:

import requests
import time
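As a quick sanity check, you could call the function directly (a sketch, reusing the CNN article URL that appears later in this tutorial):

# Trigger the scraper, poll until the snapshot is ready, and print a preview
content = get_scraped_data("https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/")
print(content[:200] if content else "No content retrieved")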
Step 6: Get Ready to Use OpenAI Models
This example relies on OpenAI models for the LLM integration in LangChain. To use these models, you must configure an OpenAI API key in your environment variables.
By default, langchain_openai automatically reads the OpenAI API key from the OPENAI_API_KEY environment variable. To set this up, add the following line to your .env file:
OPENAI_API_KEY=""
Fantastic! It is time to use OpenAI models in your LangChain scraping script.
Step 7: Generate the LLM Prompt
Define a function that takes the scraped data and produces a prompt for obtaining an article summary:
def create_summary_prompt(content, words=100):
    return f"""Summarize the following content in less than {words} words.

CONTENT:
'{content}'
"""

In the current example, the complete prompt will be:

Summarize the following content in less than 100 words.

CONTENT:
'Christmas is approaching nearly as fast as Santa’s sleigh, but almost anyone in the United States fantasizing about a movie-worthy white Christmas might need to keep dreaming. Early forecasts indicate temperatures could max out around 10 to 15 degrees above normal for much of the country on Christmas Day. It’s a forecast reminiscent of last Christmas for many, which came amid the warmest winter on record in the US. But the country could be split in two by warmth and cold in the run up to the big day. [omitted for brevity...]'
This is enough to show that the prompt works well!
Step 8: Integrate OpenAI
First, call the get_scraped_data() function to retrieve the content from the article page:
article_url = "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/"scraped_data = get_scraped_data(article_url)
if scraped_data is not None:
    prompt = create_summary_prompt(scraped_data)

    # Ask the gpt-4o-mini model to perform the task specified in the prompt
    model = ChatOpenAI(model="gpt-4o-mini")
    response = model.invoke(prompt)
This requires the following import, so add it to script.py:

from langchain_openai import ChatOpenAI
Then, get the AI result (invoke() returns a LangChain message object whose text lives in the content attribute):

summary = response.content
Step 9: Export the AI-Processed Data
Now you simply need to export the data generated by the chosen AI model via LangChain into a human-readable format, such as a JSON file.
To do that, initialize a dictionary with the desired data, then export it and save it as a JSON file, as shown below:
export_data = {
    "url": article_url,
    "summary": summary
}

file_name = "summary.json"
with open(file_name, "w") as file:
    json.dump(export_data, file, indent=4)
This requires the json import from the Python standard library:

import json
Step 10: Add Some Logs
The scraping process via the Web Scraper API and the ChatGPT analysis may take some time. So, it is good practice to include logs to track the script's progress.
You can achieve this by adding print() statements at the key steps of the script, as follows:
article_url = "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/"print(f"Scraping data from '{article_url}'...")scraped_data = get_scraped_data(article_url)if scraped_data is not None: print("Data successfully scraped, creating summary prompt") prompt = create_summary_prompt(scraped_data) # Ask ChatGPT to perform the task specified in the prompt print("Sending prompt to ChatGPT for summarization") model = ChatOpenAI(model="gpt-4o-mini") response = model.invoke(prompt) # Get the AI result summary = response.content print("Received summary from ChatGPT") # Export the produced data to JSON export_data = { "url": article_url, "summary": summary } print("Exporting data to JSON") # Write the output dictionary to JSON file file_name = "summary.json" with open(file_name, "w") as file: json.dump(export_data, file, indent=4) print(f"Data exported to '${file_name}'")else: print("Scraping failed")
Step 11: Put It All Together
The final script.py file should contain:
from dotenv import load_dotenv
import os
import requests
import time
from langchain_openai import ChatOpenAI
import json

load_dotenv()
BRIGHT_DATA_API_TOKEN = os.environ.get("BRIGHT_DATA_API_TOKEN")
BRIGHT_DATA_CNN_WEB_SCRAPER_API_URL = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lycz8783197ch4wvwg&include_errors=true"

def get_scraped_data(url):
    # Authorization headers
    headers = {
        "Authorization": f"Bearer {BRIGHT_DATA_API_TOKEN}"
    }

    # Web Scraper API payload
    data = [{
        "url": url
    }]

    # Making the POST request to the Bright Data Web Scraper API
    response = requests.post(BRIGHT_DATA_CNN_WEB_SCRAPER_API_URL, headers=headers, json=data)

    if response.status_code == 200:
        response_data = response.json()
        snapshot_id = response_data.get("snapshot_id")
        if snapshot_id:
            # Iterate until the snapshot is ready
            snapshot_url = f"https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}?format=json"
            while True:
                snapshot_response = requests.get(snapshot_url, headers=headers)
                if snapshot_response.status_code == 200:
                    # Parse and return the snapshot data
                    snapshot_response_data = snapshot_response.json()
                    return snapshot_response_data[0].get("content")
                elif snapshot_response.status_code == 202:
                    print("Snapshot not ready yet. Retrying in 10 seconds...")
                    time.sleep(10)  # Wait for 10 seconds before retrying
                else:
                    print(f"Failed to retrieve snapshot. Status code: {snapshot_response.status_code}")
                    print(snapshot_response.text)
                    break
        else:
            print("Snapshot ID not found in the response")
    else:
        print(f"Error: {response.status_code}")
        print(response.text)

def create_summary_prompt(content, words=100):
    return f"""Summarize the following content in less than {words} words.

CONTENT:
'{content}'
"""

# Retrieve the content from the given web page
article_url = "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/"
scraped_data = get_scraped_data(article_url)

# Ask ChatGPT to perform the task specified in the prompt
prompt = create_summary_prompt(scraped_data)
model = ChatOpenAI(model="gpt-4o-mini")
response = model.invoke(prompt)

# Get the AI result
summary = response.content

# Export the produced data to JSON
export_data = {
    "url": article_url,
    "summary": summary
}

# Write dictionary to JSON file
with open("summary.json", "w") as file:
    json.dump(export_data, file, indent=4)
Verify that it works with the following command:
python3 script.py

Or, on Windows:

python script.py
The output in the terminal should be close to this:
Scraping data from 'https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/'...
Snapshot not ready yet. Retrying in 10 seconds...
Data successfully scraped, creating summary prompt
Sending prompt to ChatGPT for summarization
Received summary from ChatGPT
Exporting data to JSON
Data exported to 'summary.json'

Open the summary.json file that appeared in the project directory, and you should see something like this:

{
    "url": "https://www.cnn.com/2024/12/16/weather/white-christmas-forecast-climate/",
    "summary": "As Christmas approaches, forecasts indicate temperatures in the US may be 10 to 15 degrees above normal, continuing a trend from last year\u2019s warm winter. The western US will likely remain warm, while the East experiences colder conditions leading up to Christmas. Some areas may see a mix of rain and snow, but a true \"white Christmas\" requires at least an inch of snow on the ground. Historically, cities like Minneapolis and Burlington have the best chances for snow, while places like New York City and Atlanta have significantly lower probabilities."
}
Conclusion
The main challenges of this approach include:
· Frequent changes in page structure
· Sophisticated anti-scraping mechanisms
· The high cost of scraping data at scale
Bright Data's Web Scraper API offers a seamless solution for extracting data from major websites, overcoming these challenges with ease. This makes it a valuable tool for supporting RAG applications and other LangChain-powered solutions.