On December 6, Ollama released version 0.5, which adds support for structured outputs: a model's output can now be constrained to a specific format defined by a JSON schema. Ollama's Python and JavaScript libraries have been updated to support structured outputs.
Use cases for Ollama's structured outputs include:
• Parsing data from documents
• Extracting data from images
• Structuring all language model responses
• More reliability and consistency than JSON mode
01
Structured outputs require Ollama version 0.5 or later. First update Ollama, along with the Python or JavaScript library.
To pass structured outputs to the model, use the "format" parameter in a cURL request, or the "format" parameter in the Python or JavaScript library.
—
The simplest way to try it is to call the model with a cURL command and request structured output.
curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Tell me about Canada."}],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "capital": {"type": "string"},
      "languages": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["name", "capital", "languages"]
  }
}'
The model's response is returned in the format defined by the JSON schema in the request.
{"capital": "Ottawa","languages": ["English","French"],"name": "Canada"}
—
When using the Ollama Python library, pass the schema as a JSON object to the "format" parameter. It can be passed as a dict, or (recommended) serialized from a Pydantic model with the "model_json_schema()" method.
from ollama import chat
from pydantic import BaseModel

class Country(BaseModel):
    name: str
    capital: str
    languages: list[str]

response = chat(
    messages=[
        {
            'role': 'user',
            'content': 'Tell me about Canada.',
        }
    ],
    model='llama3.1',
    format=Country.model_json_schema(),
)

country = Country.model_validate_json(response.message.content)
print(country)
Again, the model's response is returned in the format defined by the JSON schema in the request.
name='Canada' capital='Ottawa' languages=['English', 'French']
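Because the reply has been validated into a Pydantic model, the result can be used like any other Python object. A brief illustrative continuation of the example above:

# country is the Country instance parsed in the previous example.
print(country.capital)               # Ottawa
print(', '.join(country.languages))  # English, French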
—
When using the Ollama JavaScript library, pass the schema as a JSON object to the "format" parameter. It can be passed as an object, or (recommended) serialized from a Zod schema with "zodToJsonSchema()".
import ollama from 'ollama';
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

const Country = z.object({
  name: z.string(),
  capital: z.string(),
  languages: z.array(z.string()),
});

const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Tell me about Canada.' }],
  format: zodToJsonSchema(Country),
});

const country = Country.parse(JSON.parse(response.message.content));
console.log(country);
As before, the model's response is returned in the format defined by the JSON schema in the request.
{name: "Canada",capital: "Ottawa",languages: [ "English", "French" ],}
—
1. Data extraction:
To extract structured data from text, define a schema to represent the information. The model then extracts the information and returns the data as JSON conforming to the defined schema.
from ollama import chat
from pydantic import BaseModel

class Pet(BaseModel):
    name: str
    animal: str
    age: int
    color: str | None
    favorite_toy: str | None

class PetList(BaseModel):
    pets: list[Pet]

response = chat(
    messages=[
        {
            'role': 'user',
            'content': '''
                I have two pets.
                A cat named Luna who is 5 years old and loves playing with yarn. She has grey fur.
                I also have a 2 year old black cat named Loki who loves tennis balls.
            ''',
        }
    ],
    model='llama3.1',
    format=PetList.model_json_schema(),
)

pets = PetList.model_validate_json(response.message.content)
print(pets)
Example output:
pets=[Pet(name='Luna', animal='cat', age=5, color='grey', favorite_toy='yarn'), Pet(name='Loki', animal='cat', age=2, color='black', favorite_toy='tennis balls')]
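As an illustrative continuation, the parsed PetList is an ordinary Pydantic model, so the optional fields (color, favorite_toy) can be checked and the pets iterated over directly:

# pets is the PetList instance parsed in the previous example.
for pet in pets.pets:
    toy = pet.favorite_toy or 'no favorite toy'
    print(f'{pet.name}: a {pet.age}-year-old {pet.color or "unknown"} {pet.animal}, {toy}')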
2. Image extraction:
Structured outputs can also be used with vision models. For example, the following code uses llama3.2-vision to describe an image and return a structured output.
from typing import List, Literal, Optional

from ollama import chat
from pydantic import BaseModel

class Object(BaseModel):
    name: str
    confidence: float
    attributes: str

class ImageDescription(BaseModel):
    summary: str
    objects: List[Object]
    scene: str
    colors: List[str]
    time_of_day: Literal['Morning', 'Afternoon', 'Evening', 'Night']
    setting: Literal['Indoor', 'Outdoor', 'Unknown']
    text_content: Optional[str] = None

path = 'path/to/image.jpg'

response = chat(
    model='llama3.2-vision',
    format=ImageDescription.model_json_schema(),  # Pass in the schema for the response
    messages=[
        {
            'role': 'user',
            'content': 'Analyze this image and describe what you see, including any objects, the scene, colors and any text you can detect.',
            'images': [path],
        },
    ],
    options={'temperature': 0},  # Set temperature to 0 for more deterministic output
)

image_description = ImageDescription.model_validate_json(response.message.content)
print(image_description)
Example output:
summary='A palm tree on a sandy beach with blue water and sky.' objects=[Object(name='tree', confidence=0.9, attributes='palm tree'), Object(name='beach', confidence=1.0, attributes='sand')] scene='beach' colors=['blue', 'green', 'white'] time_of_day='Afternoon' setting='Outdoor' text_content=None
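Structured outputs constrain the shape of the reply, but validation can occasionally still fail (for example if generation is cut off). One simple way to handle this is to validate and retry; a minimal sketch, assuming the ImageDescription model from the example above (the retry loop and max_attempts value are illustrative, not part of Ollama's API):

from ollama import chat
from pydantic import ValidationError

# ImageDescription is the Pydantic model defined in the example above.
def describe_image(path: str, max_attempts: int = 3) -> ImageDescription:
    for _ in range(max_attempts):
        response = chat(
            model='llama3.2-vision',
            format=ImageDescription.model_json_schema(),
            messages=[{
                'role': 'user',
                'content': 'Analyze this image and describe what you see.',
                'images': [path],
            }],
            options={'temperature': 0},
        )
        try:
            return ImageDescription.model_validate_json(response.message.content)
        except ValidationError:
            continue  # the reply did not match the schema; ask again
    raise RuntimeError(f'No valid structured response after {max_attempts} attempts')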
3. OpenAI compatibility:
Structured outputs also work through Ollama's OpenAI-compatible API, using the OpenAI Python client pointed at the local Ollama server.
from openai import OpenAI
import openai
from pydantic import BaseModel

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

class Pet(BaseModel):
    name: str
    animal: str
    age: int
    color: str | None
    favorite_toy: str | None

class PetList(BaseModel):
    pets: list[Pet]

try:
    completion = client.beta.chat.completions.parse(
        temperature=0,
        model="llama3.1:8b",
        messages=[
            {"role": "user", "content": '''
                I have two pets.
                A cat named Luna who is 5 years old and loves playing with yarn. She has grey fur.
                I also have a 2 year old black cat named Loki who loves tennis balls.
            '''}
        ],
        response_format=PetList,
    )

    pet_response = completion.choices[0].message
    if pet_response.parsed:
        print(pet_response.parsed)
    elif pet_response.refusal:
        print(pet_response.refusal)
except Exception as e:
    if type(e) == openai.LengthFinishReasonError:
        print("Too many tokens: ", e)
    else:
        print(e)
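Here client.beta.chat.completions.parse has the OpenAI Python SDK turn the PetList model into a structured-output request and parse the reply back into a PetList instance, which is why completion.choices[0].message.parsed can be printed directly; if the model refuses, the refusal text is available on message.refusal instead.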
—
Planned improvements to structured outputs include:
• Exposing logits for controllable generation
• Improved performance and accuracy for structured outputs
• GPU acceleration for sampling
• Support for additional formats beyond JSON schema