Get access to Gemma

Gemma models are hosted on Kaggle. To use Gemma, request access on Kaggle:
Install dependencies

# Install Keras 3 last. See https://keras.io/getting_started/ for more details.
!pip install -q -U keras-nlp
!pip install -q -U "keras>=3"
Select a backend
import os
os.environ["KERAS_BACKEND"] = "jax"  # Or "torch" or "tensorflow".
# Avoid memory fragmentation on JAX backend.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"]="1.00"
Import packages
import keras
import keras_nlp
template = "Instruction:\n{question}\n\nResponse:\n{answer}"
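The same template serves both training and inference: training examples fill in both fields, while inference prompts leave `answer` empty so the model completes the Response section itself. A minimal sketch (the Rust question and answer here are illustrative, not from the tutorial's dataset):

```python
# The same prompt template used throughout this tutorial.
template = "Instruction:\n{question}\n\nResponse:\n{answer}"

# A training example fills both fields.
train_example = template.format(
    question="What does `let` do in Rust?",
    answer="`let` introduces a variable binding.",
)

# An inference prompt leaves the answer empty for the model to complete.
prompt = template.format(question="What does `let` do in Rust?", answer="")
print(prompt)
```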
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")
gemma_lm.summary()
Preprocessor: "gemma_causal_lm_preprocessor"
Model: "gemma_causal_lm"
Total params: 2,614,341,888 (9.74 GB)
Trainable params: 2,614,341,888 (9.74 GB)
Non-trainable params: 0 (0.00 B)
Query the Rust-related knowledge covered in the book
prompt = template.format(
    question="How can I overload the `+` operator for arithmetic addition in Rust?",
    answer="",
)
print(gemma_lm.generate(prompt, max_length=256))
Instruction:
How can I overload the `+` operator for arithmetic addition in Rust?

Response:
```rust
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    fn new(x: f64, y: f64) -> Self {
        Point { x, y }
    }

    fn add(self, other: Point) -> Point {
        Point {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

fn main() {
    let p1 = Point::new(1.0, 2.0);
    let p2 = Point::new(3.0, 4.0);
    let result = p1 + p2;
    println!("Result: ({}, {})", result.x, result.y);
}
```
**Explanation:**
1. **Struct Definition:** We define a `Point` struct to represent points in 2D space.
2. **`add` Method:** We implement the `+` operator for the `Point`
Load the dataset
import json
data = []
with open('/kaggle/input/rust-official-book/dataset.jsonl', encoding='utf-8') as file:
for line in file:
features = json.loads(line)
# Format the entire example as a single string.
data.append(template.format(**features))
# Only use 1000 training examples, to keep it fast.
# data = data[:1000]
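The loop above expects every line of `dataset.jsonl` to be a JSON object whose `question` and `answer` keys match the template's placeholders. A self-contained sketch of that parsing step, using a hypothetical record rather than one from the actual dataset:

```python
import json

# Hypothetical JSONL line in the format the loading loop expects.
line = '{"question": "What is ownership in Rust?", "answer": "Ownership is Rust\'s model for managing memory without a garbage collector."}'

template = "Instruction:\n{question}\n\nResponse:\n{answer}"

# Parse the line and splice the fields into the prompt template,
# exactly as the dataset-loading loop does for each line.
features = json.loads(line)
example = template.format(**features)
print(example)
```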
# Enable LoRA for the model and set the LoRA rank to 4.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.summary()
Preprocessor: "gemma_causal_lm_preprocessor"
Model: "gemma_causal_lm"
Total params: 2,617,270,528 (9.75 GB)
Trainable params: 2,928,640 (11.17 MB)
Non-trainable params: 2,614,341,888 (9.74 GB)
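The drop from 2.6 B to about 2.9 M trainable parameters follows from how LoRA works: each adapted weight matrix of shape (d_out, d_in) is frozen, and two small factors A of shape (rank, d_in) and B of shape (d_out, rank) are trained in its place, so each matrix contributes only rank · (d_in + d_out) trainable parameters. A generic sketch of that count (the 2304 × 2304 projection below is illustrative, not one of Gemma's actual layer shapes):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # LoRA trains B @ A in place of a frozen (d_out x d_in) matrix,
    # where A is (rank x d_in) and B is (d_out x rank).
    return rank * d_in + d_out * rank

# Illustrative example: a square 2304 x 2304 projection with rank 4.
full = 2304 * 2304                          # params if the matrix itself trained
lora = lora_param_count(2304, 2304, rank=4)  # params LoRA actually trains
print(full, lora, f"{lora / full:.4%}")
```

Summed over all adapted matrices, this low-rank substitution is what produces the roughly thousand-fold reduction shown in the summary above.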
# Limit the input sequence length to 512 (to control memory usage).
gemma_lm.preprocessor.sequence_length = 512
# Use AdamW (a common optimizer for transformer models).
optimizer = keras.optimizers.AdamW(
learning_rate=5e-5,
weight_decay=0.01,
)
# Exclude layernorm and bias terms from decay.
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])
gemma_lm.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=optimizer,
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
Query the Rust-related knowledge covered in the book (after fine-tuning)
prompt = template.format(
    question="How can I overload the `+` operator for arithmetic addition in Rust?",
    answer="",
)
print(gemma_lm.generate(prompt, max_length=256))
Note that this tutorial fine-tunes on a small, rough dataset, trains for only one epoch, and uses a low LoRA rank value. To get better responses from the fine-tuned model, you can try increasing the size of the fine-tuning dataset, training for more epochs, or raising the LoRA rank.