How do the performance and reasoning ability of DeepSeek-V3-0324 (685B) hold up on an 8-GPU H20 server? Key points:
1. The 8x H20 server configuration and the DeepSeek-V3-0324 deployment
2. Performance comparison between DeepSeek-V3-0324 (685B) and DeepSeek-R1-AWQ (671B)
3. DeepSeek-V3-0324's benchmark scores on math problems
I recently deployed DeepSeek-R1-AWQ (671B) and then the latest DeepSeek-V3-0324 (685B) on an 8-GPU H20 machine, and measured both serving performance and math-benchmark scores. The server was provided by Volcengine (火山引擎). First, the machine configuration:
GPU:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 Driver Version: 535.161.08 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA H20 On | 00000000:65:02.0 Off | 0 |
| N/A 29C P0 71W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA H20 On | 00000000:65:03.0 Off | 0 |
| N/A 32C P0 72W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA H20 On | 00000000:67:02.0 Off | 0 |
| N/A 32C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA H20 On | 00000000:67:03.0 Off | 0 |
| N/A 30C P0 73W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA H20 On | 00000000:69:02.0 Off | 0 |
| N/A 30C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA H20 On | 00000000:69:03.0 Off | 0 |
| N/A 33C P0 74W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA H20 On | 00000000:6B:02.0 Off | 0 |
| N/A 33C P0 73W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 7 NVIDIA H20 On | 00000000:6B:03.0 Off | 0 |
| N/A 29C P0 75W / 500W | 0MiB / 97871MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
One pitfall here: the driver version above turned out to be problematic. It worked fine on an RTX 4090, but on the H20, DeepSeek-R1-AWQ crashed the moment inference started, no matter which configurations and software versions I tried. After switching to the driver NVIDIA's website recommends for the H20, Driver Version 550.144.03 (CUDA 12.4), everything worked without changing any other configuration.
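After reinstalling, the driver version can be confirmed per GPU with standard nvidia-smi query flags:

# Each of the 8 GPUs should report the recommended 550.144.03 driver
nvidia-smi --query-gpu=index,name,driver_version --format=csv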
GPU-to-GPU interconnect (P2P status matrix):
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 X OK OK OK OK OK OK OK
GPU1 OK X OK OK OK OK OK OK
GPU2 OK OK X OK OK OK OK OK
GPU3 OK OK OK X OK OK OK OK
GPU4 OK OK OK OK X OK OK OK
GPU5 OK OK OK OK OK X OK OK
GPU6 OK OK OK OK OK OK X OK
GPU7 OK OK OK OK OK OK OK X
Legend:
X = Self
OK = Status Ok
CNS = Chipset not supported
GNS = GPU not supported
TNS = Topology not supported
NS = Not supported
U = Unknown
Memory:
# free -g
total used free shared buff/cache available
Mem: 1929 29 1891 0 9 1892
Swap: 0 0 0
Disks:
vda 252:0 0 100G 0 disk
├─vda1 252:1 0 200M 0 part /boot/efi
└─vda2 252:2 0 99.8G 0 part /
nvme3n1 259:0 0 3.5T 0 disk
nvme2n1 259:1 0 3.5T 0 disk
nvme0n1 259:2 0 3.5T 0 disk
nvme1n1 259:3 0 3.5T 0 disk
OS:
# uname -a
Linux H20 5.4.0-162-generic #179-Ubuntu SMP Mon Aug 14 08:51:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
The inference service was started with vLLM v0.8.2, serving the two models one after the other:
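For reference, here is a minimal sketch of what each launch looks like with vLLM v0.8.2. The model paths, --max-model-len, and --gpu-memory-utilization values are my assumptions, not the exact commands used; only the port (7800) and the served model name (DeepSeek-R1) are taken from the benchmark command below.

# Sketch only: paths and tuning values are assumptions.
# DeepSeek-R1-AWQ (671B); vLLM picks up the AWQ quantization from the model config.
vllm serve /data/models/DeepSeek-R1-AWQ \
  --served-model-name DeepSeek-R1 \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.95 \
  --trust-remote-code \
  --port 7800

# DeepSeek-V3-0324 (685B, FP8 weights), served afterwards on the same port:
vllm serve /data/models/DeepSeek-V3-0324 \
  --served-model-name DeepSeek-V3 \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.95 \
  --trust-remote-code \
  --port 7800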
Launching the performance benchmark:
nohup python3 -u simple-bench-to-api.py --url http://localhost:7800/v1 \
--model DeepSeek-R1 \
--concurrencys 1,10,20,30,40,50 \
--prompt "Introduce the history of China" \
--max_tokens 100,1024,16384,32768,65536,131072 \
--api_key sk-xxx \
--duration_seconds 30 \
> benth-DeepSeek-R1-AWQ-8-H20.log 2>&1 &
This command sweeps max_tokens over 100, 1024, 16384, 32768, 65536, and 131072, and for each value runs batched tests at 1, 10, 20, ..., 50 concurrent requests; each max_tokens value produces one table across the concurrency levels (see the sketch after this paragraph). The load-test script simple-bench-to-api.py and the detailed meaning of its parameters are covered in the previous article, 《单卡4090上部署的DeepSeek-R1小模型的并发性能》 (concurrency performance of a small DeepSeek-R1 model on a single RTX 4090); grab it there if you need it.
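Conceptually, the sweep is equivalent to the following nested loop, where each outer iteration yields one results table (a sketch of the iteration order only, not of the script's measurement logic):

for mt in 100 1024 16384 32768 65536 131072; do
  for c in 1 10 20 30 40 50; do
    echo "run: max_tokens=${mt} concurrency=${c} duration=30s"
  done
done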
Load-test results:
DeepSeek-R1-AWQ load-test results:
----- max_tokens=100 load-test summary -----
A few of the metrics in these tables need explanation; for the meaning of each specific metric, see the previous article, 《单卡4090上部署的DeepSeek-R1小模型的并发性能》.
----- max_tokens=1024 load-test summary -----
----- max_tokens=16384 (16k) load-test summary -----
----- max_tokens=32768 (32k) load-test summary -----
----- max_tokens=65536 (64k) load-test summary -----
----- max_tokens=131072 (128k) load-test summary -----
DeepSeek-V3-0324 load-test results:
----- max_tokens=100 load-test summary -----
----- max_tokens=1024 load-test summary -----
----- max_tokens=16384 (16k) load-test summary -----
----- max_tokens=32768 (32k) load-test summary -----
----- max_tokens=65536 (64k) load-test summary -----
Peak resource usage during the load test:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H20 Off | 00000000:65:02.0 Off | 0 |
| N/A 39C P0 176W / 500W | 95096MiB / 97871MiB | 95% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA H20 Off | 00000000:65:03.0 Off | 0 |
| N/A 46C P0 184W / 500W | 95070MiB / 97871MiB | 23% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA H20 Off | 00000000:67:02.0 Off | 0 |
| N/A 45C P0 178W / 500W | 95070MiB / 97871MiB | 95% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA H20 Off | 00000000:67:03.0 Off | 0 |
| N/A 41C P0 180W / 500W | 95070MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA H20 Off | 00000000:69:02.0 Off | 0 |
| N/A 40C P0 180W / 500W | 95070MiB / 97871MiB | 95% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA H20 Off | 00000000:69:03.0 Off | 0 |
| N/A 45C P0 182W / 500W | 95070MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA H20 Off | 00000000:6B:02.0 Off | 0 |
| N/A 46C P0 184W / 500W | 95070MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA H20 Off | 00000000:6B:03.0 Off | 0 |
| N/A 40C P0 182W / 500W | 95078MiB / 97871MiB | 98% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
Peak KV cache usage:
INFO 03-31 23:22:50 [loggers.py:80] Avg prompt throughput: 45.0 tokens/s, Avg generation throughput: 166.9 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 7.7%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:00 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 350.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 7.7%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:10 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 355.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 15.4%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:20 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 360.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 15.4%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:30 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 355.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 23.2%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:40 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 355.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 30.9%, Prefix cache hit rate: 0.0%
INFO 03-31 23:23:50 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 355.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 30.9%, Prefix cache hit rate: 0.0%
INFO 03-31 23:24:00 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 360.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 38.6%, Prefix cache hit rate: 0.0%
INFO 03-31 23:24:10 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 350.0 tokens/s, Running: 50 reqs, Waiting: 0 reqs, GPU KV cache usage: 38.6%, Prefix cache hit rate: 0.0%
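Rather than eyeballing the log, the peak KV cache percentage can be pulled out with a one-liner (assumes GNU grep; vllm-server.log is a placeholder for wherever the server output was redirected):

# Highest "GPU KV cache usage" value seen in the vLLM server log
grep -oP 'GPU KV cache usage: \K[0-9.]+(?=%)' vllm-server.log | sort -n | tail -1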
I then used lighteval (GitHub - huggingface/lighteval: Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends) to score both DeepSeek-R1-AWQ and DeepSeek-V3-0324, as deployed on the 8x H20 box, on math test sets. I modified a small amount of lighteval code so that instead of launching model inference itself, it calls the OpenAI API of the already-deployed model. The test results are below.
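As a quick sanity check before pointing lighteval at the endpoint, the OpenAI-compatible API can be exercised directly (the model name and placeholder API key are the ones from the benchmark command above):

curl http://localhost:7800/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{"model": "DeepSeek-R1", "messages": [{"role": "user", "content": "What is 1+1?"}], "max_tokens": 32}'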
The modified evaluation command:
(benchmark) root@H20:/data/code/lighteval# lighteval endpoint litellm model_args="http://localhost:7800" tasks="lighteval|math_500|0|0"
Evaluation results:
| Task |Version| Metric |Value| |Stderr|
|--------------------|------:|----------------|----:|---|-----:|
|all | |extractive_match|0.818|± |0.0173|
|lighteval:math_500:0| 1|extractive_match|0.818|± |0.0173|
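A note on the Stderr column: the values appear to match the binomial sample standard error, stderr = sqrt(p(1-p)/(n-1)). For p = 0.818 over n = 500 problems, sqrt(0.818 × 0.182 / 499) ≈ 0.0173, exactly the value reported; for the 20-sample runs below, the same formula gives the much wider ±0.05 and ±0.1124, so those scores are rough estimates.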
The modified evaluation command:
(benchmark) root@H20:/data/code/lighteval# lighteval endpoint litellm model_args="http://localhost:7800" tasks="lighteval|math_500|0|0" --max-samples 20
To save time, only 20 problems were used.
Evaluation results:
| Task |Version| Metric |Value| |Stderr|
|--------------------|------:|----------------|----:|---|-----:|
|all | |extractive_match| 0.95|± | 0.05|
|lighteval:math_500:0| 1|extractive_match| 0.95|± | 0.05|
Peak resource usage during this test:
|=========================================+========================+======================|
| 0 NVIDIA H20 Off | 00000000:65:02.0 Off | 0 |
| N/A 36C P0 159W / 500W | 97048MiB / 97871MiB | 96% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA H20 Off | 00000000:65:03.0 Off | 0 |
| N/A 42C P0 167W / 500W | 97022MiB / 97871MiB | 91% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA H20 Off | 00000000:67:02.0 Off | 0 |
| N/A 40C P0 160W / 500W | 97022MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA H20 Off | 00000000:67:03.0 Off | 0 |
| N/A 38C P0 161W / 500W | 97022MiB / 97871MiB | 95% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA H20 Off | 00000000:69:02.0 Off | 0 |
| N/A 37C P0 161W / 500W | 97022MiB / 97871MiB | 21% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA H20 Off | 00000000:69:03.0 Off | 0 |
| N/A 41C P0 162W / 500W | 97022MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA H20 Off | 00000000:6B:02.0 Off | 0 |
| N/A 42C P0 164W / 500W | 97022MiB / 97871MiB | 97% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA H20 Off | 00000000:6B:03.0 Off | 0 |
| N/A 37C P0 163W / 500W | 97030MiB / 97871MiB | 95% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
The modified evaluation command:
(benchmark) root@H20:/data/code/lighteval# lighteval endpoint litellm model_args="http://localhost:7800" tasks="lighteval|aime25|0|0" --max-samples 20
To save time, only 20 problems were used.
Evaluation results:
| Task |Version| Metric |Value| |Stderr|
|------------------|------:|----------------|----:|---|-----:|
|all | |extractive_match| 0.4|± |0.1124|
|lighteval:aime25:0| 1|extractive_match| 0.4|± |0.1124|
aime25 is fairly new, but this score appears lower than evaluation scores others have published. It may be an issue with the evaluation method, or context truncation during evaluation may have affected the results.