After looking at what may be Alibaba's most generous open-source project, FunAudioLLM, I had an idea: why not turn it into a voice-in, voice-out assistant?
That would give me the foundation of a J.A.R.V.I.S.: the ability to hold a spoken conversation. FunAudioLLM provides both a speech-input model and a speech-output model (and the output side even supports voice customization).
The brain is covered too: Alibaba's other generous open-source project, Qwen2, one of the strongest contenders among open models.
So, look at the setup below; isn't it just about perfect?
Let's see how it performs. This is a fully web-based version; of course, reworking it into another kind of app wouldn't be hard. With GPT-style assistants around these days, wrapping it in an app is easy!
Here's a sample of the final generated audio to listen to:
Installation
The process is less painful than you might expect: install the runtime environments for the three base models.
For the first one, I chose CosyVoice.
You can follow the official installation instructions as-is, but given the network situation inside mainland China, a few steps may need special handling.
# Believe it or not, these steps need a proxy. What did programmers ever do to deserve this?
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you failed to clone the submodule due to network failures, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
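If GitHub is unreachable from your network, one common workaround (my assumption, not something the official docs prescribe) is to route git through a local proxy:
# assumes a SOCKS5 proxy listening on 127.0.0.1:1080; adjust to your own setup
git config --global http.https://github.com.proxy socks5://127.0.0.1:1080
# remove it afterwards with:
# git config --global --unset http.https://github.com.proxy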
Installing conda needs no explanation; if you can't use conda, you probably wouldn't be reading this article anyway.
Next, I'll assume you're on Ubuntu; mine is 22.04.
conda create -n cosyvoice python=3.8
conda activate cosyvoice
# pynini is required by WeTextProcessing; use conda to install it, as it can be executed on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
At this point the environment is definitely installed. Time to download the models; the download mirrors ModelScope provides inside China really are fast.
You can run the python command and execute the following Python code in the interactive session.
# Model download via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
Test it
Save the code below as a Python file and you can run it directly. First, one important step:
export PYTHONPATH=third_party/Matcha-TTS  # this step matters
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT')
# sft usage
print(cosyvoice.list_avaliable_spks())
output = cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女')
torchaudio.save('sft.wav', output['tts_speech'], 22050)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
output = cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k)
torchaudio.save('zero_shot.wav', output['tts_speech'], 22050)
# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
output = cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k)
torchaudio.save('cross_lingual.wav', output['tts_speech'], 22050)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
output = cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.')
torchaudio.save('instruct.wav', output['tts_speech'], 22050)
If you end up with the generated audio files, you've already won more than half the battle.
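Beyond listening to them, here is a quick sanity-check sketch (the file names come from the test script above) that confirms each file loads and prints its duration:
import torchaudio

# file names match the outputs of the test script above
for name in ['sft.wav', 'zero_shot.wav', 'cross_lingual.wav', 'instruct.wav']:
    info = torchaudio.info(name)
    print(f"{name}: {info.num_frames / info.sample_rate:.2f}s at {info.sample_rate} Hz")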
The second is SenseVoice
If CosyVoice went smoothly for you, this next part is very simple.
git clone https://github.com/FunAudioLLM/SenseVoice.git
cd SenseVoice
pip install -r requirements.txt  # add -i https://pypi.tuna.tsinghua.edu.cn/simple if you want it faster
Next, run the following test code:
from model import SenseVoiceSmall
model_dir = "iic/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir)
res = m.inference(
    data_in="https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav",
    language="zh",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    **kwargs,
)
print(res)
If it runs correctly, you'll get output like this:
([{'key': 'wav_file_tmp_name', 'text': '<|zh|><|NEUTRAL|><|Speech|><|woitn|>欢迎大家来体验打摩院推出的语音识别模型'}], {'load_data': '0.338', 'extract_feat': '0.020', 'batch_data_time': 5.58})
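The text field carries rich tags for language, emotion, and audio events. If you only want the plain transcript, the funasr package (pulled in by requirements.txt) ships a post-processing helper for exactly this; a minimal sketch:
from funasr.utils.postprocess_utils import rich_transcription_postprocess

# strips the <|zh|><|NEUTRAL|>... tags from the raw result above
text = res[0][0]['text']
print(rich_transcription_postprocess(text))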
The third is Ollama + Qwen2
You've now verified working STT and TTS. Next, naturally, comes the brain: Qwen2. For convenience as a service, and to keep the model swappable, I picked a popular LLM serving framework: Ollama, which serves the llama family and many other open models (Qwen2 included) behind a simple HTTP API.
# A Linux-centric install; assumes the NVIDIA drivers are already in place.
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
Next, create the file /etc/systemd/system/ollama.service with the following content:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3

[Install]
WantedBy=default.target
Reload and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable ollama
Then start it and run the model; you'll drop into a simple chat prompt. If it works, it works.
sudo systemctl start ollama
ollama run qwen2:7b
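Ollama also exposes an HTTP API on port 11434, which is what the glue script below relies on. A quick check from the shell (the prompt text is just an example):
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2:7b",
  "prompt": "Introduce yourself in one sentence.",
  "stream": false
}'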
Now you can confirm that all the base components are in place!
Calling the interfaces
I wrote a simple Python file that wires all of them together.
import sys
sys.path.append("/data/home/todo/SenseVoice")
sys.path.append("/data/home/todo/CosyVoice")

import requests
import torchaudio
from model import SenseVoiceSmall
from cosyvoice.cli.cosyvoice import CosyVoice

# Paths and URLs
audio_file_path = "/data/home/todo/aliAllInOne/asr_example_zh.wav"  # Replace with the path to your audio file
ollama_url = "http://localhost:11434/api/generate"
cosyvoice_model_path = '/data/home/todo/CosyVoice/pretrained_models/CosyVoice-300M-SFT'
generated_audio_path = "/data/home/todo/aliAllInOne/generated_audio.wav"

# Load the SenseVoiceSmall model
# model_dir = "iic/SenseVoiceSmall"  # the ModelScope ID also works; here the local cache path is used
model_dir = "/home/todo/.cache/modelscope/hub/iic/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir)

# Step 1: Perform inference using SenseVoiceSmall
def call_sense_voice_small(audio_path, language="zh"):
    res = m.inference(
        data_in=audio_path,
        language=language,
        use_itn=False,
        **kwargs,
    )
    # return res  # uncomment to get the full result structure instead of just the text
    return res[0][0]['text']

# Step 2: Call the Ollama service
def call_ollama_service(content):
    response = requests.post(
        ollama_url,
        json={
            "model": "qwen2:7b",
            "prompt": content,
            "format": "json",  # note: this asks the model to answer in JSON
            "stream": False
        }
    )
    print(response)
    print(response.status_code, type(response.status_code))
    if response.status_code == 200:
        return response.json()["response"]
    else:
        return "Error: Unable to get response from Ollama service."

# Step 3: Generate speech using CosyVoice
def generate_speech_with_cosyvoice(text, speaker="中文女"):
    cosyvoice = CosyVoice(cosyvoice_model_path)
    output = cosyvoice.inference_sft(text, speaker)
    torchaudio.save(generated_audio_path, output['tts_speech'], 22050)

# Main function to execute the steps
def main():
    # Step 1: Call SenseVoiceSmall
    sense_voice_result = call_sense_voice_small(audio_file_path)
    print("SenseVoiceSmall Result:", sense_voice_result)

    # Step 2: Call Ollama Service
    ollama_result = call_ollama_service(sense_voice_result)
    print("Ollama Service Result:", ollama_result)

    # Step 3: Generate and save the speech
    generate_speech_with_cosyvoice(ollama_result)
    print(f"Generated audio saved to {generated_audio_path}")

if __name__ == "__main__":
    main()
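To run it, remember the PYTHONPATH note from the CosyVoice test above; the script name here is hypothetical, use whatever you saved the file as:
export PYTHONPATH=/data/home/todo/CosyVoice/third_party/Matcha-TTS
python voice_chat.py  # hypothetical file name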
What does your generated audio sound like? The clip below is what my program produced!
The UI
I'll leave that part to you; after all, I've already built the foundation.
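The article leaves the framework choice open; as one possibility (my assumption, not the author's pick), here is a minimal web front-end sketch using Gradio (assuming Gradio 4.x) that records from the microphone and reuses the three functions from the script above:
import gradio as gr

# reuses call_sense_voice_small, call_ollama_service,
# generate_speech_with_cosyvoice, and generated_audio_path from the script above
def voice_chat(audio_path):
    text_in = call_sense_voice_small(audio_path)   # Step 1: speech -> text
    reply = call_ollama_service(text_in)           # Step 2: text -> LLM reply
    generate_speech_with_cosyvoice(reply)          # Step 3: reply -> speech
    return reply, generated_audio_path

demo = gr.Interface(
    fn=voice_chat,
    inputs=gr.Audio(sources=["microphone"], type="filepath"),
    outputs=[gr.Textbox(label="Reply"), gr.Audio(label="Spoken reply")],
    title="FunAudioLLM + Qwen2 voice chat",
)
demo.launch()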