
.net-core - WebRTC and Asp.Net Core


I want to record an audio stream from my Angular web application to my Asp.net Core API.

I think SignalR, with its WebSockets transport, is a good way to achieve this.

With this TypeScript code I can get the MediaStream:

import { HubConnection } from '@aspnet/signalr';

[...]

private stream: MediaStream;
private connection: webkitRTCPeerConnection;
@ViewChild('video') video;

[...]

navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
        console.trace('Received local stream');
        this.video.srcObject = stream;
        this.stream = stream;

        const hubConnection = new HubConnection('[MY_API_URL]/webrtc');
        hubConnection.send("SendStream", stream);
    })
    .catch(function (e) {
        console.error('getUserMedia() error: ' + e.message);
    });

I handle the stream in my .NET Core API with:
public class MyHub : Hub
{
    public void SendStream(object o)
    {
    }
}

But when I cast o to System.IO.Stream, I get null.

When I read the WebRTC documentation, I saw information about RTCPeerConnection, IceConnection... Do I need those?

How can I stream the audio from the web client to the Asp.Net Core API using SignalR? Any documentation or GitHub examples?

Thanks for your help.

Best Answer

I found a way to access the microphone stream and transmit it to the server: instead of sending the MediaStream object itself (which SignalR cannot serialize), I capture the audio with the Web Audio API and send it chunk by chunk as 16-bit PCM. Here is the code:

private audioCtx: AudioContext;
private stream: MediaStream;

convertFloat32ToInt16(buffer: Float32Array) {
    let l = buffer.length;
    let buf = new Int16Array(l);
    while (l--) {
        // Clamp to [-1, 1] before scaling to the 16-bit range
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
    }
    return buf.buffer;
}

startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(stream => {
            this.audioCtx = new AudioContext();
            this.audioCtx.onstatechange = (state) => { console.log(state); }

            var scriptNode = this.audioCtx.createScriptProcessor(4096, 1, 1);
            scriptNode.onaudioprocess = (audioProcessingEvent) => {
                var inputBuffer = audioProcessingEvent.inputBuffer;
                // Loop through the input channels (in this case there is only one)
                for (var channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
                    var chunk = inputBuffer.getChannelData(channel);
                    // Convert to 16-bit integers because endianness does matter;
                    // MySignalRService wraps the SignalR HubConnection
                    this.MySignalRService.send("SendStream", this.convertFloat32ToInt16(chunk));
                }
            }

            var source = this.audioCtx.createMediaStreamSource(stream);
            source.connect(scriptNode);
            scriptNode.connect(this.audioCtx.destination);

            this.stream = stream;
        })
        .catch(function (e) {
            console.error('getUserMedia() error: ' + e.message);
        });
}

stopRecording() {
    try {
        let stream = this.stream;
        stream.getAudioTracks().forEach(track => track.stop());
        stream.getVideoTracks().forEach(track => track.stop());
        this.audioCtx.close();
    }
    catch (error) {
        console.error('stopRecording() error: ' + error);
    }
}
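
On the server side, the corresponding hub method can now take each chunk as a byte array rather than an object. This is a minimal sketch, not the original poster's code; it assumes the binary payload reaches the server intact (for example via the MessagePack hub protocol) and uses a hypothetical in-memory buffer per connection:

using System.Collections.Concurrent;
using System.IO;
using Microsoft.AspNetCore.SignalR;

public class MyHub : Hub
{
    // Hypothetical per-connection buffer; a real application would
    // stream the chunks to disk or a processing pipeline instead.
    private static readonly ConcurrentDictionary<string, MemoryStream> Buffers =
        new ConcurrentDictionary<string, MemoryStream>();

    public void SendStream(byte[] chunk)
    {
        var buffer = Buffers.GetOrAdd(Context.ConnectionId, _ => new MemoryStream());
        buffer.Write(chunk, 0, chunk.Length);
    }
}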

The next step is to convert the Int16Array data into a WAV file.
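
A minimal sketch of that conversion in C#, assuming 16-bit mono PCM at a known sample rate (an AudioContext typically captures at 44100 or 48000 Hz; the class and method names here are illustrative):

using System.IO;
using System.Text;

public static class WavWriter
{
    // Wraps raw 16-bit mono PCM in a standard 44-byte RIFF/WAVE header.
    public static void WritePcmAsWav(string path, byte[] pcm, int sampleRate)
    {
        const short channels = 1;
        const short bitsPerSample = 16;
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = (short)(channels * bitsPerSample / 8);

        using (var writer = new BinaryWriter(File.Create(path)))
        {
            writer.Write(Encoding.ASCII.GetBytes("RIFF"));
            writer.Write(36 + pcm.Length);   // total file size minus 8
            writer.Write(Encoding.ASCII.GetBytes("WAVE"));
            writer.Write(Encoding.ASCII.GetBytes("fmt "));
            writer.Write(16);                // PCM fmt chunk size
            writer.Write((short)1);          // audio format: uncompressed PCM
            writer.Write(channels);
            writer.Write(sampleRate);
            writer.Write(byteRate);
            writer.Write(blockAlign);
            writer.Write(bitsPerSample);
            writer.Write(Encoding.ASCII.GetBytes("data"));
            writer.Write(pcm.Length);
            writer.Write(pcm);
        }
    }
}

The 44-byte RIFF header is all that distinguishes a .wav file from the raw PCM, so the accumulated per-connection buffer from the hub sketch above can be passed straight in.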


Note: I have not included the code for configuring SignalR, as that is not the purpose here.
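
For reference, a minimal wiring sketch, assuming the 2.x-era ASP.NET Core SignalR that matches the @aspnet/signalr client above and the /webrtc route from the question:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Expose MyHub at the same route the client connects to
        app.UseSignalR(routes => routes.MapHub<MyHub>("/webrtc"));
    }
}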

A similar question about .net-core - WebRTC and Asp.Net Core can be found on Stack Overflow: https://stackoverflow.com/questions/50220281/
