stream - C# - Capture an RTP stream and send it to speech recognition


What I'm trying to accomplish:

  • Capture an RTP stream in C#
  • Forward that stream to System.Speech.SpeechRecognitionEngine

I'm building a Linux-based robot that takes microphone input and sends it to a Windows machine, which processes the audio with Microsoft Speech Recognition and sends the response back to the robot. The robot may be hundreds of miles from the server, so I would like to do this over the Internet.

What I've done so far:

  • Got the robot to generate an RTP stream encoded as MP3 (other formats are available) using FFmpeg (the robot runs on a Raspberry Pi running Arch Linux)
  • Captured the stream on the client machine using the VLC ActiveX control
  • Found that the SpeechRecognitionEngine has these input methods available:
      • recognizer.SetInputToWaveStream()
      • recognizer.SetInputToAudioStream()
      • recognizer.SetInputToDefaultAudioDevice()
  • Looked at using JACK to route the application's output to line-in, but was thoroughly confused by it.

What I need help with:

I'm stuck on how to actually get the stream from VLC into the SpeechRecognitionEngine. VLC doesn't expose the stream at all. Is there a way to capture the stream and pass that stream object to the SpeechRecognitionEngine? Or is RTP not the right solution here?

Thanks in advance for your help.

Best Answer

After a lot of effort, I finally got the Microsoft SpeechRecognitionEngine to accept a WAVE audio stream. Here's the process:

On the Pi, I have ffmpeg running. I stream the audio with this command:

    ffmpeg -ac 1 -f alsa -i hw:1,0 -ar 16000 -acodec pcm_s16le -f rtp rtp://XXX.XXX.XXX.XXX:1234

On the server side, I create a UdpClient and listen on port 1234. I receive the packets on a separate thread. First, I strip off the RTP header (header format explained here) and write the payload to a special stream. I had to use the SpeechStreamer class described in Sean's response to get the SpeechRecognitionEngine working; it does not work with a standard MemoryStream.
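For reference, a minimal sketch of the idea behind such a stream is shown below. This is not the SpeechStreamer from the linked answer; BlockingAudioStream is a hypothetical stand-in that illustrates the key point: a Stream whose Read blocks until enough audio has been written, so the recognizer never sees an end-of-stream the way it does with a plain MemoryStream.

    // Hypothetical sketch of a blocking audio stream (stand-in for SpeechStreamer).
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;

    public class BlockingAudioStream : Stream
    {
        private readonly Queue<byte> buffer = new Queue<byte>();
        private readonly object sync = new object();

        public override bool CanRead  { get { return true; } }
        public override bool CanWrite { get { return true; } }
        public override bool CanSeek  { get { return false; } }
        public override long Length   { get { throw new NotSupportedException(); } }
        public override long Position { get { return 0; } set { } }

        // Called by the RTP receiver thread with the decoded audio payload
        public override void Write(byte[] data, int offset, int count)
        {
            lock (sync)
            {
                for (int i = 0; i < count; i++)
                    buffer.Enqueue(data[offset + i]);
                Monitor.PulseAll(sync);   // wake up a blocked reader
            }
        }

        // Called by the recognizer; blocks until the requested bytes are available
        // so a short read is never mistaken for the end of the stream
        public override int Read(byte[] data, int offset, int count)
        {
            lock (sync)
            {
                while (buffer.Count < count)
                    Monitor.Wait(sync);
                for (int i = 0; i < count; i++)
                    data[offset + i] = buffer.Dequeue();
                return count;
            }
        }

        public override void Flush() { }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
    }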

The only thing I had to do on the speech recognition side was set the input to the audio stream instead of the default audio device:

    recognizer.SetInputToAudioStream(rtpClient.AudioStream,
        new SpeechAudioFormatInfo(WAVFile.SAMPLE_RATE, AudioBitsPerSample.Sixteen, AudioChannel.Mono));

I haven't done any extensive testing yet (i.e. letting it stream for days and seeing if it still works), but I'm able to save off the audio sample in the SpeechRecognized handler and it sounds great. I'm using a sample rate of 16 kHz. I may drop it down to 8 kHz to reduce the amount of data transferred, but I'll worry about that once it becomes a problem.

I should also mention that the response is extremely fast. I can speak an entire sentence and get a response in less than a second. The RTP connection seems to add very little overhead to the process. I'll have to try a benchmark and compare it with just using microphone input.

EDIT: Here is my RTPClient class.
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;
    // SpeechStreamer is the helper class from the answer linked above.

    /// <summary>
    /// Connects to an RTP stream and listens for data
    /// </summary>
    public class RTPClient
    {
        private const int AUDIO_BUFFER_SIZE = 65536;

        private UdpClient client;
        private IPEndPoint endPoint;
        private SpeechStreamer audioStream;
        private bool writeHeaderToConsole = false;
        private bool listening = false;
        private int port;
        private Thread listenerThread;

        /// <summary>
        /// Returns a reference to the audio stream
        /// </summary>
        public SpeechStreamer AudioStream
        {
            get { return audioStream; }
        }

        /// <summary>
        /// Gets whether the client is listening for packets
        /// </summary>
        public bool Listening
        {
            get { return listening; }
        }

        /// <summary>
        /// Gets the port the RTP client is listening on
        /// </summary>
        public int Port
        {
            get { return port; }
        }

        /// <summary>
        /// RTP client for receiving an RTP stream containing a WAVE audio stream
        /// </summary>
        /// <param name="port">The port to listen on</param>
        public RTPClient(int port)
        {
            Console.WriteLine(" [RTPClient] Loading...");

            this.port = port;

            // Initialize the audio stream that will hold the data
            audioStream = new SpeechStreamer(AUDIO_BUFFER_SIZE);

            Console.WriteLine(" Done");
        }

        /// <summary>
        /// Creates a connection to the RTP stream
        /// </summary>
        public void StartClient()
        {
            // Create a new UDP client. The IP end point tells us which IP is sending the data
            client = new UdpClient(port);
            endPoint = new IPEndPoint(IPAddress.Any, port);

            listening = true;
            listenerThread = new Thread(ReceiveCallback);
            listenerThread.Start();

            Console.WriteLine(" [RTPClient] Listening for packets on port " + port + "...");
        }

        /// <summary>
        /// Tells the UDP client to stop listening for packets.
        /// </summary>
        public void StopClient()
        {
            // Set the flag to false to stop the packet-receiving loop
            listening = false;
            Console.WriteLine(" [RTPClient] Stopped listening on port " + port);
        }

        /// <summary>
        /// Handles the receiving of UDP packets from the RTP stream
        /// </summary>
        private void ReceiveCallback()
        {
            // Keep looking for the next packet
            while (listening)
            {
                // Receive a packet
                byte[] packet = client.Receive(ref endPoint);

                // Decode the header of the packet
                int version = GetRTPHeaderValue(packet, 0, 1);
                int padding = GetRTPHeaderValue(packet, 2, 2);
                int extension = GetRTPHeaderValue(packet, 3, 3);
                int csrcCount = GetRTPHeaderValue(packet, 4, 7);
                int marker = GetRTPHeaderValue(packet, 8, 8);
                int payloadType = GetRTPHeaderValue(packet, 9, 15);
                int sequenceNum = GetRTPHeaderValue(packet, 16, 31);
                int timestamp = GetRTPHeaderValue(packet, 32, 63);
                int ssrcId = GetRTPHeaderValue(packet, 64, 95);

                if (writeHeaderToConsole)
                {
                    Console.WriteLine("{0} {1} {2} {3} {4} {5} {6} {7} {8}",
                        version,
                        padding,
                        extension,
                        csrcCount,
                        marker,
                        payloadType,
                        sequenceNum,
                        timestamp,
                        ssrcId);
                }

                // Write the payload (everything after the fixed 12-byte RTP header) to the audio stream
                audioStream.Write(packet, 12, packet.Length - 12);
            }
        }

        /// <summary>
        /// Grabs a value from the RTP header in big-endian format
        /// </summary>
        /// <param name="packet">The RTP packet</param>
        /// <param name="startBit">Start bit of the data value</param>
        /// <param name="endBit">End bit of the data value</param>
        /// <returns>The value</returns>
        private int GetRTPHeaderValue(byte[] packet, int startBit, int endBit)
        {
            int result = 0;

            // Number of bits in the value
            int length = endBit - startBit + 1;

            // Values in the RTP header are big-endian, so these conversions are needed
            for (int i = startBit; i <= endBit; i++)
            {
                int byteIndex = i / 8;
                int bitShift = 7 - (i % 8);
                result += ((packet[byteIndex] >> bitShift) & 1) * (int)Math.Pow(2, length - i + startBit - 1);
            }
            return result;
        }
    }
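Below is a minimal sketch of how the class above can be wired into the recognizer. The dictation grammar, the console event handler, and the literal 16000 (standing in for WAVFile.SAMPLE_RATE) are illustrative assumptions, not part of the original answer:

    using System;
    using System.Speech.AudioFormat;
    using System.Speech.Recognition;

    class Program
    {
        static void Main()
        {
            // Start receiving the RTP stream on port 1234
            var rtpClient = new RTPClient(1234);
            rtpClient.StartClient();

            var recognizer = new SpeechRecognitionEngine();
            recognizer.LoadGrammar(new DictationGrammar());   // illustrative grammar
            recognizer.SpeechRecognized += (sender, e) =>
                Console.WriteLine("Recognized: " + e.Result.Text);

            // Feed the recognizer from the RTP audio stream instead of the default device.
            // 16000 Hz / 16-bit / mono matches the ffmpeg command shown above.
            recognizer.SetInputToAudioStream(
                rtpClient.AudioStream,
                new SpeechAudioFormatInfo(16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));

            recognizer.RecognizeAsync(RecognizeMode.Multiple);

            Console.WriteLine("Press Enter to stop.");
            Console.ReadLine();

            recognizer.RecognizeAsyncStop();
            rtpClient.StopClient();
        }
    }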

This article, "stream - C# - Capture an RTP stream and send it to speech recognition", is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/15886888/
