
ios - Streaming from iPhone


I need to stream audio from the microphone to an HTTP server.
These are the recording settings I need:

NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatULaw], AVFormatIDKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey, // was 44100.0
    [NSData dataWithBytes:&acl length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
    nil];

This is what the API states for live encoding:

Send a continuous stream of audio to the currently viewed camera. Audio needs to be encoded at G711 mu-law at 64 kbit/s for transfer to the Axis camera at the bedside. Send (this should be a POST URL in SSL to the connected server):

POST /transmitaudio?id=
Content-type: audio/basic
Content-Length: 99999 (length is ignored)
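
For reference, the numbers here are self-consistent: G.711 mu-law stores 8 bits per sample, so 8000 samples/s x 8 bits = 64 kbit/s, which is exactly the rate the camera asks for. The AVEncoderBitRateKey in the settings above is therefore likely redundant for a fixed-rate codec like mu-law.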

Below is a list of the links I have tried to use.

LINK - (SO) Basic explanation that only Audio Units and Audio Queues allow nsdata as output when recording via the microphone | not an example, but a good definition of what is needed (Audio Queues or Audio Units)

LINK - (SO) Audio callback example | contains only the callback

LINK - (SO) RemoteIO example | no start/stop functions, and it is for saving to a file

LINK - (SO) RemoteIO example | unanswered, doesn't work

LINK - (SO) Basic recording example | good example, but it records to a file

LINK - (SO) Question that led me to the InMemoryAudioFile class (couldn't get it working) | followed the link to InMemoryAudioFile (or something similar) but could not get it working.

LINK - (SO) More Audio Unit and RemoteIO examples/questions | got this one working, but once again there is no stop function, and even when I tried to figure out what the call was and got it to stop, it still did not seem to transmit the audio to the server.

LINK - Decent RemoteIO and Audio Queue example, but | another good example, and I almost got it working, but there were some problems with the code (the compiler thought it was not Obj-C++), and once again I had no idea how to get the audio "data" from it instead of from a file.

LINK - Apple docs for Audio Queues | had framework problems. Worked through them (see the question below), but in the end could not get it working; I probably did not give this as much time as the others, though, and maybe I should have.

LINK - (SO) Questions I asked while trying to implement audio queues/units | not an example

LINK - (SO) Another RemoteIO example | another good example, but I could not figure out how to get it to data instead of a file.

LINK - Also looks interesting, circular buffers | could not figure out how to combine this with the audio callback

This is the class I am currently trying to stream from. It seems to work, although there is static coming out of the speaker on the receiving end (connected to the server). That seems to indicate a problem with the format of the audio data.

iOS version (minus the delegate methods for the GCD socket):

@implementation MicCommunicator {
AVAssetWriter * assetWriter;
AVAssetWriterInput * assetWriterInput;
}

@synthesize captureSession = _captureSession;
@synthesize output = _output;
@synthesize restClient = _restClient;
@synthesize uploadAudio = _uploadAudio;
@synthesize outputPath = _outputPath;
@synthesize sendStream = _sendStream;
@synthesize receiveStream = _receiveStream;

@synthesize socket = _socket;
@synthesize isSocketConnected = _isSocketConnected;

-(id)init {
if ((self = [super init])) {

_receiveStream = [[NSStream alloc]init];
_sendStream = [[NSStream alloc]init];
_socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
_isSocketConnected = FALSE;

_restClient = [RestClient sharedManager];
_uploadAudio = false;

NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
_outputPath = [NSURL fileURLWithPath:[[searchPaths objectAtIndex:0] stringByAppendingPathComponent:@"micOutput.output"]];

NSError *assetError = nil;

AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; //kAudioChannelLayoutTag_Stereo;
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,
[NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
[NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
[NSNumber numberWithInt:1],AVNumberOfChannelsKey,
[NSNumber numberWithInt:64000],AVEncoderBitRateKey,
nil];

assetWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings]retain];
[assetWriterInput setExpectsMediaDataInRealTime:YES];

assetWriter = [[AVAssetWriter assetWriterWithURL:_outputPath fileType:AVFileTypeWAVE error:&assetError]retain]; //AVFileTypeAppleM4A

if (!assetWriter) {
NSLog (@"error initing mic: %@", assetError);
return nil;
}
if ([assetWriter canAddInput:assetWriterInput]) {
[assetWriter addInput:assetWriterInput];
} else {
NSLog (@"can't add asset writer input...!");
return nil;
}

}
return self;
}

-(void)dealloc {
[_output release];
[_captureSession release];
// also release the objects alloc'd in -init (MRC)
[_receiveStream release];
[_sendStream release];
[_socket release];
[assetWriter release];
[assetWriterInput release];
[super dealloc];
}


-(void)beginStreaming {

NSLog(@"avassetwrter class is %@",NSStringFromClass([assetWriter class]));

self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
if (audioInput)
[self.captureSession addInput:audioInput];
else {
NSLog(@"No audio input found.");
return;
}

self.output = [[AVCaptureAudioDataOutput alloc] init];

dispatch_queue_t outputQueue = dispatch_queue_create("micOutputDispatchQueue", NULL);
[self.output setSampleBufferDelegate:self queue:outputQueue];
dispatch_release(outputQueue);

self.uploadAudio = FALSE;

[self.captureSession addOutput:self.output];
[assetWriter startWriting];
[self.captureSession startRunning];
}

-(void)pauseStreaming
{
self.uploadAudio = FALSE;
}

-(void)resumeStreaming
{
self.uploadAudio = TRUE;
}

-(void)finishAudioWork
{
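// Note: calling -dealloc directly is unsafe under MRC; release the object
// (or let its owner release it) instead.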
[self dealloc];
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {


AudioBufferList audioBufferList;
NSMutableData *data= [[NSMutableData alloc] init];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;

[data appendBytes:frame length:audioBuffer.mDataByteSize];
}

// append [data bytes] to your NSOutputStream

// These two lines write to disk, you may not need this, just providing an example
[assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
[assetWriterInput appendSampleBuffer:sampleBuffer];

//start upload audio data
if (self.uploadAudio) {

if (!self.isSocketConnected) {
[self connect];
}
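// Note: the camera API above also specifies a "Content-type: audio/basic"
// header, which this preamble omits.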
NSString *requestStr = [NSString stringWithFormat:@"POST /transmitaudio?id=%@ HTTP/1.0\r\n\r\n",self.restClient.sessionId];

NSData *requestData = [requestStr dataUsingEncoding:NSUTF8StringEncoding];
[self.socket writeData:requestData withTimeout:5 tag:0];
[self.socket writeData:data withTimeout:5 tag:0];
}
//stop upload audio data

CFRelease(blockBuffer);
blockBuffer=NULL;
[data release];
}
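
One likely source of the static described above: the bytes appended in captureOutput are the session's raw LPCM samples (the Float32* cast suggests float data), while the server expects audio/basic, i.e. 8 kHz G.711 mu-law. Below is a minimal sketch of the standard G.711 companding step that could be applied per sample before the socket write; linear_to_ulaw and floats_to_ulaw are names introduced here, and the float-to-int16 conversion assumes the capture really is delivering Float32 samples at 8 kHz:

#include <stddef.h>
#include <stdint.h>

// Standard G.711 bias-and-clip mu-law encoder: one 16-bit sample -> one byte.
static uint8_t linear_to_ulaw(int16_t sample) {
    const int kBias = 0x84, kClip = 32635;
    int s = sample;
    int sign = (s >> 8) & 0x80;             // remember the sign bit
    if (sign) s = -s;                       // work on the magnitude
    if (s > kClip) s = kClip;               // clip to the mu-law range
    s += kBias;
    int exponent = 7;                       // locate the segment
    for (int mask = 0x4000; (s & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;
    int mantissa = (s >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}

// Convert a buffer of float samples in [-1, 1] to mu-law bytes.
static void floats_to_ulaw(const float *in, uint8_t *out, size_t count) {
    for (size_t i = 0; i < count; i++) {
        float v = in[i];
        if (v > 1.0f) v = 1.0f; else if (v < -1.0f) v = -1.0f;
        out[i] = linear_to_ulaw((int16_t)(v * 32767.0f));
    }
}

The resulting byte buffer (one byte per sample) is what would go into the NSMutableData handed to the socket, in place of the raw frames.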

And the Java version:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;

public class AudioWorker extends Thread
{
private boolean stopped = false;

private String host;
private int port;
private long id=0;
boolean run=true;
AudioRecord recorder;

//ulaw encoder stuff
private final static String TAG = "UlawEncoderInputStream";

private final static int MAX_ULAW = 8192;
private final static int SCALE_BITS = 16;

private InputStream mIn;

private int mMax = 0;
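// Note: mMax is never updated (e.g. via maxAbsPcm() below), so encode()
// always falls back to its MAX_ULAW default scale.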

private final byte[] mBuf = new byte[1024];
private int mBufCount = 0; // should be 0 or 1

private final byte[] mOneByte = new byte[1];
////
/**
* Give the thread high priority so that it's not canceled unexpectedly, and start it
*/
public AudioWorker(String host, int port, long id)
{
this.host = host;
this.port = port;
this.id = id;
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
// start();
}

@Override
public void run()
{
Log.i("AudioWorker", "Running AudioWorker Thread");
recorder = null;
AudioTrack track = null;
short[][] buffers = new short[256][160];
int ix = 0;

/*
* Initialize buffer to hold continuously recorded AudioWorker data, start recording, and start
* playback.
*/
try
{
int N = AudioRecord.getMinBufferSize(8000,AudioFormat.CHANNEL_IN_MONO,AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
recorder.startRecording();
// track.play();
/*
* Loops until something outside of this thread stops it.
* Reads the data from the recorder and writes it to the AudioWorker track for playback.
*/


SSLContext sc = SSLContext.getInstance("SSL");
sc.init(null, trustAllCerts, new java.security.SecureRandom());
SSLSocketFactory sslFact = sc.getSocketFactory();
SSLSocket socket = (SSLSocket)sslFact.createSocket(host, port);

socket.setSoTimeout(10000);
InputStream inputStream = socket.getInputStream();
DataInputStream in = new DataInputStream(new BufferedInputStream(inputStream));
OutputStream outputStream = socket.getOutputStream();
DataOutputStream os = new DataOutputStream(new BufferedOutputStream(outputStream));
PrintWriter socketPrinter = new PrintWriter(os);
BufferedReader br = new BufferedReader(new InputStreamReader(in));

// socketPrinter.println("POST /transmitaudio?patient=1333369798370 HTTP/1.0");
socketPrinter.println("POST /transmitaudio?id="+id+" HTTP/1.0");
socketPrinter.println("Content-Type: audio/basic");
socketPrinter.println("Content-Length: 99999");
socketPrinter.println("Connection: Keep-Alive");
socketPrinter.println("Cache-Control: no-cache");
socketPrinter.println();
socketPrinter.flush();


while(!stopped)
{
Log.i("Map", "Writing new data to buffer");
short[] buffer = buffers[ix++ % buffers.length];

N = recorder.read(buffer,0,buffer.length);
track.write(buffer, 0, buffer.length);

byte[] bytes2 = new byte[buffer.length * 2];
ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
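
// Caution: read() below pulls *fresh* samples from the recorder and mu-law
// encodes them into only the first n bytes of bytes2, leaving the raw PCM
// tail intact, so the socket write can send a mix of companded and
// uncompanded bytes.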

read(bytes2, 0, bytes2.length);
os.write(bytes2,0,bytes2.length);

//
// ByteBuffer byteBuf = ByteBuffer.allocate(2*N);
// System.out.println("byteBuf length "+2*N);
// int i = 0;
// while (buffer.length > i) {
// byteBuf.putShort(buffer[i]);
// i++;
// }
// byte[] b = new byte[byteBuf.remaining()];
}
os.close();
}
catch(Throwable x)
{
Log.w("AudioWorker", "Error reading voice AudioWorker", x);
}
/*
* Frees the thread's resources after the loop completes so that it can be run again
*/
finally
{
recorder.stop();
recorder.release();
track.stop();
track.release();
}
}

/**
* Called from outside of the thread in order to stop the recording/playback loop
*/
public void close()
{
stopped = true;
}
public void resumeThread()
{
stopped = false;
run();
}
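
// WARNING: this trust-all TrustManager disables certificate validation;
// it is only suitable for testing against the SSL endpoint described above.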

TrustManager[] trustAllCerts = new TrustManager[]{
new X509TrustManager() {
public java.security.cert.X509Certificate[] getAcceptedIssuers() {
return null;
}
public void checkClientTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
public void checkServerTrusted(
java.security.cert.X509Certificate[] chain, String authType) {
for (int j=0; j<chain.length; j++)
{
System.out.println("Client certificate information:");
System.out.println(" Subject DN: " + chain[j].getSubjectDN());
System.out.println(" Issuer DN: " + chain[j].getIssuerDN());
System.out.println(" Serial number: " + chain[j].getSerialNumber());
System.out.println("");
}
}
}
};


public static void encode(byte[] pcmBuf, int pcmOffset,
byte[] ulawBuf, int ulawOffset, int length, int max) {

// from 'ulaw' in wikipedia
// +8191 to +8159 0x80
// +8158 to +4063 in 16 intervals of 256 0x80 + interval number
// +4062 to +2015 in 16 intervals of 128 0x90 + interval number
// +2014 to +991 in 16 intervals of 64 0xA0 + interval number
// +990 to +479 in 16 intervals of 32 0xB0 + interval number
// +478 to +223 in 16 intervals of 16 0xC0 + interval number
// +222 to +95 in 16 intervals of 8 0xD0 + interval number
// +94 to +31 in 16 intervals of 4 0xE0 + interval number
// +30 to +1 in 15 intervals of 2 0xF0 + interval number
// 0 0xFF

// -1 0x7F
// -31 to -2 in 15 intervals of 2 0x70 + interval number
// -95 to -32 in 16 intervals of 4 0x60 + interval number
// -223 to -96 in 16 intervals of 8 0x50 + interval number
// -479 to -224 in 16 intervals of 16 0x40 + interval number
// -991 to -480 in 16 intervals of 32 0x30 + interval number
// -2015 to -992 in 16 intervals of 64 0x20 + interval number
// -4063 to -2016 in 16 intervals of 128 0x10 + interval number
// -8159 to -4064 in 16 intervals of 256 0x00 + interval number
// -8192 to -8160 0x00

// set scale factors
if (max <= 0) max = MAX_ULAW;

int coef = MAX_ULAW * (1 << SCALE_BITS) / max;

for (int i = 0; i < length; i++) {
int pcm = (0xff & pcmBuf[pcmOffset++]) + (pcmBuf[pcmOffset++] << 8);
pcm = (pcm * coef) >> SCALE_BITS;

int ulaw;
if (pcm >= 0) {
ulaw = pcm <= 0 ? 0xff :
pcm <= 30 ? 0xf0 + (( 30 - pcm) >> 1) :
pcm <= 94 ? 0xe0 + (( 94 - pcm) >> 2) :
pcm <= 222 ? 0xd0 + (( 222 - pcm) >> 3) :
pcm <= 478 ? 0xc0 + (( 478 - pcm) >> 4) :
pcm <= 990 ? 0xb0 + (( 990 - pcm) >> 5) :
pcm <= 2014 ? 0xa0 + ((2014 - pcm) >> 6) :
pcm <= 4062 ? 0x90 + ((4062 - pcm) >> 7) :
pcm <= 8158 ? 0x80 + ((8158 - pcm) >> 8) :
0x80;
} else {
ulaw = -1 <= pcm ? 0x7f :
-31 <= pcm ? 0x70 + ((pcm - -31) >> 1) :
-95 <= pcm ? 0x60 + ((pcm - -95) >> 2) :
-223 <= pcm ? 0x50 + ((pcm - -223) >> 3) :
-479 <= pcm ? 0x40 + ((pcm - -479) >> 4) :
-991 <= pcm ? 0x30 + ((pcm - -991) >> 5) :
-2015 <= pcm ? 0x20 + ((pcm - -2015) >> 6) :
-4063 <= pcm ? 0x10 + ((pcm - -4063) >> 7) :
-8159 <= pcm ? 0x00 + ((pcm - -8159) >> 8) :
0x00;
}
ulawBuf[ulawOffset++] = (byte)ulaw;
}
}
public static int maxAbsPcm(byte[] pcmBuf, int offset, int length) {
int max = 0;
for (int i = 0; i < length; i++) {
int pcm = (0xff & pcmBuf[offset++]) + (pcmBuf[offset++] << 8);
if (pcm < 0) pcm = -pcm;
if (pcm > max) max = pcm;
}
return max;
}

public int read(byte[] buf, int offset, int length) throws IOException {
if (recorder == null) throw new IllegalStateException("not open");

// return at least one byte, but try to fill 'length'
while (mBufCount < 2) {
int n = recorder.read(mBuf, mBufCount, Math.min(length * 2, mBuf.length - mBufCount));
if (n == -1) return -1;
mBufCount += n;
}

// compand data
int n = Math.min(mBufCount / 2, length);
encode(mBuf, 0, buf, offset, n, mMax);

// move data to bottom of mBuf
mBufCount -= n * 2;
for (int i = 0; i < mBufCount; i++) mBuf[i] = mBuf[i + n * 2];

return n;
}

}

Best Answer

The amount of work I put into this topic has been staggering and long. I finally got this working, but it may well be hacky. So I'll list a few caveats before posting the answer:

  1. There is still clicking between buffers

  2. I get warnings because of the way I use my Obj-C classes from within an Obj-C++ class, so something is off there (though from my research, draining a pool amounts to the same thing as releasing, so I don't believe it matters much):

    Object 0x13cd20 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug

  3. To get this working I had to comment out all AQPlayer references from SpeakHereController (see below), because I could not fix the errors any other way. That did not matter to me, though, since I am only recording.

So the main answer to the above is that there is a bug in AVAssetWriter that prevented it from appending the bytes and writing the audio data. I only found this out after contacting Apple support and having them notify me of it. As far as I know, the bug is specific to ulaw and AVAssetWriter, though I have not tried many other formats to verify that.
In response to this, the only other option is/was to use Audio Queues. I had tried that before, but it brought a bunch of problems. The biggest was my lack of knowledge of Obj-C++. The class below that got things working is from the SpeakHere example, with slight changes so that the audio is ulaw-formatted. The other problems came from trying to get all the files to play nicely; however, that was easily remedied by changing all the filenames in the chain to .mm. The next problem was trying to use the classes together in harmony. That is still a WIP and ties into caveat number 2, but my basic solution was to use SpeakHereController (also included in the SpeakHere example) instead of accessing AQRecorder directly.

Anyway, here is the code:

Using SpeakHereController from an Obj-C class

.h

@property(nonatomic,strong) SpeakHereController * recorder;

.mm

[init method]
//AQRecorder wrapper (SpeakHereController) allocation
_recorder = [[SpeakHereController alloc]init];
//AQRecorder wrapper (SpeakHereController) initialization
//technically this class is a controller and thats why its init method is awakeFromNib
[_recorder awakeFromNib];

[recording]
bool buttonState = self.audioRecord.isSelected;
[self.audioRecord setSelected:!buttonState];

if ([self.audioRecord isSelected]) {

[self.recorder startRecord];
}else {
[self.recorder stopRecord];
}

SpeakHere Controller

#import "SpeakHereController.h"

@implementation SpeakHereController

@synthesize player;
@synthesize recorder;

@synthesize btn_record;
@synthesize btn_play;
@synthesize fileDescription;
@synthesize lvlMeter_in;
@synthesize playbackWasInterrupted;

char *OSTypeToStr(char *buf, OSType t)
{
char *p = buf;
char str[4], *q = str;
*(UInt32 *)str = CFSwapInt32(t);
for (int i = 0; i < 4; ++i) {
if (isprint(*q) && *q != '\\')
*p++ = *q++;
else {
sprintf(p, "\\x%02x", *q++);
p += 4;
}
}
*p = '\0';
return buf;
}

-(void)setFileDescriptionForFormat: (CAStreamBasicDescription)format withName:(NSString*)name
{
char buf[5];
const char *dataFormat = OSTypeToStr(buf, format.mFormatID);
NSString* description = [[NSString alloc] initWithFormat:@"(%d ch. %s @ %g Hz)", format.NumberChannels(), dataFormat, format.mSampleRate];
fileDescription.text = description;
[description release];
}

#pragma mark Playback routines

-(void)stopPlayQueue
{
// player->StopQueue();
[lvlMeter_in setAq: nil];
btn_record.enabled = YES;
}

-(void)pausePlayQueue
{
// player->PauseQueue();
playbackWasPaused = YES;
}


-(void)startRecord
{
// recorder = new AQRecorder();

if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
{
[self stopRecord];
}
else // If we're not recording, start.
{
// btn_play.enabled = NO;

// Set the button's state to "stop"
// btn_record.title = @"Stop";

// Start the recorder
recorder->StartRecord(CFSTR("recordedFile.caf"));

[self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

// Hook the level meter up to the Audio Queue for the recorder
// [lvlMeter_in setAq: recorder->Queue()];
}
}

- (void)stopRecord
{
// Disconnect our level meter from the audio queue
// [lvlMeter_in setAq: nil];

recorder->StopRecord();

// dispose the previous playback queue
// player->DisposeQueue(true);

// now create a new queue for the recorded file
recordFilePath = (CFStringRef)[NSTemporaryDirectory() stringByAppendingPathComponent: @"recordedFile.caf"];
// player->CreateQueueForFile(recordFilePath);

// Set the button's state back to "record"
// btn_record.title = @"Record";
// btn_play.enabled = YES;
}

- (IBAction)play:(id)sender
{
if (player->IsRunning())
{
if (playbackWasPaused) {
// OSStatus result = player->StartQueue(true);
// if (result == noErr)
// [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
}
else
// [self stopPlayQueue];
nil;
}
else
{
// OSStatus result = player->StartQueue(false);
// if (result == noErr)
// [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
}
}

- (IBAction)record:(id)sender
{
if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
{
[self stopRecord];
}
else // If we're not recording, start.
{
// btn_play.enabled = NO;
//
// // Set the button's state to "stop"
// btn_record.title = @"Stop";

// Start the recorder
recorder->StartRecord(CFSTR("recordedFile.caf"));

[self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

// Hook the level meter up to the Audio Queue for the recorder
[lvlMeter_in setAq: recorder->Queue()];
}
}
#pragma mark AudioSession listeners
void interruptionListener( void * inClientData,
UInt32 inInterruptionState)
{
SpeakHereController *THIS = (SpeakHereController*)inClientData;
if (inInterruptionState == kAudioSessionBeginInterruption)
{
if (THIS->recorder->IsRunning()) {
[THIS stopRecord];
}
else if (THIS->player->IsRunning()) {
//the queue will stop itself on an interruption, we just need to update the UI
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
THIS->playbackWasInterrupted = YES;
}
}
else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted)
{
// we were playing back when we were interrupted, so reset and resume now
// THIS->player->StartQueue(true);
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS];
THIS->playbackWasInterrupted = NO;
}
}

void propListener( void * inClientData,
AudioSessionPropertyID inID,
UInt32 inDataSize,
const void * inData)
{
SpeakHereController *THIS = (SpeakHereController*)inClientData;
if (inID == kAudioSessionProperty_AudioRouteChange)
{
CFDictionaryRef routeDictionary = (CFDictionaryRef)inData;
//CFShow(routeDictionary);
CFNumberRef reason = (CFNumberRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
SInt32 reasonVal;
CFNumberGetValue(reason, kCFNumberSInt32Type, &reasonVal);
if (reasonVal != kAudioSessionRouteChangeReason_CategoryChange)
{
/*CFStringRef oldRoute = (CFStringRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_OldRoute));
if (oldRoute)
{
printf("old route:\n");
CFShow(oldRoute);
}
else
printf("ERROR GETTING OLD AUDIO ROUTE!\n");

CFStringRef newRoute;
UInt32 size; size = sizeof(CFStringRef);
OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);
if (error) printf("ERROR GETTING NEW AUDIO ROUTE! %d\n", error);
else
{
printf("new route:\n");
CFShow(newRoute);
}*/

if (reasonVal == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
{
if (THIS->player->IsRunning()) {
[THIS pausePlayQueue];
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
}
}

// stop the queue if we had a non-policy route change
if (THIS->recorder->IsRunning()) {
[THIS stopRecord];
}
}
}
else if (inID == kAudioSessionProperty_AudioInputAvailable)
{
if (inDataSize == sizeof(UInt32)) {
UInt32 isAvailable = *(UInt32*)inData;
// disable recording if input is not available
THIS->btn_record.enabled = (isAvailable > 0) ? YES : NO;
}
}
}

#pragma mark Initialization routines
- (void)awakeFromNib
{
// Allocate our singleton instance for the recorder & player object
recorder = new AQRecorder();
player = nil;//new AQPlayer();

OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
if (error) printf("ERROR INITIALIZING AUDIO SESSION! %d\n", error);
else
{
UInt32 category = kAudioSessionCategory_PlayAndRecord;
error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
if (error) printf("couldn't set audio category!");

error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
UInt32 inputAvailable = 0;
UInt32 size = sizeof(inputAvailable);

// we do not want to allow recording if input is not available
error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error);
// btn_record.enabled = (inputAvailable) ? YES : NO;

// we also need to listen to see if input availability changes
error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);

error = AudioSessionSetActive(true);
if (error) printf("AudioSessionSetActive (true) failed");
}

// [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueStopped:) name:@"playbackQueueStopped" object:nil];
// [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueResumed:) name:@"playbackQueueResumed" object:nil];

// UIColor *bgColor = [[UIColor alloc] initWithRed:.39 green:.44 blue:.57 alpha:.5];
// [lvlMeter_in setBackgroundColor:bgColor];
// [lvlMeter_in setBorderColor:bgColor];
// [bgColor release];

// disable the play button since we have no recording to play yet
// btn_play.enabled = NO;
// playbackWasInterrupted = NO;
// playbackWasPaused = NO;
}

# pragma mark Notification routines
- (void)playbackQueueStopped:(NSNotification *)note
{
btn_play.title = @"Play";
[lvlMeter_in setAq: nil];
btn_record.enabled = YES;
}

- (void)playbackQueueResumed:(NSNotification *)note
{
btn_play.title = @"Stop";
btn_record.enabled = NO;
[lvlMeter_in setAq: player->Queue()];
}

#pragma mark Cleanup
- (void)dealloc
{
[btn_record release];
[btn_play release];
[fileDescription release];
[lvlMeter_in release];

// delete player;
delete recorder;

[super dealloc];
}

@end

AQRecorder (the .h has 2 important lines:

#define kNumberRecordBuffers    3
#define kBufferDurationSeconds 5.0

)
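
These two defines drive the chunking: at 8 kHz mu-law (1 byte per frame), each 5-second buffer works out to 8000 x 5 = 40,000 bytes per callback, so lowering kBufferDurationSeconds shrinks the upload chunks and may also reduce the clicking between buffers mentioned in caveat 1.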

#include "AQRecorder.h"
//#include "UploadAudioWrapperInterface.h"
//#include "RestClient.h"

RestClient * restClient;
NSData* data;

// ____________________________________________________________________________________
// Determine the size, in bytes, of a buffer necessary to represent the supplied number
// of seconds of audio data.
int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds)
{
int packets, frames, bytes = 0;
try {
frames = (int)ceil(seconds * format->mSampleRate);

if (format->mBytesPerFrame > 0)
bytes = frames * format->mBytesPerFrame;
else {
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0)
maxPacketSize = format->mBytesPerPacket; // constant packet size
else {
UInt32 propertySize = sizeof(maxPacketSize);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
&propertySize), "couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0)
packets = frames / format->mFramesPerPacket;
else
packets = frames; // worst-case scenario: 1 frame in a packet
if (packets == 0) // sanity check
packets = 1;
bytes = packets * maxPacketSize;
}
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
return 0;
}
return bytes;
}

// ____________________________________________________________________________________
// AudioQueue callback function, called when an input buffer has been filled.
void AQRecorder::MyInputBufferHandler( void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;


try {
if (inNumPackets > 0) {
// write packets to file
// XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
// inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
// "AudioFileWritePackets failed");
aqr->mRecordPacket += inNumPackets;



// int numBytes = inBuffer->mAudioDataByteSize;
// SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
//
// for (int i=0; i < numBytes; i++)
// {
// SInt8 currentData = testBuffer[i];
// printf("Current data in testbuffer is %d", currentData);
//
// NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
// }


data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];

[restClient uploadAudioData:data url:nil];

}


// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}

}
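
One thing to watch in MyInputBufferHandler above: the global data is re-retained on every callback without the previous chunk ever being released, which leaks under MRC. A minimal fix, using the same globals declared above, might be:

// Release the previous chunk before retaining the new one (MRC).
[data release];
data = [[NSData dataWithBytes:inBuffer->mAudioData
                       length:inBuffer->mAudioDataByteSize] retain];
[restClient uploadAudioData:data url:nil];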

AQRecorder::AQRecorder()
{
mIsRunning = false;
mRecordPacket = 0;

data = [[NSData alloc]init];
restClient = [[RestClient sharedManager]retain];
}

AQRecorder::~AQRecorder()
{
AudioQueueDispose(mQueue, TRUE);
AudioFileClose(mRecordFile);

if (mFileName){
CFRelease(mFileName);
}

[restClient release];
[data release];
}

// ____________________________________________________________________________________
// Copy a queue's encoder's magic cookie to an audio file.
void AQRecorder::CopyEncoderCookieToFile()
{
UInt32 propertySize;
// get the magic cookie, if any, from the converter
OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);

// we can get a noErr result and also a propertySize == 0
// -- if the file format does support magic cookies, but this file doesn't have one.
if (err == noErr && propertySize > 0) {
Byte *magicCookie = new Byte[propertySize];
UInt32 magicCookieSize;
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
magicCookieSize = propertySize; // the converter lies and tell us the wrong size

// now set the magic cookie on the output file
UInt32 willEatTheCookie = false;
// the converter wants to give us one; will the file take it?
err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
if (err == noErr && willEatTheCookie) {
err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
XThrowIfError(err, "set audio file's magic cookie");
}
delete[] magicCookie;
}
}

void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
memset(&mRecordFormat, 0, sizeof(mRecordFormat));

UInt32 size = sizeof(mRecordFormat.mSampleRate);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareSampleRate,
&size,
&mRecordFormat.mSampleRate), "couldn't get hardware sample rate");

//override sample rate to 8k from the device sample rate

mRecordFormat.mSampleRate = 8000.0;

size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareInputNumberChannels,
&size,
&mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");


// mRecordFormat.mChannelsPerFrame = 1;

mRecordFormat.mFormatID = inFormatID;
if (inFormatID == kAudioFormatLinearPCM)
{
// if we want pcm, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
}

if (inFormatID == kAudioFormatULaw) {
// NSLog(@"is ulaw");
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
mRecordFormat.mSampleRate = 8000.0;
// mRecordFormat.mFormatFlags = 0;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mBitsPerChannel = 16;//was 8
mRecordFormat.mBytesPerPacket = 1;
mRecordFormat.mBytesPerFrame = 1;
}
}
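
As an aside, the ulaw branch above ends up with an inconsistent description (16 bits per channel but 1 byte per packet and frame). A self-consistent ASBD for 8 kHz G.711 mu-law, offered here as a sketch rather than what ultimately shipped above, would look like:

AudioStreamBasicDescription ulawFormat = {0};
ulawFormat.mSampleRate       = 8000.0;          // G.711 is defined at 8 kHz
ulawFormat.mFormatID         = kAudioFormatULaw;
ulawFormat.mFormatFlags      = 0;               // mu-law defines its own packing
ulawFormat.mFramesPerPacket  = 1;               // one companded byte per frame
ulawFormat.mChannelsPerFrame = 1;
ulawFormat.mBitsPerChannel   = 8;               // 8-bit companded samples
ulawFormat.mBytesPerPacket   = 1;
ulawFormat.mBytesPerFrame    = 1;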

NSString * GetDocumentDirectory(void)
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
return basePath;
}


void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
int i, bufferByteSize;
UInt32 size;
CFURLRef url;

try {
mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);

// specify the recording format
SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);

// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");

// get the record format back from the queue's audio converter --
// the file may require a more specific stream description than was necessary to create the encoder.
mRecordPacket = 0;

size = sizeof(mRecordFormat);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
&mRecordFormat, &size), "couldn't get queue's format");

NSString *basePath = GetDocumentDirectory();
NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];

url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);

// create the audio file
XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
&mRecordFile), "AudioFileCreateWithURL failed");
CFRelease(url);

// copy the cookie first to give the file object as much info as we can about the data going in
// not necessary for pcm, but required for some compressed audio
CopyEncoderCookieToFile();


// allocate and enqueue buffers
bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for kBufferDurationSeconds of audio
for (i = 0; i < kNumberRecordBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
"AudioQueueAllocateBuffer failed");
XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
"AudioQueueEnqueueBuffer failed");
}
// start the queue
mIsRunning = true;
XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
}
catch (CAXException &e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
catch (...) {
fprintf(stderr, "An unknown error occurred\n");
}

}

void AQRecorder::StopRecord()
{
// end recording
mIsRunning = false;
// XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed");
XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
// a codec may update its cookie at the end of an encoding session, so reapply it to the file now
CopyEncoderCookieToFile();
if (mFileName)
{
CFRelease(mFileName);
mFileName = NULL;
}
AudioQueueDispose(mQueue, true);
AudioFileClose(mRecordFile);
}
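
With the pieces above in place, a recording session is driven entirely through SpeakHereController's startRecord/stopRecord (see the snippet near the top of this answer); each filled buffer of kBufferDurationSeconds of mu-law audio then arrives in MyInputBufferHandler and is handed to RestClient for upload.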

Please feel free to comment on or refine my answer; if there is a better solution I will accept that as the answer instead. Note that this was my first attempt, and I am sure it is not the most elegant or proper solution.

Regarding ios - Streaming from iPhone, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10501236/
