
xcode - Simple sound recorder for Mac OS X

Reposted · Author: 行者123 · Updated: 2023-12-03 16:27:16

Does anyone have sample code for a simple sound recorder on Mac OS X? I just want to record sound from the MacBook Pro's built-in microphone and save it to a file. That's all.

I have been searching for hours, and yes, there are examples that record audio and save it to a file, such as http://developer.apple.com/library/mac/#samplecode/MYRecorder/Introduction/Intro.html . But the Mac OS X sample code seems roughly ten times more complex than the comparable iPhone sample code.

On iOS, the calls are very simple:

soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"mysound.cap"]];
soundSetting = [NSDictionary dictionaryWithObjectsAndKeys: // dictionary setting code left out goes here
soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
[soundRecorder record];
[soundRecorder stop];

I assume there is code for Mac OS X that can do this as simply as the iPhone version. Thanks for your help.

Here is my code (the player currently doesn't work):

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
{
    AVAudioRecorder *soundRecorder;
}

@property (retain) AVAudioRecorder *soundRecorder;

- (IBAction)stopAudio:(id)sender;
- (IBAction)recordAudio:(id)sender;
- (IBAction)playAudio:(id)sender;

@end


#import "MyAVFoundationClass.h"

@implementation MyAVFoundationClass

@synthesize soundRecorder;

- (void)awakeFromNib
{
    NSLog(@"awakeFromNib visited");
    NSString *tempDir;
    NSURL *soundFile;
    NSDictionary *soundSetting;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"test1.caf"]];
    NSLog(@"soundFile: %@", soundFile);

    soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                    [NSNumber numberWithInt:AVAudioQualityHigh], AVEncoderAudioQualityKey, nil];

    soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
}

- (IBAction)stopAudio:(id)sender
{
    NSLog(@"stopAudioVisited");
    [soundRecorder stop];
}

- (IBAction)recordAudio:(id)sender
{
    NSLog(@"recordAudio Visited");
    [soundRecorder record];
}

- (IBAction)playAudio:(id)sender
{
    NSLog(@"playAudio Visited");
    NSURL *soundFile;
    NSString *tempDir;
    AVAudioPlayer *audioPlayer;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"test1.caf"]];
    NSLog(@"soundFile: %@", soundFile);

    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFile error:nil];

    [audioPlayer setDelegate:self];
    [audioPlayer play];
}

@end

Best answer

Here is code that works on macOS 10.14, built with Xcode 10.2.1 and Swift 5.0.1.

First, you must set NSMicrophoneUsageDescription (aka Privacy - Microphone Usage Description) in your Info.plist file, as described in the Apple documentation: Requesting Authorization for Media Capture on macOS.
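A minimal Info.plist entry might look like the following (the usage string shown to the user is an illustrative placeholder, not from the original answer):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app records audio from the built-in microphone.</string>
```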

Then you must request the user's permission to use the microphone:

switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized:
    // The user has previously granted access to the microphone:
    // proceed with recording.
    ()

case .notDetermined:
    // The user has not yet been asked for microphone access.
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        if granted {
            // proceed with recording
        }
    }

case .denied:
    // The user has previously denied access.
    ()

case .restricted:
    // The user can't grant access due to restrictions.
    ()

@unknown default:
    fatalError()
}

Then you can use the following methods to start and stop recording:

import AVFoundation

open class SpeechRecorder: NSObject {
    private var destinationUrl: URL!

    var recorder: AVAudioRecorder?
    let player = AVQueuePlayer()

    open func start() {
        destinationUrl = createUniqueOutputURL()

        do {
            let format = AVAudioFormat(settings: [
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVEncoderAudioQualityKey: AVAudioQuality.high,
                AVSampleRateKey: 44100.0,
                AVNumberOfChannelsKey: 1,
                AVLinearPCMBitDepthKey: 16,
            ])!
            let recorder = try AVAudioRecorder(url: destinationUrl, format: format)

            // Workaround for "Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL"
            // https://stackoverflow.com/a/57670740/598057
            let firstSuccess = recorder.record()
            if firstSuccess == false || recorder.isRecording == false {
                recorder.record()
            }
            assert(recorder.isRecording)

            self.recorder = recorder
        } catch let error {
            let code = (error as NSError).code
            NSLog("SpeechRecorder: \(error)")
            NSLog("SpeechRecorder: \(code)")

            let osCode = OSStatus(code)

            NSLog("SpeechRecorder: \(String(describing: osCode.detailedErrorMessage()))")
        }
    }

    open func stop() {
        NSLog("SpeechRecorder: stop()")

        if let recorder = recorder {
            recorder.stop()
            NSLog("SpeechRecorder: final file \(destinationUrl.absoluteString)")

            player.removeAllItems()
            player.insert(AVPlayerItem(url: destinationUrl), after: nil)
            player.play()
        }
    }

    func createUniqueOutputURL() -> URL {
        // Recordings are written to the temporary directory; adjust as needed.
        let outputDirectory = URL(fileURLWithPath: NSTemporaryDirectory())

        // A millisecond timestamp keeps successive recordings unique.
        let currentTime = Int(Date().timeIntervalSince1970 * 1000)

        let outputURL = URL(fileURLWithPath: "SpeechRecorder-\(currentTime).m4a",
                            relativeTo: outputDirectory)

        destinationUrl = outputURL

        return outputURL
    }
}
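For completeness, a minimal sketch of how the class above might be driven (the five-second timer is an illustrative assumption; in practice you would wire `start()` and `stop()` to UI actions):

```swift
// Hypothetical usage sketch for SpeechRecorder.
let recorder = SpeechRecorder()
recorder.start() // begins recording to a fresh timestamped .m4a file

// Stop after five seconds; stop() also queues the recording for playback.
DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
    recorder.stop()
}
```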

extension OSStatus {
    //**************************
    // Interpret the status as a four-character code, if it is one.
    func asString() -> String? {
        let n = UInt32(bitPattern: self.littleEndian)
        guard let n1 = UnicodeScalar((n >> 24) & 255), n1.isASCII else { return nil }
        guard let n2 = UnicodeScalar((n >> 16) & 255), n2.isASCII else { return nil }
        guard let n3 = UnicodeScalar((n >> 8) & 255), n3.isASCII else { return nil }
        guard let n4 = UnicodeScalar( n & 255), n4.isASCII else { return nil }
        return String(n1) + String(n2) + String(n3) + String(n4)
    } // asString

    //**************************
    func detailedErrorMessage() -> String {
        switch self {
        case 0: return "Success"

        //***** AudioFile errors
        case kAudioFileUnspecifiedError: return "kAudioFileUnspecifiedError"
        case kAudioFileUnsupportedFileTypeError: return "kAudioFileUnsupportedFileTypeError"
        case kAudioFileUnsupportedDataFormatError: return "kAudioFileUnsupportedDataFormatError"
        case kAudioFileUnsupportedPropertyError: return "kAudioFileUnsupportedPropertyError"
        case kAudioFileBadPropertySizeError: return "kAudioFileBadPropertySizeError"
        case kAudioFilePermissionsError: return "kAudioFilePermissionsError"
        case kAudioFileNotOptimizedError: return "kAudioFileNotOptimizedError"
        case kAudioFileInvalidChunkError: return "kAudioFileInvalidChunkError"
        case kAudioFileDoesNotAllow64BitDataSizeError: return "kAudioFileDoesNotAllow64BitDataSizeError"
        case kAudioFileInvalidPacketOffsetError: return "kAudioFileInvalidPacketOffsetError"
        case kAudioFileInvalidFileError: return "kAudioFileInvalidFileError"
        case kAudioFileOperationNotSupportedError: return "kAudioFileOperationNotSupportedError"
        case kAudioFileNotOpenError: return "kAudioFileNotOpenError"
        case kAudioFileEndOfFileError: return "kAudioFileEndOfFileError"
        case kAudioFilePositionError: return "kAudioFilePositionError"
        case kAudioFileFileNotFoundError: return "kAudioFileFileNotFoundError"

        //***** AUGraph errors
        case kAUGraphErr_NodeNotFound: return "AUGraph Node Not Found"
        case kAUGraphErr_InvalidConnection: return "AUGraph Invalid Connection"
        case kAUGraphErr_OutputNodeErr: return "AUGraph Output Node Error"
        case kAUGraphErr_CannotDoInCurrentContext: return "AUGraph Cannot Do In Current Context"
        case kAUGraphErr_InvalidAudioUnit: return "AUGraph Invalid Audio Unit"

        //***** MIDI errors
        case kMIDIInvalidClient: return "MIDI Invalid Client"
        case kMIDIInvalidPort: return "MIDI Invalid Port"
        case kMIDIWrongEndpointType: return "MIDI Wrong Endpoint Type"
        case kMIDINoConnection: return "MIDI No Connection"
        case kMIDIUnknownEndpoint: return "MIDI Unknown Endpoint"
        case kMIDIUnknownProperty: return "MIDI Unknown Property"
        case kMIDIWrongPropertyType: return "MIDI Wrong Property Type"
        case kMIDINoCurrentSetup: return "MIDI No Current Setup"
        case kMIDIMessageSendErr: return "MIDI Message Send Error"
        case kMIDIServerStartErr: return "MIDI Server Start Error"
        case kMIDISetupFormatErr: return "MIDI Setup Format Error"
        case kMIDIWrongThread: return "MIDI Wrong Thread"
        case kMIDIObjectNotFound: return "MIDI Object Not Found"
        case kMIDIIDNotUnique: return "MIDI ID Not Unique"
        case kMIDINotPermitted: return "MIDI Not Permitted"

        //***** AudioToolbox errors
        case kAudioToolboxErr_CannotDoInCurrentContext: return "AudioToolbox Cannot Do In Current Context"
        case kAudioToolboxErr_EndOfTrack: return "AudioToolbox End Of Track"
        case kAudioToolboxErr_IllegalTrackDestination: return "AudioToolbox Illegal Track Destination"
        case kAudioToolboxErr_InvalidEventType: return "AudioToolbox Invalid Event Type"
        case kAudioToolboxErr_InvalidPlayerState: return "AudioToolbox Invalid Player State"
        case kAudioToolboxErr_InvalidSequenceType: return "AudioToolbox Invalid Sequence Type"
        case kAudioToolboxErr_NoSequence: return "AudioToolbox No Sequence"
        case kAudioToolboxErr_StartOfTrack: return "AudioToolbox Start Of Track"
        case kAudioToolboxErr_TrackIndexError: return "AudioToolbox Track Index Error"
        case kAudioToolboxErr_TrackNotFound: return "AudioToolbox Track Not Found"
        case kAudioToolboxError_NoTrackDestination: return "AudioToolbox No Track Destination"

        //***** AudioUnit errors
        case kAudioUnitErr_CannotDoInCurrentContext: return "AudioUnit Cannot Do In Current Context"
        case kAudioUnitErr_FailedInitialization: return "AudioUnit Failed Initialization"
        case kAudioUnitErr_FileNotSpecified: return "AudioUnit File Not Specified"
        case kAudioUnitErr_FormatNotSupported: return "AudioUnit Format Not Supported"
        case kAudioUnitErr_IllegalInstrument: return "AudioUnit Illegal Instrument"
        case kAudioUnitErr_Initialized: return "AudioUnit Initialized"
        case kAudioUnitErr_InvalidElement: return "AudioUnit Invalid Element"
        case kAudioUnitErr_InvalidFile: return "AudioUnit Invalid File"
        case kAudioUnitErr_InvalidOfflineRender: return "AudioUnit Invalid Offline Render"
        case kAudioUnitErr_InvalidParameter: return "AudioUnit Invalid Parameter"
        case kAudioUnitErr_InvalidProperty: return "AudioUnit Invalid Property"
        case kAudioUnitErr_InvalidPropertyValue: return "AudioUnit Invalid Property Value"
        case kAudioUnitErr_InvalidScope: return "AudioUnit Invalid Scope"
        case kAudioUnitErr_InstrumentTypeNotFound: return "AudioUnit Instrument Type Not Found"
        case kAudioUnitErr_NoConnection: return "AudioUnit No Connection"
        case kAudioUnitErr_PropertyNotInUse: return "AudioUnit Property Not In Use"
        case kAudioUnitErr_PropertyNotWritable: return "AudioUnit Property Not Writable"
        case kAudioUnitErr_TooManyFramesToProcess: return "AudioUnit Too Many Frames To Process"
        case kAudioUnitErr_Unauthorized: return "AudioUnit Unauthorized"
        case kAudioUnitErr_Uninitialized: return "AudioUnit Uninitialized"
        case kAudioUnitErr_UnknownFileType: return "AudioUnit Unknown File Type"
        case kAudioUnitErr_RenderTimeout: return "AudioUnit Render Timeout"

        //***** Audio errors
        case kAudio_BadFilePathError: return "Audio Bad File Path Error"
        case kAudio_FileNotFoundError: return "Audio File Not Found Error"
        case kAudio_FilePermissionError: return "Audio File Permission Error"
        case kAudio_MemFullError: return "Audio Mem Full Error"
        case kAudio_ParamError: return "Audio Param Error"
        case kAudio_TooManyFilesOpenError: return "Audio Too Many Files Open Error"
        case kAudio_UnimplementedError: return "Audio Unimplemented Error"

        default: return "Unknown error (no description)"
        }
    }
}

The workaround for the inPropertyData == NULL issue is adapted from Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL.

The code that turns OSStatus codes into readable messages is adapted from: How do you convert an iPhone OSStatus code to something useful?.

Regarding xcode - Simple sound recorder for Mac OS X, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/8101667/
