
ios - Recording a square video with AVFoundation and adding a watermark

Reposted · Author: 可可西里 · Updated: 2023-11-01 00:59:43

Illustration of what I'm trying to do

I'm trying to do the following:

  • Play music
  • Record a square video (I have a container in the view that shows what you're recording)
  • Add a label at the top, and the app's icon and name in the bottom-left corner of the square video.

So far I've managed to play the music, display the AVCaptureVideoPreviewLayer in a square container in a different view, and save the video to the camera roll.

The problem is that I could barely find a few vague tutorials on using AVFoundation, and this being my first app makes things very hard.

I managed to get these things done, but I still don't understand how AVFoundation works. The documentation is vague for a beginner, I haven't found a tutorial for what I specifically want, and piecing together multiple tutorials (written in Obj-C) has made this impossible. My problems are the following:

  1. The video doesn't save as a square. (Note that the app doesn't support landscape orientation.)
  2. The video has no audio. (I think I'm supposed to add some sort of audio input besides the video one.)
  3. How do I watermark the video?
  4. I have a bug: I created a view (messageView; see the code) with a text and an image to let the user know the video was saved to the camera roll. But if I start recording a second time, the view appears while the video is recording, not after it was recorded. I suspect it's related to naming every video the same.
So I did the setup:

override func viewDidLoad() {
    super.viewDidLoad()

    // Preset For High Quality
    captureSession.sessionPreset = AVCaptureSessionPresetHigh

    // Get available devices capable of recording video
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]

    // Get back camera
    for device in devices
    {
        if device.position == AVCaptureDevicePosition.Back
        {
            currentDevice = device
        }
    }

    // Set Input
    let captureDeviceInput: AVCaptureDeviceInput
    do
    {
        captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice)
    }
    catch
    {
        print(error)
        return
    }

    // Set Output
    videoFileOutput = AVCaptureMovieFileOutput()

    // Configure Session w/ Input & Output Devices
    captureSession.addInput(captureDeviceInput)
    captureSession.addOutput(videoFileOutput)

    // Show Camera Preview
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    view.layer.addSublayer(cameraPreviewLayer!)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    let width = view.bounds.width * 0.85
    cameraPreviewLayer?.frame = CGRectMake(0, 0, width, width)

    // Bring Record Button To Front
    view.bringSubviewToFront(recordButton)
    captureSession.startRunning()

    // // Bring Message To Front
    // view.bringSubviewToFront(messageView)
    // view.bringSubviewToFront(messageText)
    // view.bringSubviewToFront(messageImage)
}

Then, when I press the record button:

@IBAction func capture(sender: AnyObject) {
    if !isRecording
    {
        isRecording = true

        UIView.animateWithDuration(0.5, delay: 0.0, options: [.Repeat, .Autoreverse, .AllowUserInteraction], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(0.5, 0.5)
        }, completion: nil)

        let outputPath = NSTemporaryDirectory() + "output.mov"
        let outputFileURL = NSURL(fileURLWithPath: outputPath)
        videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)
    }
    else
    {
        isRecording = false

        UIView.animateWithDuration(0.5, delay: 0, options: [], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(1.0, 1.0)
        }, completion: nil)
        recordButton.layer.removeAllAnimations()
        videoFileOutput?.stopRecording()
    }
}

And after recording the video:

func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    let outputPath = NSTemporaryDirectory() + "output.mov"
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath)
    {
        UISaveVideoAtPathToSavedPhotosAlbum(outputPath, self, nil, nil)
        // Show Success Message
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageView.alpha = 0.8
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageText.alpha = 1.0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageImage.alpha = 1.0
        }, completion: nil)
        // Hide Message
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageView.alpha = 0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageText.alpha = 0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageImage.alpha = 0
        }, completion: nil)
    }
}

So what do I need to do to fix this? I keep searching and looking at tutorials but I can't figure it out... I read about adding watermarks and saw that it has something to do with adding CALayers on top of the video. But obviously I can't do that yet, since I don't even know how to make the video square or add audio.

Best Answer

A few things:

As far as audio goes, you're adding a video (camera) input but no audio input. So do this to get sound:

let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

do {
    let input = try AVCaptureDeviceInput(device: audioInputDevice)

    if sourceAVFoundation.captureSession.canAddInput(input) {
        sourceAVFoundation.captureSession.addInput(input)
    } else {
        NSLog("ERROR: Can't add audio input")
    }
} catch let error {
    NSLog("ERROR: Getting input device: \(error)")
}

To make a square video, you'll have to look at using AVAssetWriter instead of AVCaptureFileOutput. This is more complex, but you get more "power". You've already created an AVCaptureSession, which is great; to hook up the AssetWriter, you'll need to do something like this:

let fileManager = NSFileManager.defaultManager()
let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
guard let documentDirectory: NSURL = urls.first else {
    print("Video Controller: getAssetWriter: documentDir Error")
    return nil
}

let local_video_name = NSUUID().UUIDString + ".mp4"
self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

guard let url = self.videoOutputURL else {
    return nil
}

self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

guard let writer = self.assetWriter else {
    return nil
}

//TODO: Set your desired video size here!
let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : captureSize.width,
    AVVideoHeightKey : captureSize.height,
    AVVideoCompressionPropertiesKey : [
        AVVideoAverageBitRateKey : 200000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
        AVVideoMaxKeyFrameIntervalKey : 90,
    ],
]

assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assetWriterInputCamera?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputCamera!)

let audioSettings : [String : AnyObject] = [
    AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : 2,
    AVSampleRateKey : NSNumber(double: 44100.0)
]

assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
assetWriterInputAudio?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputAudio!)
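The TODO above is where the square comes in. One way to get a square file without cropping buffers yourself (a sketch; the 480-point side length is an arbitrary example, but the keys are standard AVFoundation video settings) is to give the writer input equal width and height and let it aspect-fill:

```swift
// Sketch: square output via the writer input's scaling mode.
// squareSide is an example value; pick whatever side length you want.
let squareSide = 480
let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : squareSide,
    AVVideoHeightKey : squareSide,
    // Aspect-fill crops the camera frames to the square instead of distorting them
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
]
```

With this in place, the incoming portrait frames are scaled and center-cropped to the square on the fly, so no separate export pass is needed just for the shape.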

Once you have the AssetWriter set up... then hook up some outputs for the video and audio:

let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
let audioOutput = AVCaptureAudioDataOutput()
audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
captureSession.addOutput(audioOutput)

// Always add video last...
let bufferVideoQueue = dispatch_queue_create("video buffer delegate", DISPATCH_QUEUE_SERIAL)
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
captureSession.addOutput(videoOutput)
if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
    if connection.supportsVideoOrientation {
        // Force recording to portrait
        connection.videoOrientation = AVCaptureVideoOrientation.Portrait
    }

    self.outputConnection = connection
}

captureSession.startRunning()

Lastly, you need to capture the buffers and process that stuff... make sure your class is a delegate for both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate:

//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    if !self.isRecordingStarted {
        return
    }

    if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
        dispatch_async(audioQueue!) {
            audio.appendSampleBuffer(sampleBuffer)
        }
        return
    }

    if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
        dispatch_async(videoQueue!) {
            camera.appendSampleBuffer(sampleBuffer)
        }
    }
}

There are a few bits and pieces missing, but hopefully this is enough for you to figure it out along with the documentation.
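One of those missing pieces is the writer's lifecycle, since AVAssetWriter needs an explicit start and finish. A minimal sketch (the method names are the Swift 2 era AVAssetWriter API; `isRecordingStarted`, the writer, and the inputs are the properties used above):

```swift
// Start: call once, before appending any buffers.
// The session's source time should match the first buffer's timestamp,
// otherwise the output starts with a gap.
func startRecording(firstSampleBuffer: CMSampleBuffer) {
    assetWriter?.startWriting()
    assetWriter?.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(firstSampleBuffer))
    isRecordingStarted = true
}

// Stop: mark the inputs finished, then let the writer close the file asynchronously.
func stopRecording() {
    isRecordingStarted = false
    assetWriterInputCamera?.markAsFinished()
    assetWriterInputAudio?.markAsFinished()
    assetWriter?.finishWritingWithCompletionHandler {
        // The .mp4 at videoOutputURL is complete here; save it to the camera roll now.
    }
}
```

Saving to the camera roll inside the completion handler (rather than right after calling stop) would also fix problem 4 in the question, since the "saved" message then can't fire before the file is actually done.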

Finally, if you want to add the watermark, there are many ways it can be done in real time, but one possible approach is to modify the sampleBuffer and write the watermark into the image right there. You'll find other questions on Stack Overflow dealing with that.
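As an alternative to the real-time route, a common offline approach is to re-export the finished file with a CALayer composited on top via AVVideoCompositionCoreAnimationTool. A sketch, assuming `recordedFileURL`, `watermarkedFileURL`, and `watermarkImage` are placeholders you supply (and Swift 2 initializer spellings):

```swift
// Sketch: overlay a watermark CALayer during a post-recording export.
let asset = AVAsset(URL: recordedFileURL) // the finished recording
let composition = AVMutableVideoComposition(propertiesOfAsset: asset)
let renderSize = composition.renderSize

// The video layer is where the frames are rendered; the watermark sits above it.
let videoLayer = CALayer()
videoLayer.frame = CGRect(x: 0, y: 0, width: renderSize.width, height: renderSize.height)

let watermarkLayer = CALayer()
watermarkLayer.contents = watermarkImage.CGImage // e.g. your icon + name rendered to a UIImage
watermarkLayer.frame = CGRect(x: 16, y: 16, width: 64, height: 64) // bottom-left in layer coordinates

let parentLayer = CALayer()
parentLayer.frame = videoLayer.frame
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(watermarkLayer)

composition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

// Export a watermarked copy.
let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
export?.videoComposition = composition
export?.outputURL = watermarkedFileURL
export?.outputFileType = AVFileTypeMPEG4
export?.exportAsynchronouslyWithCompletionHandler {
    // Check export?.status, then save the watermarked file to the camera roll.
}
```

This trades an extra export pass for much simpler code than per-buffer drawing, which is usually fine for short clips.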

Regarding "ios - Recording a square video with AVFoundation and adding a watermark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36743842/
