
ios - Audio lost after adding a filter effect to a captured video in Swift 3 on iOS

Repost. Author: 行者123. Updated: 2023-11-28 14:53:04

I am developing a video-based application in which I need to apply a CIFilter to a captured video selected from the device library. For this I am using the VideoEffects library below:

https://github.com/FlexMonkey/VideoEffects

With it I can add a filter to my video, but the problem is that the audio is missing from the final video output. I tried to add the audio asset with the following code, but I could not get it working:

videoOutputURL = documentDirectory.appendingPathComponent("Output_\(timeDateFormatter.string(from: Date())).mp4")

do {
    videoWriter = try AVAssetWriter(outputURL: videoOutputURL!, fileType: AVFileTypeMPEG4)
}
catch {
    fatalError("** unable to create asset writer **")
}

let outputSettings: [String: AnyObject] = [
    AVVideoCodecKey: AVVideoCodecH264 as AnyObject,
    AVVideoWidthKey: currentItem.presentationSize.width as AnyObject,
    AVVideoHeightKey: currentItem.presentationSize.height as AnyObject]

guard videoWriter!.canApply(outputSettings: outputSettings, forMediaType: AVMediaTypeVideo) else {
    fatalError("** unable to apply video settings **")
}

videoWriterInput = AVAssetWriterInput(
    mediaType: AVMediaTypeVideo,
    outputSettings: outputSettings)

// set up the audio writer input
let audioOutputSettings: [String: AnyObject] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC) as AnyObject,
    AVSampleRateKey: 48000.0 as AnyObject,
    AVNumberOfChannelsKey: NSNumber(value: 1),
    AVEncoderBitRateKey: 128000 as AnyObject
]

guard videoWriter!.canApply(outputSettings: audioOutputSettings, forMediaType: AVMediaTypeAudio) else {
    fatalError("** unable to apply audio settings **")
}

audioWriterInput = AVAssetWriterInput(
    mediaType: AVMediaTypeAudio,
    outputSettings: audioOutputSettings)

if videoWriter!.canAdd(videoWriterInput!) && videoWriter!.canAdd(audioWriterInput!) {
    videoWriter!.add(videoWriterInput!)
    videoWriter!.add(audioWriterInput!)
}
else {
    fatalError("** unable to add input **")
}

Is there any other way to add a filter to a video? Please advise.

I also tried adding a CIFilter using GPUImage, but that only works for live video, not for captured video.

Best Answer

Starting with iOS 9.0, you can use AVVideoComposition to apply Core Image filters to a video frame by frame:

let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampingToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    // Crop the blurred output to the bounds of the original image
    let output = filter.outputImage!.cropping(to: request.sourceImage.extent)

    request.finish(with: output, context: nil)
})

Now we can create an AVPlayerItem from the asset created earlier and play it with an AVPlayer:

let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition
let player = AVPlayer(playerItem: playerItem)
player.play()

The Core Image filters are applied live, frame by frame. You can also export the filtered video using the AVAssetExportSession class; since the video composition only touches the video frames, the original audio track is carried through to the exported file.
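A minimal export sketch, using Swift 3 constants to match the code above (the output URL and the highest-quality preset are assumptions, not requirements):

```swift
import AVFoundation

// Export the filtered asset; the audio track is passed through untouched,
// because the video composition only affects the video frames.
func export(asset: AVAsset, composition: AVVideoComposition, to outputURL: URL) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        return
    }
    session.videoComposition = composition   // apply the CIFilter per frame
    session.outputURL = outputURL            // e.g. a file in the documents directory
    session.outputFileType = AVFileTypeMPEG4 // Swift 3 constant (.mp4 in Swift 4+)
    session.exportAsynchronously {
        switch session.status {
        case .completed:
            print("Export finished: \(outputURL)")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: session.error))")
        default:
            break
        }
    }
}
```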

There is an excellent introduction to this from WWDC 2015: Link

Regarding "ios - Audio lost after adding a filter effect to a captured video in Swift 3 on iOS", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49649194/
