
ios - Custom AVVideoCompositing class not working as expected

Reposted · Author: IT王子 · Updated: 2023-10-29 05:17:25

I'm trying to apply a CIFilter to an AVAsset and then save the asset with the filter applied. The way I'm doing this is by using an AVAssetExportSession with its videoComposition set to an AVMutableVideoComposition object that has a custom AVVideoCompositing class.

I'm also setting the instructions of my AVMutableVideoComposition to a custom composition instruction class (conforming to AVMutableVideoCompositionInstruction). This class is passed a track ID, along with a few other unimportant variables.

Unfortunately, I've run into a problem - the startVideoCompositionRequest: function in my custom video compositor class (conforming to AVVideoCompositing) is not being called correctly.

When I set the passthroughTrackID variable of my custom instruction class to the track ID, the startVideoCompositionRequest(request) function in my AVVideoCompositing is not called.

However, when I don't set the passthroughTrackID variable of my custom instruction class, startVideoCompositionRequest(request) is called, but not correctly - printing request.sourceTrackIDs results in an empty array, and request.sourceFrameByTrackID(trackID) results in a nil value.

Something interesting I found is that the cancelAllPendingVideoCompositionRequests: function is always called twice when trying to export the video with the filter. It is either called once before startVideoCompositionRequest: and once after, or just twice in a row in the case where startVideoCompositionRequest: isn't called.

I've created three classes for exporting the video with the filter. Here's the utility class, which basically just contains an export function and calls all of the required code:

class VideoFilterExport{

    let asset: AVAsset
    init(asset: AVAsset){
        self.asset = asset
    }

    func export(toURL url: NSURL, callback: (url: NSURL?) -> Void){
        guard let track: AVAssetTrack = self.asset.tracksWithMediaType(AVMediaTypeVideo).first else{callback(url: nil); return}

        let composition = AVMutableComposition()
        let compositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

        do{
            try compositionTrack.insertTimeRange(track.timeRange, ofTrack: track, atTime: kCMTimeZero)
        }
        catch _{callback(url: nil); return}

        let videoComposition = AVMutableVideoComposition(propertiesOfAsset: composition)
        videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
        videoComposition.frameDuration = CMTimeMake(1, 30)
        videoComposition.renderSize = compositionTrack.naturalSize

        // (The filters and CIContext arguments are also passed here; omitted for brevity)
        let instruction = VideoFilterCompositionInstruction(trackID: compositionTrack.trackID)
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.asset.duration)
        videoComposition.instructions = [instruction]

        let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetMediumQuality)!
        session.videoComposition = videoComposition
        session.outputURL = url
        session.outputFileType = AVFileTypeMPEG4

        session.exportAsynchronouslyWithCompletionHandler(){
            callback(url: url)
        }
    }
}

Here are the other two classes - I'll put them both in one code block to keep this post shorter:

// Video Filter Composition Instruction Class - from what I gather,
// AVVideoCompositionInstruction is used only to pass values to
// the AVVideoCompositing class

class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{

    let trackID: CMPersistentTrackID
    let filters: ImageFilterGroup
    let context: CIContext


    // When I leave this line as-is, startVideoCompositionRequest: isn't called.
    // When commented out, startVideoCompositionRequest(request) is called, but there
    // are no valid CVPixelBuffers provided by request.sourceFrameByTrackID(below value)
    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}
    override var requiredSourceTrackIDs: [NSValue]{get{return []}}
    override var containsTweening: Bool{get{return false}}


    init(trackID: CMPersistentTrackID, filters: ImageFilterGroup, context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        //self.timeRange = timeRange
        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

}


// My custom AVVideoCompositing class. This is where the problem lies -
// although I don't know if this is the root of the problem

class VideoFilterCompositor : NSObject, AVVideoCompositing{

var requiredPixelBufferAttributesForRenderContext: [String : AnyObject] = [
kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), // The video is in 32 BGRA
kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
]
var sourcePixelBufferAttributes: [String : AnyObject]? = [
kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA),
kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
]

let renderQueue = dispatch_queue_create("co.getblix.videofiltercompositor.renderingqueue", DISPATCH_QUEUE_SERIAL)

override init(){
super.init()
}

func startVideoCompositionRequest(request: AVAsynchronousVideoCompositionRequest){
// This code block is never executed when the
// passthroughTrackID variable is in the above class

autoreleasepool(){
dispatch_async(self.renderQueue){
guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else{
request.finishWithError(NSError(domain: "getblix.co", code: 760, userInfo: nil))
return
}
guard let pixels = request.sourceFrameByTrackID(instruction.passthroughTrackID) else{
// This code block is executed when I comment out the
// passthroughTrackID variable in the above class

request.finishWithError(NSError(domain: "getblix.co", code: 761, userInfo: nil))
return
}
// I have not been able to get the code to reach this point
// This function is either not called, or the guard
// statement above executes

let image = CIImage(CVPixelBuffer: pixels)
let filtered: CIImage = //apply the filter here

let width = CVPixelBufferGetWidth(pixels)
let height = CVPixelBufferGetHeight(pixels)
let format = CVPixelBufferGetPixelFormatType(pixels)

var newBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, nil, &newBuffer)

if let buffer = newBuffer{
instruction.context.render(filtered, toCVPixelBuffer: buffer)
request.finishWithComposedVideoFrame(buffer)
}
else{
request.finishWithComposedVideoFrame(pixels)
}
}
}
}

func renderContextChanged(newRenderContext: AVVideoCompositionRenderContext){
// I don't have any code in this block
}

// This is interesting - this is called twice,
// Once before startVideoCompositionRequest is called,
// And once after. In the case when startVideoCompositionRequest
// Is not called, this is simply called twice in a row
func cancelAllPendingVideoCompositionRequests(){
dispatch_barrier_async(self.renderQueue){
print("Cancelled")
}
}
}

I've been looking at Apple's AVCustomEdit sample project for a lot of guidance on this, but I can't seem to find any reason in it why this is happening.

How can I get the request.sourceFrameByTrackID: function to be called correctly, and provide a valid CVPixelBuffer for each frame?

Best Answer

All of the code for this utility is on GitHub.

It turns out that the requiredSourceTrackIDs variable in the custom AVVideoCompositionInstruction class (VideoFilterCompositionInstruction in the question) has to be set to an array containing the track IDs:

override var requiredSourceTrackIDs: [NSValue]{
    get{
        return [
            NSNumber(value: Int(self.trackID))
        ]
    }
}
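The key detail in this fix is bridging the CMPersistentTrackID (a CoreMedia typealias for Int32) into the [NSValue] array that requiredSourceTrackIDs expects - an empty array tells AVFoundation that no source frames are needed, which is why request.sourceFrameByTrackID(...) returned nil in the question. A minimal standalone sketch of that conversion, with a plain Int32 alias standing in for the CoreMedia typedef so it runs outside of an AVFoundation project:

```swift
import Foundation

// Stand-in for CoreMedia's CMPersistentTrackID, which is a typealias for Int32.
typealias PersistentTrackID = Int32

// Mirrors the requiredSourceTrackIDs override: wrap the track ID in an
// NSNumber (an NSValue subclass) so AVFoundation can match it against the
// composition's tracks and deliver a source frame for it on each request.
func requiredSourceTrackIDs(for trackID: PersistentTrackID) -> [NSValue] {
    return [NSNumber(value: Int(trackID))]
}

let ids = requiredSourceTrackIDs(for: 1)
print(ids.count)                                   // 1
print((ids.first as? NSNumber)?.int32Value ?? -1)  // 1
```

For a composition with several filtered tracks, the same approach applies - the array would contain one NSNumber per track whose frames the compositor needs.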

So the final custom composition instruction class is:

class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{
    let trackID: CMPersistentTrackID
    let filters: [CIFilter]
    let context: CIContext

    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}
    override var requiredSourceTrackIDs: [NSValue]{get{return [NSNumber(value: Int(self.trackID))]}}
    override var containsTweening: Bool{get{return false}}

    init(trackID: CMPersistentTrackID, filters: [CIFilter], context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder){
        fatalError("init(coder:) has not been implemented")
    }
}

All of the code for this utility is also on GitHub.

Regarding "ios - Custom AVVideoCompositing class not working as expected", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39137099/
