
ios - Video length - image array to video


I posted a question about this before but didn't get a working solution. Perhaps I didn't state the problem clearly enough in Frame Duration time - UIImage array to movie, which is why I'm asking it again.

I'm working on a project where I need to export a video from an array of UIImages. My array contains 4 images, and I want each image to be shown for 5 seconds, which means the video should be 20 seconds long. But the exported video is 25 seconds long: the first image is shown for 10 seconds and the last 3 images for 15 seconds (5 seconds each). So the last 3 images behave correctly. This is the code I'm using...

import AVFoundation
import UIKit

var outputSize = CGSize(width: 1920, height: 1280)

func build(outputSize: CGSize) {
    // Resolve the output URL inside the app's Documents directory.
    let fileManager = FileManager.default
    let urls = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    guard let documentDirectory = urls.first else {
        fatalError("documentDir Error")
    }
    let videoOutputURL = documentDirectory.appendingPathComponent("OutputVideo.mp4")
    // Delete any leftover file from a previous run.
    if fileManager.fileExists(atPath: videoOutputURL.path) {
        do {
            try fileManager.removeItem(atPath: videoOutputURL.path)
        } catch {
            fatalError("Unable to delete file: \(error) : \(#function).")
        }
    }
    guard let videoWriter = try? AVAssetWriter(outputURL: videoOutputURL, fileType: AVFileTypeMPEG4) else {
        fatalError("AVAssetWriter error")
    }
    // H.264 output at the requested resolution.
    let outputSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecH264,
                                         AVVideoWidthKey: NSNumber(value: Float(outputSize.width)),
                                         AVVideoHeightKey: NSNumber(value: Float(outputSize.height))]
    guard videoWriter.canApply(outputSettings: outputSettings, forMediaType: AVMediaTypeVideo) else {
        fatalError("Negative : Can't apply the Output settings...")
    }
    let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
    // The adaptor wraps CVPixelBuffers as video samples for the writer input.
    let sourcePixelBufferAttributesDictionary: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                                                                kCVPixelBufferWidthKey as String: NSNumber(value: Float(outputSize.width)),
                                                                kCVPixelBufferHeightKey as String: NSNumber(value: Float(outputSize.height))]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    if videoWriter.canAdd(videoWriterInput) {
        videoWriter.add(videoWriterInput)
    }
    if videoWriter.startWriting() {
        videoWriter.startSession(atSourceTime: kCMTimeZero)
        assert(pixelBufferAdaptor.pixelBufferPool != nil)
        let media_queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: media_queue, using: { () -> Void in
            let fps: Int32 = 1
            let framePerSecond: Int64 = 5
            let frameDuration = CMTimeMake(framePerSecond, fps)
            var frameCount: Int64 = 0
            var appendSucceeded = true
            while !self.choosenPhotos.isEmpty { // choosenPhotos is the image array
                if videoWriterInput.isReadyForMoreMediaData {
                    let nextPhoto = self.choosenPhotos.remove(at: 0)
                    // Timestamp for this frame; frame 0 is handled specially.
                    let lastFrameTime = CMTimeMake(frameCount * framePerSecond, fps)
                    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                    print("presentationTime-------------\(presentationTime)")
                    var pixelBuffer: CVPixelBuffer? = nil
                    let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
                    if let pixelBuffer = pixelBuffer, status == 0 {
                        let managedPixelBuffer = pixelBuffer
                        CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                        let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                        let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                        context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))
                        // Scale the photo to fit the output size, centred on a cleared background.
                        let horizontalRatio = CGFloat(self.outputSize.width) / nextPhoto.size.width
                        let verticalRatio = CGFloat(self.outputSize.height) / nextPhoto.size.height
                        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                        let newSize = CGSize(width: nextPhoto.size.width * aspectRatio, height: nextPhoto.size.height * aspectRatio)
                        let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                        let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0
                        context?.draw(nextPhoto.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
                        CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                        appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
                    } else {
                        print("Failed to allocate pixel buffer")
                        appendSucceeded = false
                    }
                }
                if !appendSucceeded {
                    break
                }
                frameCount += 1
            }
            videoWriterInput.markAsFinished()
            videoWriter.finishWriting { () -> Void in
                self.imageArrayToVideoComplete = true
                print("Image array to mutable video complete :)")
            }
        })
    }
}
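For reference, the exported clip's length can be checked programmatically; this small helper (hypothetical, not part of the original project) prints the duration so the 20-second vs 25-second difference is easy to confirm after each change:

import AVFoundation

// Hypothetical helper (not in the original project): prints the exported
// clip's length in seconds.
func printExportedDuration(at url: URL) {
    let asset = AVAsset(url: url)
    print("Exported duration: \(CMTimeGetSeconds(asset.duration)) s")
}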

Actually, I'm a bit confused about the variable presentationTime, which is why I printed it. The Xcode output log looks like this:

presentationTime-------------CMTime(value: 0, timescale: 1, flags: __C.CMTimeFlags(rawValue: 1), epoch: 0)
presentationTime-------------CMTime(value: 10, timescale: 1, flags: __C.CMTimeFlags(rawValue: 1), epoch: 0)
presentationTime-------------CMTime(value: 15, timescale: 1, flags: __C.CMTimeFlags(rawValue: 1), epoch: 0)
presentationTime-------------CMTime(value: 20, timescale: 1, flags: __C.CMTimeFlags(rawValue: 1), epoch: 0)

Here the first value is 0 and the second is 10, a difference of 10. From the second presentationTime onward it works fine (the value increases by 5 each time). I think this is where the problem lies. What is the minimal change I need to make?
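To see where that difference of 10 comes from, the timestamp arithmetic from the loop can be replayed on its own; this standalone sketch reproduces the logged sequence exactly:

import CoreMedia

let fps: Int32 = 1
let framePerSecond: Int64 = 5
let frameDuration = CMTimeMake(framePerSecond, fps) // 5 seconds

for frameCount: Int64 in 0..<4 {
    // Frame 0 is stamped at 0, but every later frame gets frameDuration
    // added on top of frameCount * framePerSecond, so frame 1 lands at
    // 5 + 5 = 10 instead of 5.
    let lastFrameTime = CMTimeMake(frameCount * framePerSecond, fps)
    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
    print("frame \(frameCount): \(CMTimeGetSeconds(presentationTime)) s")
}
// Prints 0.0, 10.0, 15.0 and 20.0, matching the log above.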

Best Answer

Just comment out this line in your buildVideoFromImageArray function:

// videoWriter.startSession(atSourceTime: kCMTimeZero)

and add the following lines in its place:

let zeroTime = CMTimeMake(Int64(self.reloadDurationFromSlideShow), Int32(1))
videoWriter.startSession(atSourceTime: zeroTime)

I have tested this. Please test it as well and let me know how it goes.
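For what it's worth, here is a minimal sketch of why this change works, assuming reloadDurationFromSlideShow holds the per-image duration (5 seconds in the question): starting the session later makes the writer clip any media that falls before the session's start time.

// Assumption: self.reloadDurationFromSlideShow == 5 (seconds per image).
let zeroTime = CMTimeMake(Int64(self.reloadDurationFromSlideShow), Int32(1))
videoWriter.startSession(atSourceTime: zeroTime) // session begins at t = 5 s

// Frames are still appended at 0, 10, 15 and 20 seconds, but media before
// the session start is clipped:
//   image 1:  5 s to 10 s  (the stray extra 5 s are cut off)
//   image 2: 10 s to 15 s
//   image 3: 15 s to 20 s
//   image 4: 20 s to 25 s
// Total: 20 s, as intended.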

As for ios - Video length - image array to video, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41216223/
