
ios - How to modify an AVFoundation video copy (multiple splits and segment removal)?


I've been working through Apple's sample code Building a Feature-Rich App for Sports Analysis and its companion WWDC video to learn about AVFoundation and VNDetectTrajectoriesRequest. My goal is to let the user import a video (this part works: the user is shown a UIDocumentBrowserViewController, picks a video file, and a copy is made), but I only want the segments of the original video in which a trajectory is detected from the moving ball to be copied.

I've tried to get a handle on both halves of this; at the very least I've located where the video copy happens and where the trajectory request is made.

The full video copy happens in CameraViewController.swift (for now I'm only starting from an imported video rather than reading live from the device's camera), at line 160:

func startReadingAsset(_ asset: AVAsset) {
    videoRenderView = VideoRenderView(frame: view.bounds)
    setupVideoOutputView(videoRenderView)

    // Setup display link
    let displayLink = CADisplayLink(target: self, selector: #selector(handleDisplayLink(_:)))
    displayLink.preferredFramesPerSecond = 0 // Use display's rate
    displayLink.isPaused = true
    displayLink.add(to: RunLoop.current, forMode: .default)

    guard let track = asset.tracks(withMediaType: .video).first else {
        AppError.display(AppError.videoReadingError(reason: "No video tracks found in AVAsset."), inViewController: self)
        return
    }

    let playerItem = AVPlayerItem(asset: asset)
    let player = AVPlayer(playerItem: playerItem)
    let settings = [
        String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: settings)
    playerItem.add(output)
    player.actionAtItemEnd = .pause
    player.play()

    self.displayLink = displayLink
    self.playerItemOutput = output
    self.videoRenderView.player = player

    let affineTransform = track.preferredTransform.inverted()
    let angleInDegrees = atan2(affineTransform.b, affineTransform.a) * CGFloat(180) / CGFloat.pi
    var orientation: UInt32 = 1
    switch angleInDegrees {
    case 0:
        orientation = 1 // Recording button is on the right
    case 180, -180:
        orientation = 3 // abs(180) degree rotation recording button is on the right
    case 90:
        orientation = 8 // 90 degree CW rotation recording button is on the top
    case -90:
        orientation = 6 // 90 degree CCW rotation recording button is on the bottom
    default:
        orientation = 1
    }
    videoFileBufferOrientation = CGImagePropertyOrientation(rawValue: orientation)!
    videoFileFrameDuration = track.minFrameDuration
    displayLink.isPaused = false
}

@objc
private func handleDisplayLink(_ displayLink: CADisplayLink) {
    guard let output = playerItemOutput else {
        return
    }

    videoFileReadingQueue.async {
        let nextTimeStamp = displayLink.timestamp + displayLink.duration
        let itemTime = output.itemTime(forHostTime: nextTimeStamp)
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else {
            return
        }
        guard let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) else {
            return
        }
        // Create sample buffer from pixel buffer
        var sampleBuffer: CMSampleBuffer?
        var formatDescription: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil, imageBuffer: pixelBuffer, formatDescriptionOut: &formatDescription)
        let duration = self.videoFileFrameDuration
        var timingInfo = CMSampleTimingInfo(duration: duration, presentationTimeStamp: itemTime, decodeTimeStamp: itemTime)
        CMSampleBufferCreateForImageBuffer(allocator: nil,
                                           imageBuffer: pixelBuffer,
                                           dataReady: true,
                                           makeDataReadyCallback: nil,
                                           refcon: nil,
                                           formatDescription: formatDescription!,
                                           sampleTiming: &timingInfo,
                                           sampleBufferOut: &sampleBuffer)
        if let sampleBuffer = sampleBuffer {
            self.outputDelegate?.cameraViewController(self, didReceiveBuffer: sampleBuffer, orientation: self.videoFileBufferOrientation)
            DispatchQueue.main.async {
                let stateMachine = self.gameManager.stateMachine
                if stateMachine.currentState is GameManager.SetupCameraState {
                    // Once we received first buffer we are ready to proceed to the next state
                    stateMachine.enter(GameManager.DetectingBoardState.self)
                }
            }
        }
    }
}

Line 139, self.outputDelegate?.cameraViewController(self, didReceiveBuffer: sampleBuffer, orientation: self.videoFileBufferOrientation), is where the video sample buffer is handed to the Vision subsystem for trajectory analysis, the second part. That delegate callback is implemented in GameViewController.swift at line 335:

// Perform the trajectory request in a separate dispatch queue.
trajectoryQueue.async {
    do {
        try visionHandler.perform([self.detectTrajectoryRequest])
        if let results = self.detectTrajectoryRequest.results {
            DispatchQueue.main.async {
                self.processTrajectoryObservations(controller, results)
            }
        }
    } catch {
        AppError.display(error, inViewController: self)
    }
}

The detected trajectories are drawn over the video in self.processTrajectoryObservations(controller, results).

The problem I'm stuck on now is modifying this so that, instead of drawing trajectories onto a new video, only the portions of the original video whose frames contained a detected trajectory get copied.
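From reading the Vision documentation, it looks like each VNTrajectoryObservation inherits a timeRange property from VNObservation (iOS 14 and later), so I imagine the first step is to collect those windows rather than draw them. A rough sketch of what I have in mind (detectedRanges and recordTrajectoryRanges are names I've made up):

import Vision
import CoreMedia

// Sketch (not from the sample project): gather the time windows in
// which trajectories were seen, instead of drawing them.
var detectedRanges = [CMTimeRange]()

func recordTrajectoryRanges(_ results: [VNTrajectoryObservation]) {
    for observation in results where observation.confidence > 0.9 {
        // VNObservation.timeRange (iOS 14+) reports when in the asset
        // this observation occurred.
        detectedRanges.append(observation.timeRange)
    }
}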

Best Answer

If you know how many seconds into the video you want to start and the duration, you can use AVAssetExportSession to transcode/export portions of the original video.

Here's a piece of code I used a few years ago to do something similar. It dates from Swift 3; the syntax below is updated for current Swift.

let asset = AVURLAsset(url: originalFileURL)
let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)!

exportSession.outputURL = someOutputURLYouWant
exportSession.outputFileType = .mp4 // other container types can be chosen here

// A timescale of 600 is a common choice for video (divisible by 24, 25, and 30 fps).
let start = CMTimeMakeWithSeconds(secondsIntoVideoFloat, preferredTimescale: 600)
let duration = CMTimeMakeWithSeconds(numberOfSecondsFloat, preferredTimescale: 600)
exportSession.timeRange = CMTimeRange(start: start, duration: duration)

exportSession.exportAsynchronously {
    switch exportSession.status {
    case .completed:
        break // use the file written to someOutputURLYouWant
    case .failed, .cancelled:
        break // inspect exportSession.error
    default:
        break
    }
}
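For the multiple splits and segment removal in the question's title, the same idea generalizes: collect the detected time ranges, copy each one into an AVMutableComposition, and export the composition once. A minimal sketch, not from the sample project (exportSegments and its parameters are invented names; overlapping or adjacent ranges should be merged before calling it):

import AVFoundation

// Sketch: stitch only the detected segments of `asset` into one movie.
// `ranges` would be the merged CMTimeRanges collected during analysis.
func exportSegments(of asset: AVAsset,
                    ranges: [CMTimeRange],
                    to outputURL: URL,
                    completion: @escaping (Error?) -> Void) {
    let composition = AVMutableComposition()
    guard let sourceTrack = asset.tracks(withMediaType: .video).first,
          let compositionTrack = composition.addMutableTrack(withMediaType: .video,
                                                             preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(NSError(domain: "TrajectoryExport", code: 1,
                           userInfo: [NSLocalizedDescriptionKey: "No video track"]))
        return
    }
    // Preserve the source orientation.
    compositionTrack.preferredTransform = sourceTrack.preferredTransform

    // Insert each kept range back to back; everything between ranges is dropped.
    var insertionPoint = CMTime.zero
    do {
        for range in ranges {
            try compositionTrack.insertTimeRange(range, of: sourceTrack, at: insertionPoint)
            insertionPoint = CMTimeAdd(insertionPoint, range.duration)
        }
    } catch {
        completion(error)
        return
    }

    guard let session = AVAssetExportSession(asset: composition,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        completion(NSError(domain: "TrajectoryExport", code: 2,
                           userInfo: [NSLocalizedDescriptionKey: "Could not create export session"]))
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.exportAsynchronously {
        completion(session.error) // nil on success
    }
}

Because each inserted range is re-timed to sit immediately after the previous one, the exported movie contains only the kept segments, in their original order.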

For ios - How to modify an AVFoundation video copy (multiple splits and segment removal)?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65895134/
