ios - Mimic AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness


Based on this SO post, the code below rotates, centers, and crops a video captured live by the user.

The capture session uses AVCaptureSessionPresetHigh as the preset, and the preview layer uses AVLayerVideoGravityResizeAspectFill as the video gravity. This preview is extremely sharp.

The exported video, however, is not as sharp, ostensibly because scaling down from the 5S back camera's 1920x1080 resolution to 320x568 (the export's target size) blurs the image by discarding pixels?

Assuming there is no way to scale from 1920x1080 to 320x568 without some blurriness, the question becomes: how can the sharpness of the preview layer be mimicked?

Somehow Apple is using an algorithm to convert the 1920x1080 video into a crisp 320x568 preview frame.

Is there a way to mimic this with AVAssetWriter or AVAssetExportSession?

func cropVideo() {
    // Set start time
    let startTime = NSDate().timeIntervalSince1970

    // Create main composition & its tracks
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // Get source video & audio tracks
    let videoPath = getFilePath(curSlice!.getCaptureURL())
    let videoURL = NSURL(fileURLWithPath: videoPath)
    let videoAsset = AVURLAsset(URL: videoURL, options: nil)
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    let videoSize = sourceVideoTrack.naturalSize

    // Get rounded time for video
    let roundedDur = floor(curSlice!.getDur() * 100) / 100
    let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)

    // Add source tracks to composition
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange while exporting video: \(error)")
    }

    // Create video composition
    // -- Set video frame
    let outputSize = view.bounds.size
    let videoComposition = AVMutableVideoComposition()
    print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

    // -- Set parent layer
    let parentLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
    parentLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Set composition props
    videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
    videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

    // -- Create video composition instruction
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)

    // -- Use layer instruction to match video to output size, mimicking AVLayerVideoGravityResizeAspectFill
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
    videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)

    // -- Add layer instruction
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Create video layer
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame

    // -- Add sublayers to parent layer
    parentLayer.addSublayer(videoLayer)

    // -- Set animation tool
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    // Create exporter
    let outputURL = getFilePath(getUniqueFilename(gMP4File))
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = NSURL(fileURLWithPath: outputURL)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComposition
    exporter.shouldOptimizeForNetworkUse = true
    exporter.canPerformMultiplePassesOverSourceMediaData = true

    // Export to video
    exporter.exportAsynchronouslyWithCompletionHandler({
        // Log status
        let asset = AVAsset(URL: exporter.outputURL!)
        print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \(NSDate().timeIntervalSince1970 - startTime).")

        // Tell delegate
        //delegate.didEndExport(exporter)
        self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
        gUser.save()
    })
}


// Returns transform, mimicking AVLayerVideoGravityResizeAspectFill, that converts video of <inputSize> to one of <outputSize>
private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
    // Compute ratios between video & output sizes
    let widthRatio = outputSize.width / videoSize.width
    let heightRatio = outputSize.height / videoSize.height

    // Set scale to larger of two ratios since goal is to fill output bounds
    let scale = widthRatio >= heightRatio ? widthRatio : heightRatio

    // Compute video size after scaling
    let newWidth = videoSize.width * scale
    let newHeight = videoSize.height * scale

    // Compute translation required to center image after scaling
    // -- Assumes CoreAnimationTool places video frame at (0, 0). Because scale transform is applied first, we must adjust
    //    each translation point by scale factor.
    let translateX = (outputSize.width - newWidth) / 2 / scale
    let translateY = (outputSize.height - newHeight) / 2 / scale

    // Set transform to resize video while retaining aspect ratio
    let resizeTransform = CGAffineTransformMakeScale(scale, scale)

    // Apply translation & create final transform
    let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)

    // Return final transform
    return finalTransform
}
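
To make the transform concrete, here is a worked example with hypothetical numbers, aspect-filling a landscape 1920x1080 track into a 320x568 render size (the values just trace the arithmetic of getResizeAspectFillTransform above):

    let videoSize = CGSizeMake(1920, 1080)
    let outputSize = CGSizeMake(320, 568)

    let widthRatio = outputSize.width / videoSize.width      // 320/1920  ≈ 0.167
    let heightRatio = outputSize.height / videoSize.height   // 568/1080  ≈ 0.526
    let scale = max(widthRatio, heightRatio)                 // 0.526: larger ratio wins, so the frame is filled

    let newWidth = videoSize.width * scale                   // ≈ 1009.8: overflows the 320-wide frame, gets cropped
    let newHeight = videoSize.height * scale                 // = 568: matches the output height exactly

    let translateX = (outputSize.width - newWidth) / 2 / scale    // ≈ -655.8: centers the overflow horizontally (pre-scale units)
    let translateY = (outputSize.height - newHeight) / 2 / scale  // = 0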

320x568 video captured with Tim's code:

[screenshot]

640x1136 video captured with Tim's code:

[screenshot]

Best Answer

Give this a try. Start a new Single View project in Swift, replace the ViewController with this code, and you should be good to go!

I've set up a previewLayer at a different size from the output; change it at the top of the file.

I added some basic orientation support. The output dimensions differ slightly between landscape and portrait. You can specify any video size dimensions you like here and it should work fine.

Check out the videoSettings dictionary (in getAssetWriter()) for the codec and dimensions of the output file. You can also add other settings there to deal with keyFrameIntervals etc. to tweak the output size.

I've added a recording image to show when it's recording (tap to start, tap to stop). You'll need to add an asset called "recording" to Assets.xcassets (or comment out the line in setupViewControls() that tries to load it).

That's pretty much it. Good luck!

Oh, and it dumps the video into the app's Documents directory; you'll need to go to Window/Devices and download the Container to view the video easily. There's a section in the TODO where you can hook in and copy the file to the PhotoLibrary (making testing easier); a minimal sketch of that follows the code below.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

    let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
    let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

    var recordingImage : UIImageView = UIImageView()

    var previewLayer : AVCaptureVideoPreviewLayer?

    var audioQueue : dispatch_queue_t?
    var videoQueue : dispatch_queue_t?

    let captureSession = AVCaptureSession()
    var assetWriter : AVAssetWriter?
    var assetWriterInputCamera : AVAssetWriterInput?
    var assetWriterInputAudio : AVAssetWriterInput?
    var outputConnection: AVCaptureConnection?

    var captureDeviceBack : AVCaptureDevice?
    var captureDeviceFront : AVCaptureDevice?
    var captureDeviceMic : AVCaptureDevice?
    var sessionSetupDone: Bool = false

    var isRecordingStarted = false
    //var recordingStartedTime = kCMTimeZero
    var videoOutputURL : NSURL?

    var captureSize: CGSize = CGSizeMake(1280, 720)
    var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

    var captureDeviceTrigger = true
    var captureDevice: AVCaptureDevice? {
        get {
            return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
        }
    }

    override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
        return UIInterfaceOrientationMask.AllButUpsideDown
    }

    override func shouldAutorotate() -> Bool {
        if isRecordingStarted {
            return false
        }

        if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
            return false
        }

        if let cameraPreview = self.previewLayer {
            if let connection = cameraPreview.connection {
                if connection.supportsVideoOrientation {
                    switch UIDevice.currentDevice().orientation {
                    case .LandscapeLeft:
                        connection.videoOrientation = .LandscapeRight
                    case .LandscapeRight:
                        connection.videoOrientation = .LandscapeLeft
                    case .Portrait:
                        connection.videoOrientation = .Portrait
                    case .FaceUp:
                        return false
                    case .FaceDown:
                        return false
                    default:
                        break
                    }
                }
            }
        }

        return true
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        setupViewControls()

        //self.recordingStartedTime = kCMTimeZero

        // Setup capture session related logic
        videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
        audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

        setupCaptureDevices()
        pre_start()
    }

    //MARK: UI methods
    func setupViewControls() {
        // TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
        recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
        recordingImage.image = UIImage(named: "recording")
        recordingImage.hidden = true
        self.view.addSubview(recordingImage)

        // Setup tap to record and stop
        let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
        tapGesture.numberOfTapsRequired = 1
        self.view.addGestureRecognizer(tapGesture)
    }

    func didGetTapped(selector: UITapGestureRecognizer) {
        if self.isRecordingStarted {
            self.view.gestureRecognizers![0].enabled = false
            recordingImage.hidden = true

            self.stopRecording()
        } else {
            recordingImage.hidden = false
            self.startRecording()
        }

        self.isRecordingStarted = !self.isRecordingStarted
    }

    func switchCamera(selector: UIButton) {
        self.captureDeviceTrigger = !self.captureDeviceTrigger

        pre_start()
    }

    //MARK: Video logic
    func setupCaptureDevices() {
        let devices = AVCaptureDevice.devices()

        for device in devices {
            if device.hasMediaType(AVMediaTypeVideo) {
                if device.position == AVCaptureDevicePosition.Front {
                    captureDeviceFront = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Front camera is found")
                }
                if device.position == AVCaptureDevicePosition.Back {
                    captureDeviceBack = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Back camera is found")
                }
            }

            if device.hasMediaType(AVMediaTypeAudio) {
                captureDeviceMic = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Audio device is found")
            }
        }
    }

    func alertPermission() {
        let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
        permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
            print("Video Controller: Permission for camera/mic denied. Going to settings")
            UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
            print(UIApplicationOpenSettingsURLString)
        }))
        presentViewController(permissionAlert, animated: true, completion: nil)
    }

    func pre_start() {
        NSLog("Video Controller: pre_start")
        let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
        let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
        if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
            // Send the user to Settings. (Calling pre_start() again here would recurse
            // forever while permission is still denied, so just return.)
            self.alertPermission()
            return
        }

        if (videoPermission == AVAuthorizationStatus.Authorized) {
            self.start()
            return
        }

        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted: Bool) -> Void in
            self.pre_start()
        })
    }

    func start() {
        NSLog("Video Controller: start")
        if captureSession.running {
            captureSession.beginConfiguration()

            if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
                captureSession.removeInput(currentInput)
            }

            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            } catch {
                print("Video Controller: begin session. Error adding video input device")
            }

            captureSession.commitConfiguration()
            return
        }

        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
        } catch {
            print("Video Controller: start. error adding device: \(error)")
        }

        if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
            self.previewLayer = layer
            layer.videoGravity = AVLayerVideoGravityResizeAspect

            if let layerConnection = layer.connection {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                } else if UIDevice.currentDevice().orientation == .Portrait {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
                }
            }

            // TODO: Set the output size of the Preview Layer here
            layer.frame = previewFrame
            self.view.layer.insertSublayer(layer, atIndex: 0)
        }

        let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
        captureSession.addOutput(videoOutput)
        if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
            self.outputConnection = connection
        }

        let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
        captureSession.addOutput(audioOutput)

        captureSession.startRunning()
    }

    func getAssetWriter() -> AVAssetWriter? {
        NSLog("Video Controller: getAssetWriter")
        let fileManager = NSFileManager.defaultManager()
        let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory: NSURL = urls.first else {
            print("Video Controller: getAssetWriter: documentDir Error")
            return nil
        }

        let local_video_name = NSUUID().UUIDString + ".mp4"
        self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

        guard let url = self.videoOutputURL else {
            return nil
        }

        self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

        guard let writer = self.assetWriter else {
            return nil
        }

        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey : AVVideoCodecH264,
            AVVideoWidthKey : captureSize.width,
            AVVideoHeightKey : captureSize.height
        ]

        assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterInputCamera?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputCamera!)

        let audioSettings : [String : AnyObject] = [
            AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
            AVNumberOfChannelsKey : 2,
            AVSampleRateKey : NSNumber(double: 44100.0)
        ]

        assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterInputAudio?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputAudio!)

        return writer
    }

    func configurePreset() {
        NSLog("Video Controller: configurePreset")
        if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
            captureSession.sessionPreset = AVCaptureSessionPreset1280x720
        } else {
            captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
        }
    }

    func startRecording() {
        NSLog("Video Controller: Start recording")

        captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

        if let connection = self.outputConnection {
            if connection.supportsVideoOrientation {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                    NSLog("orientation: right")
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                    NSLog("orientation: left")
                } else {
                    connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                    NSLog("orientation: portrait")
                }
            }
        }

        if let writer = getAssetWriter() {
            self.assetWriter = writer

            let recordingClock = self.captureSession.masterClock
            writer.startWriting()
            writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
        }
    }

    func stopRecording() {
        NSLog("Video Controller: Stop recording")

        if let writer = self.assetWriter {
            writer.finishWritingWithCompletionHandler {
                print("Recording finished")
                // TODO: Handle the video file, copy it from the temp directory etc.
            }
        }
    }

    //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if !self.isRecordingStarted {
            return
        }

        if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
            dispatch_async(audioQueue!) {
                audio.appendSampleBuffer(sampleBuffer)
            }
            return
        }

        if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
            dispatch_async(videoQueue!) {
                camera.appendSampleBuffer(sampleBuffer)
            }
        }
    }
}
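
For the TODO in stopRecording, a minimal sketch of copying the finished file to the Photo Library might look like the following. This is an assumption-laden sketch, not part of the original answer: it uses the simple UIKit helpers and assumes the app has permission to save to the Camera Roll.

    func stopRecording() {
        NSLog("Video Controller: Stop recording")

        if let writer = self.assetWriter {
            writer.finishWritingWithCompletionHandler {
                print("Recording finished")
                // Sketch: save the file we just wrote to the Camera Roll.
                if let path = self.videoOutputURL?.path where UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
                    UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
                }
            }
        }
    }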

Additional edit

From our further conversation in the comments, what you want is to reduce the physical size of the output video while keeping its dimensions as high as possible (to retain quality). Remember, the size you use to position a layer on screen is in points, not pixels; you're writing an output file measured in pixels, so it's not a 1:1 comparison to the iPhone's screen reference units.
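
As an illustration of that point/pixel distinction (the numbers assume an iPhone 5S, whose Retina screen has a 2x scale factor; this is also why the 320x568-point preview layer looks so sharp: it is actually rasterized at 640x1136 pixels):

    let pointSize = view.bounds.size               // 320 x 568 points on a 5S
    let screenScale = UIScreen.mainScreen().scale  // 2.0 on a Retina screen
    let pixelSize = CGSizeMake(pointSize.width * screenScale,
                               pointSize.height * screenScale)  // 640 x 1136 pixels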

To reduce the size of the output file, you have two easy options:
  • Reduce the resolution. But if you go too small, quality suffers on playback, especially if the video gets scaled back up when played. Try 640x360 or 720x480 for your output pixels (a sketch follows this list).
  • Adjust the compression settings. The iPhone's default settings generally produce a higher-quality (larger output file size) video.
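
For the first option, in this project that is just a matter of lowering the capture size constants at the top of the ViewController, e.g. (the exact values are up to you; these are illustrative):

    let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(640, 360)
    let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(360, 640)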

For the second, replace your video settings with these options and see how you go:
    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey : AVVideoCodecH264,
        AVVideoWidthKey : captureSize.width,
        AVVideoHeightKey : captureSize.height,
        AVVideoCompressionPropertiesKey : [
            AVVideoAverageBitRateKey : 2000000,
            AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
            AVVideoMaxKeyFrameIntervalKey : 90
        ]
    ]

AVVideoCompressionPropertiesKey tells AVFoundation how to actually compress the video. The lower the bit rate, the higher the compression (so it streams better and uses less disk space, but at lower quality). MaxKeyFrameInterval is how often a keyframe (a fully self-contained frame) is written out; setting it higher (at our roughly 30 frames per second, 90 means one keyframe every 3 seconds) also reduces quality but shrinks the size too. You'll find the constants referenced here: https://developer.apple.com/library/prerelease/ios/documentation/AVFoundation/Reference/AVFoundation_Constants/index.html#//apple_ref/doc/constant_group/Video_Settings
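
As a quick back-of-envelope check of what that bit rate means for file size (video stream only, ignoring audio and container overhead):

    let averageBitRate = 2_000_000.0                          // bits per second, as set above
    let bytesPerSecond = averageBitRate / 8                   // 250,000 bytes per second
    let megabytesPerMinute = bytesPerSecond * 60 / 1_000_000  // ≈ 15 MB per minute of video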

Regarding ios - Mimic AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35261603/
