Based on this SO post, the code below rotates, centers, and crops a video captured live by the user.
The capture session uses AVCaptureSessionPresetHigh as its preset, and the preview layer uses AVLayerVideoGravityResizeAspectFill as its video gravity. The preview is extremely sharp.
The exported video, however, is not as sharp, ostensibly because scaling from the 5S back camera's 1920x1080 resolution down to 320x568 (the exported video's target size) discards pixels and introduces blur.
Assuming there's no way to scale from 1920x1080 to 320x568 without some blur, the question becomes: how do I match the sharpness of the preview layer?
Somehow Apple is using an algorithm that turns the 1920x1080 video into a crisp 320x568 preview frame.
Is there a way to mimic this with AVAssetWriter or AVAssetExportSession?
func cropVideo() {
    // Set start time
    let startTime = NSDate().timeIntervalSince1970

    // Create main composition & its tracks
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // Get source video & audio tracks
    let videoPath = getFilePath(curSlice!.getCaptureURL())
    let videoURL = NSURL(fileURLWithPath: videoPath)
    let videoAsset = AVURLAsset(URL: videoURL, options: nil)
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    let videoSize = sourceVideoTrack.naturalSize

    // Get rounded time for video
    let roundedDur = floor(curSlice!.getDur() * 100) / 100
    let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)

    // Add source tracks to composition
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange while exporting video: \(error)")
    }

    // Create video composition
    // -- Set video frame
    let outputSize = view.bounds.size
    let videoComposition = AVMutableVideoComposition()
    print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

    // -- Set parent layer
    let parentLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
    parentLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Set composition props
    videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
    videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

    // -- Create video composition instruction
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)

    // -- Use layer instruction to match video to output size, mimicking AVLayerVideoGravityResizeAspectFill
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
    videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)

    // -- Add layer instruction
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Create video layer
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame

    // -- Add sublayers to parent layer
    parentLayer.addSublayer(videoLayer)

    // -- Set animation tool
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    // Create exporter
    let outputURL = getFilePath(getUniqueFilename(gMP4File))
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = NSURL(fileURLWithPath: outputURL)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComposition
    exporter.shouldOptimizeForNetworkUse = true
    exporter.canPerformMultiplePassesOverSourceMediaData = true

    // Export to video
    exporter.exportAsynchronouslyWithCompletionHandler({
        // Log status
        let asset = AVAsset(URL: exporter.outputURL!)
        print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \(NSDate().timeIntervalSince1970 - startTime).")

        // Tell delegate
        //delegate.didEndExport(exporter)
        self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
        gUser.save()
    })
}

// Returns transform, mimicking AVLayerVideoGravityResizeAspectFill, that converts video of <inputSize> to one of <outputSize>
private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
    // Compute ratios between video & output sizes
    let widthRatio = outputSize.width / videoSize.width
    let heightRatio = outputSize.height / videoSize.height

    // Set scale to larger of two ratios since goal is to fill output bounds
    let scale = widthRatio >= heightRatio ? widthRatio : heightRatio

    // Compute video size after scaling
    let newWidth = videoSize.width * scale
    let newHeight = videoSize.height * scale

    // Compute translation required to center image after scaling
    // -- Assumes CoreAnimationTool places video frame at (0, 0). Because scale transform is applied first, we must adjust
    //    each translation point by scale factor.
    let translateX = (outputSize.width - newWidth) / 2 / scale
    let translateY = (outputSize.height - newHeight) / 2 / scale

    // Set transform to resize video while retaining aspect ratio
    let resizeTransform = CGAffineTransformMakeScale(scale, scale)

    // Apply translation & create final transform
    let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)

    // Return final transform
    return finalTransform
}
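As a sanity check, plugging the question's numbers into getResizeAspectFillTransform gives the following (a hypothetical walkthrough, assuming a 1920x1080 naturalSize source and the 320x568 output):

// Hypothetical worked example: videoSize = 1920x1080, outputSize = 320x568
// widthRatio  = 320 / 1920 ≈ 0.167
// heightRatio = 568 / 1080 ≈ 0.526   <- larger ratio wins, since the goal is to fill
// newWidth    = 1920 * 0.526 ≈ 1010,  newHeight = 1080 * 0.526 = 568
// translateX  = (320 - 1010) / 2 / 0.526 ≈ -656  (equal crop off left & right)
// translateY  = (568 - 568) / 2 / 0.526 = 0      (height already fills exactly)
// Net effect: the full 1080p frame is scaled to ~53% and center-cropped to 320x568.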
Best answer
Try this. Start a new Single View project in Swift, replace the ViewController with this code, and you should be good to go! (The approach: record with AVAssetWriter straight to the target size instead of exporting and scaling afterwards.)
I've set up a previewLayer whose size differs from the output size; change it at the top of the file.
I added some basic orientation support. The output size differs slightly between landscape and portrait. You can specify any video size dimensions you like here and it should work fine.
Check out the videoSettings dictionary (line 278) for the output file's codec and size. You can also add other settings there to deal with keyFrameIntervals etc. to tweak the output file size.
I added a recording image to show when it's recording (tap to start, tap to stop). You'll need to add an asset called "recording" to Assets.xcassets (or comment out line 106 where it tries to load it).
That's pretty much it. Good luck!
Oh, and it dumps the video into the project directory; you'll need to go to Window/Devices and download the Container to view the video easily. There's a TODO section where you can hook in and copy the file to the PhotoLibrary (which makes testing easier) — a sketch of this is included after stopRecording below.
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
    let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

    var recordingImage : UIImageView = UIImageView()
    var previewLayer : AVCaptureVideoPreviewLayer?

    var audioQueue : dispatch_queue_t?
    var videoQueue : dispatch_queue_t?

    let captureSession = AVCaptureSession()
    var assetWriter : AVAssetWriter?
    var assetWriterInputCamera : AVAssetWriterInput?
    var assetWriterInputAudio : AVAssetWriterInput?
    var outputConnection: AVCaptureConnection?

    var captureDeviceBack : AVCaptureDevice?
    var captureDeviceFront : AVCaptureDevice?
    var captureDeviceMic : AVCaptureDevice?
    var sessionSetupDone: Bool = false

    var isRecordingStarted = false
    //var recordingStartedTime = kCMTimeZero

    var videoOutputURL : NSURL?

    var captureSize: CGSize = CGSizeMake(1280, 720)
    var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

    var captureDeviceTrigger = true
    var captureDevice: AVCaptureDevice? {
        get {
            return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
        }
    }

    override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
        return UIInterfaceOrientationMask.AllButUpsideDown
    }

    override func shouldAutorotate() -> Bool {
        if isRecordingStarted {
            return false
        }

        if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
            return false
        }

        if let cameraPreview = self.previewLayer {
            if let connection = cameraPreview.connection {
                if connection.supportsVideoOrientation {
                    switch UIDevice.currentDevice().orientation {
                    case .LandscapeLeft:
                        connection.videoOrientation = .LandscapeRight
                    case .LandscapeRight:
                        connection.videoOrientation = .LandscapeLeft
                    case .Portrait:
                        connection.videoOrientation = .Portrait
                    case .FaceUp:
                        return false
                    case .FaceDown:
                        return false
                    default:
                        break
                    }
                }
            }
        }

        return true
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        setupViewControls()

        //self.recordingStartedTime = kCMTimeZero

        // Setup capture session related logic
        videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
        audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

        setupCaptureDevices()
        pre_start()
    }

    //MARK: UI methods
    func setupViewControls() {
        // TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
        recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
        recordingImage.image = UIImage(named: "recording")
        recordingImage.hidden = true
        self.view.addSubview(recordingImage)

        // Setup tap to record and stop
        let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
        tapGesture.numberOfTapsRequired = 1
        self.view.addGestureRecognizer(tapGesture)
    }

    func didGetTapped(selector: UITapGestureRecognizer) {
        if self.isRecordingStarted {
            self.view.gestureRecognizers![0].enabled = false
            recordingImage.hidden = true

            self.stopRecording()
        } else {
            recordingImage.hidden = false
            self.startRecording()
        }

        self.isRecordingStarted = !self.isRecordingStarted
    }

    func switchCamera(selector: UIButton) {
        self.captureDeviceTrigger = !self.captureDeviceTrigger

        pre_start()
    }

    //MARK: Video logic
    func setupCaptureDevices() {
        let devices = AVCaptureDevice.devices()

        for device in devices {
            if device.hasMediaType(AVMediaTypeVideo) {
                if device.position == AVCaptureDevicePosition.Front {
                    captureDeviceFront = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Front camera is found")
                }
                if device.position == AVCaptureDevicePosition.Back {
                    captureDeviceBack = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Back camera is found")
                }
            }
            if device.hasMediaType(AVMediaTypeAudio) {
                captureDeviceMic = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Audio device is found")
            }
        }
    }

    func alertPermission() {
        let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
        permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
            print("Video Controller: Permission for camera/mic denied. Going to settings")
            UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
            print(UIApplicationOpenSettingsURLString)
        }))

        presentViewController(permissionAlert, animated: true, completion: nil)
    }

    func pre_start() {
        NSLog("Video Controller: pre_start")
        let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
        let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
        if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
            self.alertPermission()
            pre_start()
            return
        }

        if (videoPermission == AVAuthorizationStatus.Authorized) {
            self.start()
            return
        }

        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted :Bool) -> Void in
            self.pre_start()
        })
    }

    func start() {
        NSLog("Video Controller: start")
        if captureSession.running {
            captureSession.beginConfiguration()

            if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
                captureSession.removeInput(currentInput)
            }

            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            } catch {
                print("Video Controller: begin session. Error adding video input device")
            }

            captureSession.commitConfiguration()
            return
        }

        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
        } catch {
            print("Video Controller: start. error adding device: \(error)")
        }

        if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
            self.previewLayer = layer
            layer.videoGravity = AVLayerVideoGravityResizeAspect

            if let layerConnection = layer.connection {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                } else if UIDevice.currentDevice().orientation == .Portrait {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
                }
            }

            // TODO: Set the output size of the Preview Layer here
            layer.frame = previewFrame
            self.view.layer.insertSublayer(layer, atIndex: 0)
        }

        let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
        captureSession.addOutput(videoOutput)
        if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
            self.outputConnection = connection
        }

        let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
        captureSession.addOutput(audioOutput)

        captureSession.startRunning()
    }

    func getAssetWriter() -> AVAssetWriter? {
        NSLog("Video Controller: getAssetWriter")
        let fileManager = NSFileManager.defaultManager()
        let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory: NSURL = urls.first else {
            print("Video Controller: getAssetWriter: documentDir Error")
            return nil
        }

        let local_video_name = NSUUID().UUIDString + ".mp4"
        self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

        guard let url = self.videoOutputURL else {
            return nil
        }

        self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

        guard let writer = self.assetWriter else {
            return nil
        }

        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey : AVVideoCodecH264,
            AVVideoWidthKey : captureSize.width,
            AVVideoHeightKey : captureSize.height,
        ]

        assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterInputCamera?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputCamera!)

        let audioSettings : [String : AnyObject] = [
            AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
            AVNumberOfChannelsKey : 2,
            AVSampleRateKey : NSNumber(double: 44100.0)
        ]

        assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterInputAudio?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputAudio!)

        return writer
    }
    func configurePreset() {
        NSLog("Video Controller: configurePreset")
        if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
            captureSession.sessionPreset = AVCaptureSessionPreset1280x720
        } else {
            captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
        }
    }
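    // Note (not in the original answer): configurePreset is never actually called in
    // this listing. To pin the capture preset explicitly, call it from start() before
    // captureSession.startRunning().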
    func startRecording() {
        NSLog("Video Controller: Start recording")

        captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

        if let connection = self.outputConnection {
            if connection.supportsVideoOrientation {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                    NSLog("orientation: right")
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                    NSLog("orientation: left")
                } else {
                    connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                    NSLog("orientation: portrait")
                }
            }
        }

        if let writer = getAssetWriter() {
            self.assetWriter = writer

            let recordingClock = self.captureSession.masterClock
            writer.startWriting()
            writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
        }
    }

    func stopRecording() {
        NSLog("Video Controller: Stop recording")

        if let writer = self.assetWriter {
            writer.finishWritingWithCompletionHandler {
                print("Recording finished")
                // TODO: Handle the video file, copy it from the temp directory etc.
            }
        }
    }
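    // Untested sketch for the TODO above (not part of the original answer): once
    // finishWritingWithCompletionHandler fires, the finished file at videoOutputURL
    // can be copied into the Photos library so it's easier to inspect during testing.
    // UIVideoAtPathIsCompatibleWithSavedPhotosAlbum / UISaveVideoAtPathToSavedPhotosAlbum
    // are standard UIKit calls; the completion target/selector are omitted here.
    func copyVideoToPhotoLibrary() {
        guard let path = videoOutputURL?.path else { return }
        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
            UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
        }
    }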
    //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if !self.isRecordingStarted {
            return
        }

        if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
            dispatch_async(audioQueue!) {
                audio.appendSampleBuffer(sampleBuffer)
            }
            return
        }

        if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
            dispatch_async(videoQueue!) {
                camera.appendSampleBuffer(sampleBuffer)
            }
        }
    }
}
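For reference, here is the videoSettings dictionary again with compression properties added; the bitrate, profile level, and key-frame interval below are example values to tune from: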
let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : captureSize.width,
    AVVideoHeightKey : captureSize.height,
    AVVideoCompressionPropertiesKey : [
        AVVideoAverageBitRateKey : 2000000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
        AVVideoMaxKeyFrameIntervalKey : 90,
    ]
]
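This dictionary would replace the plain videoSettings in getAssetWriter() above; AVAssetWriterInput passes AVVideoCompressionPropertiesKey through to the H.264 encoder, which is where the key-frame interval and bitrate affect output file size.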
For more on ios - Mimic AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness, see this similar question on Stack Overflow: https://stackoverflow.com/questions/35261603/