
ios - Capturing video in iOS with MonoTouch


I have code that creates, configures, and starts a video capture session in Objective-C, and it runs without any problems. I am porting the sample to C# and MonoTouch 4.0.3 and have hit a couple of problems. Here is the code:

    void Initialize ()
    {
        // Create notifier delegate class
        captureVideoDelegate = new CaptureVideoDelegate (this);

        // Create capture session
        captureSession = new AVCaptureSession ();
        captureSession.SessionPreset = AVCaptureSession.Preset640x480;

        // Create capture device
        captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);

        // Create capture device input
        NSError error;
        captureDeviceInput = new AVCaptureDeviceInput (captureDevice, out error);
        captureSession.AddInput (captureDeviceInput);

        // Create capture device output
        captureVideoOutput = new AVCaptureVideoDataOutput ();
        captureSession.AddOutput (captureVideoOutput);
        captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
        captureVideoOutput.MinFrameDuration = new CMTime (1, 30);
        //
        // ISSUE 1
        // In the original Objective-C code I was creating a dispatch_queue_t object, passing it to
        // the setSampleBufferDelegate:queue message, and it worked. Here I could not find an
        // equivalent to the queue mechanism. Also not sure if the delegate should be used like this.
        //
        captureVideoOutput.SetSampleBufferDelegatequeue (captureVideoDelegate, ???????);

        // Create preview layer
        previewLayer = AVCaptureVideoPreviewLayer.FromSession (captureSession);
        previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight;
        //
        // ISSUE 2:
        // Didn't find any VideoGravity related enumeration in MonoTouch (not sure if a string will work)
        //
        previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
        previewLayer.Frame = new RectangleF (0, 0, 1024, 768);
        this.View.Layer.AddSublayer (previewLayer);

        // Start capture session
        captureSession.StartRunning ();
    }

#endregion

    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        private VirtualDeckViewController mainViewController;

        public CaptureVideoDelegate (VirtualDeckViewController viewController)
        {
            mainViewController = viewController;
        }

        public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonoTouch.Foundation.ModelAttribute
        }
    }

Issue 1: I am not sure how to correctly use the delegate in the SetSampleBufferDelegatequeue method, and I could not find an equivalent of the dispatch_queue_t object (which works fine in Objective-C) to pass as the second parameter.
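For reference, MonoTouch wraps dispatch_queue_t as MonoTouch.CoreFoundation.DispatchQueue, so the second parameter can be created in managed code. A minimal sketch, assuming that binding is available in your MonoTouch version (the queue label "cameraQueue" is arbitrary, and the method name varies between versions; the accepted answer below calls it SetSampleBufferDelegateAndQueue):

    using MonoTouch.CoreFoundation;

    // Create a serial GCD queue for the sample-buffer callbacks;
    // the label "cameraQueue" is arbitrary.
    DispatchQueue cameraQueue = new DispatchQueue ("cameraQueue");

    // Pass the delegate instance plus the queue, mirroring the ObjC
    // setSampleBufferDelegate:queue: call.
    captureVideoOutput.SetSampleBufferDelegatequeue (captureVideoDelegate, cameraQueue);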

Issue 2: I did not find any VideoGravity enumeration in the MonoTouch library, and I am not sure whether passing a string with the constant's value will work.
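As it turns out, VideoGravity is bound as a plain string property in this MonoTouch version, so assigning the constant's literal value does work; the accepted answer below does exactly this with AVLayerVideoGravityResize:

    // VideoGravity is a string property in the MonoTouch binding, so the
    // raw constant value can be assigned directly.
    previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";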

I have been searching for clues to solve this, but there are no clear samples around. Any sample or information on how to do the same in MonoTouch would be greatly appreciated.

Thanks a lot.

Best Answer

Here is my code. Use it well. I cut it down to just the important stuff; all the initialization is there, as well as the reading of the sample output buffer.

I then have custom ObjC code that processes the CVImageBuffer in a linked library. If you need to process it in MonoTouch, you need to go the extra mile and convert it to a CGImage or UIImage. There is no function for that in MonoTouch (AFAIK), so you need to bind it yourself from plain ObjC. A sample in ObjC is here: how to convert a CVImageBufferRef to UIImage
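That said, later MonoTouch releases gained enough CoreVideo surface to do the conversion in C#. Purely as a sketch, not part of the original answer, and assuming your version exposes the CVPixelBuffer Lock/BaseAddress bindings (plus the MonoTouch.CoreGraphics, MonoTouch.CoreMedia, MonoTouch.CoreVideo, and MonoTouch.UIKit namespaces), the linked ObjC approach ports over roughly like this:

    // Sketch: convert a BGRA CVPixelBuffer to a UIImage in managed code.
    UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
    {
        using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer)
        {
            // Lock the buffer so its base address stays valid while we read it.
            pixelBuffer.Lock (0);
            var baseAddress = pixelBuffer.BaseAddress;
            int bytesPerRow = pixelBuffer.BytesPerRow;
            int width = pixelBuffer.Width;
            int height = pixelBuffer.Height;

            // BGRA frames: 8 bits per component, premultiplied alpha first,
            // little-endian 32-bit byte order.
            var flags = CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little;
            using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
            using (var context = new CGBitmapContext (baseAddress, width, height, 8,
                                                      bytesPerRow, colorSpace, (CGImageAlphaInfo) flags))
            using (var cgImage = context.ToImage ())
            {
                pixelBuffer.Unlock (0);
                return UIImage.FromImage (cgImage);
            }
        }
    }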

    public void InitCapture ()
    {
        try
        {
            // Setup the input
            NSError error = new NSError ();
            captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error);

            // Setup the output
            captureOutput = new AVCaptureVideoDataOutput ();
            captureOutput.AlwaysDiscardsLateVideoFrames = true;
            captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue);
            captureOutput.MinFrameDuration = new CMTime (1, 10);

            // Set the video output to store frames in BGRA (compatible across devices)
            captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA);

            // Create a capture session
            captureSession = new AVCaptureSession ();
            captureSession.SessionPreset = AVCaptureSession.PresetMedium;
            captureSession.AddInput (captureInput);
            captureSession.AddOutput (captureOutput);

            // Setup the preview layer
            prevLayer = new AVCaptureVideoPreviewLayer (captureSession);
            prevLayer.Frame = liveView.Bounds;
            prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but red bar position will be accurate

            liveView.Layer.AddSublayer (prevLayer);

            StartLiveDecoding ();
        }
        catch (Exception ex)
        {
            Console.WriteLine (ex.ToString ());
        }
    }
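The avBufferDelegate and dispatchQueue fields are elided in the answer. Hypothetical declarations (not from the original) that would satisfy the snippet, reusing the delegate subclass pattern from the question:

    // Hypothetical fields the snippet assumes; assign them before
    // InitCapture runs.
    AVCaptureVideoDataOutputSampleBufferDelegate avBufferDelegate;
    DispatchQueue dispatchQueue;

    // e.g. in the constructor or ViewDidLoad:
    avBufferDelegate = new CaptureVideoDelegate (this);
    dispatchQueue = new DispatchQueue ("captureQueue");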

    public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
    {
        Console.WriteLine ("DidOutputSampleBuffer: enter");

        if (isScanning)
        {
            CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();

            Console.WriteLine ("DidOutputSampleBuffer: calling decode");

            // NSLog(@"got image w=%d h=%d bpr=%d", CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer));
            // call the decoder
            DecodeImage (imageBuffer);
        }
        else
        {
            Console.WriteLine ("DidOutputSampleBuffer: not scanning");
        }

        Console.WriteLine ("DidOutputSampleBuffer: quit");
    }
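One caveat worth adding (not mentioned in the answer): the capture pipeline recycles sample buffers from a small pool, so in MonoTouch it pays to dispose the CMSampleBuffer explicitly inside the callback rather than waiting for the GC, or capture can stall after a few frames. A sketch:

    public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
    {
        try
        {
            // ... process the frame here ...
        }
        finally
        {
            // Release the buffer back to the pool promptly; relying on the
            // GC can starve the capture session of buffers.
            sampleBuffer.Dispose ();
        }
    }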

Regarding "ios - Capturing video in iOS with MonoTouch", the original question is on Stack Overflow: https://stackoverflow.com/questions/5953432/
