
ios - CGImageCreateWithImageInRect holding onto image data - leak?


I am trying to take an image snapshot, crop it, and save it to a UIImageView.

I have tried this from a couple dozen different directions, but here is the general setup.

First, I am running this under ARC, Xcode 7.2, testing on a 6 Plus running iOS 9.2.

Now, the delegate is already set up:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSLog(@"CameraViewController : imagePickerController");

    // Get the image data
    NSData *getDataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 0.9);

    // Turn it into a UIImage
    UIImage *getCapturedImage = [[UIImage alloc] initWithData:getDataImage];

    // Figure out the size and build the rectangle we are going to crop the image to
    CGSize imageSize = getCapturedImage.size;
    CGFloat imageScale = getCapturedImage.scale;
    int yCoord = (imageSize.height - ((imageSize.width * 2) / 3)) / 2;
    CGRect getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width * 2) / 3));
    CGRect rect = CGRectMake(getRect.origin.x * imageScale,
                             getRect.origin.y * imageScale,
                             getRect.size.width * imageScale,
                             getRect.size.height * imageScale);

    // Crop the image
    CGImageRef imageRef = CGImageCreateWithImageInRect([getCapturedImage CGImage], rect);

    // Stick the resulting image into an image variable
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];

    // Release that reference
    CGImageRelease(imageRef);

    // Save the newly cropped image to a UIImageView property
    _imageView.image = cropped;

    _saveBtn.hidden = NO;
    [picker dismissViewControllerAnimated:YES completion:^{
        // After we are finished dismissing the picker, close out the camera tool
        [self dismissCameraViewFromImageSelect];
    }];
}

When I run the above, I get the result below.

(Screenshot: First retain)

At this point I am looking at the image in the _imageView.image that was set earlier. The image data has already eaten up 30 MB, and when I leave this view, that image data is still retained.

If I then go through the process of capturing a new image, this is what I get.

(Screenshot: Second retain)

When I bypass resizing the image and just assign it to the image view, the 30 MB is not swallowed up.

I have looked at all the suggestions on this, and none of them have made any difference, but let's go over what I have tried that did not work.

Did not work:
  • Putting it into an @autoreleasepool block.
      - This never seemed to work. Maybe I was not doing it right, but I tried it several different ways and no memory was released.
  • CGImageRelease(imageRef);
      - I am already doing that, and I have tried it a number of different ways. Still no luck.
  • CFRelease(imageRef);
      - Does not work either.
  • Setting imageRef = nil;
      - Still retained. Even combining it with CGImageRelease did not work for me.

I also tried separating the cropping into its own function and returning the result, but still no luck; a rough sketch of that attempt is below.
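
For reference, a minimal sketch of that attempt, with the crop pulled out into its own helper and wrapped in an @autoreleasepool. The method name and exact call site are illustrative, not my production code.

// Hypothetical helper illustrating the attempt: crop inside an @autoreleasepool
// and hand back only the resulting UIImage.
- (UIImage *)croppedImageFromImage:(UIImage *)source inRect:(CGRect)rect
{
    UIImage *cropped = nil;
    @autoreleasepool {
        CGImageRef imageRef = CGImageCreateWithImageInRect(source.CGImage, rect);
        if (imageRef != NULL) {
            // UIImage retains the CGImage, so our own reference can be released right away.
            cropped = [UIImage imageWithCGImage:imageRef
                                          scale:source.scale
                                    orientation:source.imageOrientation];
            CGImageRelease(imageRef);
        }
    }
    // Under ARC the strong local keeps the result alive past the pool drain.
    return cropped;
}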

I have not found anything online that is particularly helpful; every reference to a similar problem had suggestions that did not seem to work (as noted above).

Thanks in advance for any suggestions.

Best Answer

Well, after much deliberation I decided to start from scratch. Since most of my recent work has been in Swift, I put together a Swift class that can be called, handles the camera, and hands the result back to the caller through a delegate.

The end result is that I no longer have this memory leak where some variable retains the memory of the previous image, and I can use it in my current project by bridging the Swift class file into my Obj-C ViewControllers.
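
As a rough sketch of what that bridging looks like on the Objective-C side (assumptions: the module is named CameraTesting so the generated header is "CameraTesting-Swift.h"; the view controller and its imageView outlet are hypothetical; and the CameraOverlayDelegate protocol below would need to be marked @objc to be adoptable from Objective-C):

// Hypothetical Objective-C view controller consuming the Swift CameraOverlay class below.
// Xcode auto-generates "<ModuleName>-Swift.h" for Swift classes that are visible to Obj-C.
#import "CameraTesting-Swift.h"

@interface MyPhotoViewController () <CameraOverlayDelegate>
@property (nonatomic, strong) CameraOverlay *cameraOverlay;
@property (nonatomic, strong) IBOutlet UIImageView *imageView;
@end

@implementation MyPhotoViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the camera helper and receive the captured image through the delegate.
    self.cameraOverlay = [[CameraOverlay alloc] initWithParentView:self.view];
    self.cameraOverlay.delegate = self;
}

// CameraOverlayDelegate callback carrying the cropped image.
- (void)cameraOverlayImage:(UIImage *)image
{
    self.imageView.image = image;
}

@end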

Here is the code for the class that does the capture.

//
//  CameraOverlay.swift
//  CameraTesting
//
//  Created by Chris Cantley on 3/3/16.
//  Copyright © 2016 Chris Cantley. All rights reserved.
//

import Foundation
import UIKit
import AVFoundation

//We want to pass an image up to the parent class once the image has been taken, so the easiest way to send it up
// and trigger the placing of the image is through a delegate.
protocol CameraOverlayDelegate: class {
    func cameraOverlayImage(image: UIImage)
}

class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    //MARK: Internal Variables

    //Setting up the delegate reference to be used later on.
    //(weak so that the owner holding this class strongly does not create a retain cycle)
    internal weak var delegate: CameraOverlayDelegate?

    //Variables for setting the camera view
    internal var returnImage: UIImage!
    internal var previewView: UIView!
    internal var boxView: UIView!
    internal let myButton: UIButton = UIButton()

    //Setting up camera-capture required properties
    internal var previewLayer: AVCaptureVideoPreviewLayer!
    internal var captureDevice: AVCaptureDevice!
    internal let session = AVCaptureSession()
    internal var stillImageOutput: AVCaptureStillImageOutput!

    //When we put up the camera preview and the button we have to reference a parent view, so this will hold the
    // parent view passed into the class so that other methods can work with it.
    internal var view: UIView!


    //When this class is instantiated, we want to require that the calling class passes us
    //some view that we can tie the camera previewer and button to.

    //MARK: - Instantiation Methods
    init(parentView: UIView) {

        //Keep the reference to the passed-in UIView
        self.view = parentView

        //We are doing the following here because this only needs to be set up once per instantiation.

        //Create the output container with settings to specify that we are getting a still image, and that it is a JPEG.
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        //Now we are sticking the image into the above formatted container
        session.addOutput(stillImageOutput)
    }

    //MARK: - Public Functions
    func showCameraView() {

        //This handles showing the camera previewer and button
        self.setupCameraView()

        //This sets up the parameters for the camera and begins the camera session.
        self.setupAVCapture()
    }

    //MARK: - Internal Functions

    //When the user clicks the button, this gets the image, sends it up to the delegate, and shuts down all the camera-related views.
    internal func didPressTakePhoto(sender: UIButton) {

        //Create a media connection...
        if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {

            //Set the orientation to be locked to portrait
            videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait

            //Capture the still image from the camera
            stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
                if (sampleBuffer != nil) {

                    //Get the image data
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProviderCreateWithCFData(imageData)
                    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                    //The 2.0 scale halves the scale of the image, whereas 1.0 gives you the full size.
                    let image = UIImage(CGImage: cgImageRef!, scale: 2.0, orientation: UIImageOrientation.Up)

                    //What size is this image?
                    let imageSize = image.size
                    let imageScale = image.scale
                    let yCoord = (imageSize.height - ((imageSize.width * 2) / 3)) / 2
                    let getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width * 2) / 3))
                    let rect = CGRectMake(getRect.origin.x * imageScale, getRect.origin.y * imageScale, getRect.size.width * imageScale, getRect.size.height * imageScale)
                    let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)
                    //let newImage = UIImage(CGImage: imageRef!)

                    //This app forces the user to use landscape to take pictures, so this simply turns the image so that it looks correct when we take it.
                    let newImage: UIImage = UIImage(CGImage: imageRef!, scale: image.scale, orientation: UIImageOrientation.Down)

                    //Pass the image up to the delegate.
                    self.delegate?.cameraOverlayImage(newImage)

                    //Stop the session
                    self.session.stopRunning()

                    //Remove the views.
                    self.previewView.removeFromSuperview()
                    self.boxView.removeFromSuperview()
                    self.myButton.removeFromSuperview()

                    //By this point the image has been handed off to the caller through the delegate and memory has been cleaned up.

                }
            })
        }
    }


    internal func setupCameraView() {

        //Add a view that is as big as the frame and acts as a background.
        self.boxView = UIView(frame: self.view.frame)
        self.boxView.backgroundColor = UIColor(red: 255, green: 255, blue: 255, alpha: 1.0)
        self.view.addSubview(self.boxView)

        //Add the camera preview view.
        // This sets up the previewView to be a 3:2 aspect ratio
        let newHeight = UIScreen.mainScreen().bounds.size.width / 2 * 3
        self.previewView = UIView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.size.width, newHeight))
        self.previewView.backgroundColor = UIColor.cyanColor()
        self.previewView.contentMode = UIViewContentMode.ScaleToFill
        self.view.addSubview(previewView)

        //Add the button.
        myButton.frame = CGRectMake(0, 0, 200, 40)
        myButton.backgroundColor = UIColor.redColor()
        myButton.layer.masksToBounds = true
        myButton.setTitle("press me", forState: UIControlState.Normal)
        myButton.setTitleColor(UIColor.whiteColor(), forState: UIControlState.Normal)
        myButton.layer.cornerRadius = 20.0
        myButton.layer.position = CGPoint(x: self.view.frame.width / 2, y: (self.view.frame.height - myButton.frame.height))
        myButton.addTarget(self, action: "didPressTakePhoto:", forControlEvents: .TouchUpInside)
        self.view.addSubview(myButton)

    }


    internal func setupAVCapture() {

        session.sessionPreset = AVCaptureSessionPresetPhoto

        let devices = AVCaptureDevice.devices()

        // Loop through all the capture devices on this phone
        for device in devices {

            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {

                // Finally check the position and confirm we've got the back camera
                if (device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {

                        //-> Now that we have the back camera, start a session.
                        beginSession()
                        break
                    }
                }
            }
        }
    }

    // Sets up the session
    internal func beginSession() {

        var err: NSError? = nil
        var deviceInput: AVCaptureDeviceInput?

        //See if we can get input from the capture device as defined in setupAVCapture()
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            err = error
            deviceInput = nil
        }
        if err != nil {
            print("error: \(err?.localizedDescription)")
        }

        //If we can add input into the AVCaptureSession(), then do so.
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }


        //Now show the layers that were set up in the previewView, and mask them to the boundary of the previewView layer.
        let rootLayer: CALayer = self.previewView.layer
        rootLayer.masksToBounds = true


        //Put a live video capture based on the current session.
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)

        // Determine how to fill the previewLayer. In this case, I want to fill out the space of the previewLayer.
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.previewLayer.frame = rootLayer.bounds


        //Put the sublayer into the previewLayer
        rootLayer.addSublayer(self.previewLayer)

        session.startRunning()

    }


}

Here is how I use this class in a view controller.
//
//  ViewController.swift
//  CameraTesting
//
//  Created by Chris Cantley on 2/26/16.
//  Copyright © 2016 Chris Cantley. All rights reserved.
//

import UIKit
import AVFoundation


class ViewController: UIViewController, CameraOverlayDelegate {

    //Setting up the class reference.
    var cameraOverlay: CameraOverlay!

    //Connected to the UIViewController main view.
    @IBOutlet var getView: UIView!

    //Connected to an ImageView that will display the image when it is passed back to the delegate.
    @IBOutlet weak var imgShowImage: UIImageView!


    //Connected to the button that is pressed to bring up the camera view.
    @IBAction func btnPictureTouch(sender: AnyObject) {

        //Remove the image from the UIImageView and take another picture.
        self.imgShowImage.image = nil
        self.cameraOverlay.showCameraView()
    }


    override func viewDidLoad() {

        super.viewDidLoad()

        //Pass in the target UIView, which in this case is the main view.
        self.cameraOverlay = CameraOverlay(parentView: getView)

        //Make this class the delegate for the instantiated class
        //so that it receives the image when the user takes a picture.
        self.cameraOverlay.delegate = self


    }


    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()

        //Nothing here, but if you run out of memory you might want to do something here.

    }

    override func shouldAutorotate() -> Bool {
        if (UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeLeft ||
            UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeRight ||
            UIDevice.currentDevice().orientation == UIDeviceOrientation.Unknown) {
                return false
        }
        else {
            return true
        }
    }

    //This implements the delegate method from CameraOverlayDelegate.
    func cameraOverlayImage(image: UIImage) {

        //Put the image passed up from the CameraOverlay class into the UIImageView.
        self.imgShowImage.image = image
    }



}

Here is a link to the project where I put this together.
GitHub - Boiler plate get image from camera

Regarding ios - CGImageCreateWithImageInRect holding onto image data - leak?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35094111/
