
android - Browse image and face detection

Reposted · Author: 太空狗 · Updated: 2023-10-29 13:27:41

I'm having trouble detecting faces in an image browsed from the gallery. The problem is that I don't know how to apply the face-detection code I'm testing to the imported image: the sample code I'm working from was written for a locally stored image. I think I'm close, but can you help me?

First, I created a gallery method:

    protected void gallery() {
        Intent intent = new Intent();
        intent.setType("image/*");
        intent.setAction(Intent.ACTION_GET_CONTENT); // same as "android.intent.action.GET_CONTENT"
        startActivityForResult(Intent.createChooser(intent, "Choose An Image"), 1);
    }

I'm still learning Intents, but as I understand it I need an Intent to open Android's gallery, and since I set the action to GET_CONTENT, the chooser hands the selected content back to me. With that in mind, I tried to pull the selected image's Uri out of the result Intent. So this is what I did next:

    protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
        super.onActivityResult(requestCode, resultCode, intent);
        if (requestCode == 1 && resultCode == RESULT_OK) {
            Uri uri = intent.getData();
            try {
                InputStream is = getContentResolver().openInputStream(uri);
                Bitmap bitmap = BitmapFactory.decodeStream(is);
                ImageView image = (ImageView) findViewById(R.id.img_view);
                image.setImageBitmap(bitmap);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
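(A side note on the stream handling above, shown with plain Java rather than Android classes: an InputStream is consumed as it is read, so once BitmapFactory.decodeStream() has run, a second decode from the same stream would see no data. `StreamOnce` below is a throwaway name for this sketch, not anything from the Android API.)

```java
import java.io.ByteArrayInputStream;

public class StreamOnce {
    public static void main(String[] args) {
        // Stand-in for the image bytes behind the content Uri.
        ByteArrayInputStream is = new ByteArrayInputStream(new byte[] {1, 2, 3, 4});

        // First pass (like decodeStream) reads all four bytes.
        int first = 0;
        while (is.read() != -1) first++;

        // Second pass sees an exhausted stream: read() returns -1 immediately.
        int second = 0;
        while (is.read() != -1) second++;

        System.out.println(first + " " + second); // 4 0
    }
}
```

This is why decoding the image a second time needs a fresh stream (or a fresh call to the ContentResolver), not the one already used.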

This is the part that confuses me. I'm guessing the InputStream holds the image data? Well, I tried to apply the face-detection code inside the same try-catch. I figured that once image.setImageBitmap(bitmap) is done, it's time to run face detection. Here is the face-detection code:

    protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
        super.onActivityResult(requestCode, resultCode, intent);
        if (requestCode == 1 && resultCode == RESULT_OK) {
            Uri uri = intent.getData();
            try {
                InputStream is = getContentResolver().openInputStream(uri);
                Bitmap bitmap = BitmapFactory.decodeStream(is);
                ImageView image = (ImageView) findViewById(R.id.image_view);
                image.setImageBitmap(bitmap);

                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inPreferredConfig = Bitmap.Config.RGB_565;
                // This is the line I don't know how to adapt:
                bitmap = BitmapFactory.decodeResource(getResources(), R.id.img_view, options);

                imageWidth = bitmap.getWidth();
                imageHeight = bitmap.getHeight();
                detectedFaces = new FaceDetector.Face[NUM_FACES];
                faceDetector = new FaceDetector(imageWidth, imageHeight, NUM_FACES);
                NUM_FACE_DETECTED = faceDetector.findFaces(bitmap, detectedFaces);
                mIL.invalidate();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

What I don't know is how to change `mFaceBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.smilingfaces, options);`. That was written for a local image, and I think the image I want is the one stored in the InputStream (or is it? where does the selected image live?). My idea was to substitute the ImageView from the layout instead, since that's where the image ends up. I just don't understand how all of this gets passed around and works together. In any case, that fragment is supposed to detect the faces, and then onDraw() draws squares around the detected faces. I wasn't sure where to put it, so I placed it outside onActivityResult():

    protected void onDraw(Canvas canvas) {
        Paint myPaint = new Paint();
        myPaint.setColor(Color.RED);
        myPaint.setStyle(Paint.Style.STROKE);
        myPaint.setStrokeWidth(3);
        myPaint.setDither(true);

        for (int count = 0; count < NUM_FACE_DETECTED; count++) {
            Face face = detectedFaces[count];
            PointF midPoint = new PointF();
            face.getMidPoint(midPoint);

            eyeDistance = face.eyesDistance();
            canvas.drawRect(midPoint.x - eyeDistance, midPoint.y - eyeDistance,
                    midPoint.x + eyeDistance, midPoint.y + eyeDistance, myPaint);
        }
    }
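(To make the geometry concrete: the drawRect call above draws a square centered on the midpoint between the eyes, with half-side length equal to the eye distance. Here is that arithmetic isolated as pure Java, with `FaceBox` as a throwaway name for this sketch, not an Android type.)

```java
public class FaceBox {
    // Returns {left, top, right, bottom}, the same four expressions
    // passed to canvas.drawRect() above.
    static float[] squareAround(float midX, float midY, float eyeDistance) {
        return new float[] {
            midX - eyeDistance, midY - eyeDistance,
            midX + eyeDistance, midY + eyeDistance
        };
    }

    public static void main(String[] args) {
        float[] box = squareAround(100f, 80f, 30f);
        System.out.println(box[0] + " " + box[1] + " " + box[2] + " " + box[3]);
        // 70.0 50.0 130.0 110.0
    }
}
```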

Any suggestions? I'm very close to getting this working!

Best Answer

I see what you're after. I'll write out the complete code for you and go from there.

In this code I use an ImageView in the layout, and there are two classes: an Activity class and a custom View class.

I create two buttons: the first picks an image from the gallery and displays it (for face detection), and the second detects the faces in the selected image.

First, mainlayout.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" >

        <com.simpleapps.facedetection.MyView
            android:id="@+id/faceview"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" />

        <LinearLayout
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:layout_gravity="top" >

            <ImageView
                android:id="@+id/gallery"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginRight="10dp"
                android:layout_weight="1"
                android:background="@drawable/gallery" />

            <ImageView
                android:id="@+id/detectf"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginRight="10dp"
                android:layout_weight="1"
                android:background="@drawable/detect" />

        </LinearLayout>
    </FrameLayout>

Now the Activity class.

MainActivity.java

    public class MainActivity extends Activity {

        public MyView faceview;
        public static Bitmap defaultBitmap;
        public static int screenWidth, screenHeight;

        private ImageView gallery, detectf;
        private Uri imageURI;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                    WindowManager.LayoutParams.FLAG_FULLSCREEN);
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);

            setContentView(R.layout.activity_main);

            DisplayMetrics displaymetrics = new DisplayMetrics();
            getWindowManager().getDefaultDisplay().getMetrics(displaymetrics);
            screenHeight = displaymetrics.heightPixels;
            screenWidth = displaymetrics.widthPixels;

            faceview = (MyView) findViewById(R.id.faceview);
            gallery = (ImageView) findViewById(R.id.gallery);
            detectf = (ImageView) findViewById(R.id.detectf);

            // FaceDetector only works on RGB_565 bitmaps.
            BitmapFactory.Options bitmapFactoryOptions = new BitmapFactory.Options();
            bitmapFactoryOptions.inPreferredConfig = Bitmap.Config.RGB_565;

            defaultBitmap = BitmapFactory.decodeResource(getResources(),
                    R.drawable.face, bitmapFactoryOptions);
            faceview.setImage(defaultBitmap);

            gallery.setOnClickListener(new OnClickListener() {
                public void onClick(View v) {
                    // Open the system gallery to pick an image.
                    Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
                    intent.setType("image/*");
                    startActivityForResult(intent, 0);
                }
            });

            detectf.setOnClickListener(new OnClickListener() {
                public void onClick(View v) {
                    // Detect faces in whatever image the view currently shows.
                    faceview.facedetect();
                }
            });
        }

        @Override
        public void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);

            if (resultCode == Activity.RESULT_OK) {
                if (requestCode == 0) {
                    imageURI = data.getData();
                    try {
                        // Decode the picked image straight from its content Uri,
                        // again forcing RGB_565 for the face detector.
                        BitmapFactory.Options bitmapFactoryOptions = new BitmapFactory.Options();
                        bitmapFactoryOptions.inPreferredConfig = Bitmap.Config.RGB_565;

                        Bitmap b = BitmapFactory.decodeStream(
                                getContentResolver().openInputStream(imageURI), null,
                                bitmapFactoryOptions);

                        faceview.myBitmap = b;
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    }
                }
                faceview.invalidate();
            } else {
                Log.e("result", "BAD");
            }
        }
    }

Now the View class.

MyView.java

    public class MyView extends View {

        private FaceDetector.Face[] detectedFaces;
        private FaceDetector.Face face1;
        private int NUMBER_OF_FACES = 10;
        private FaceDetector faceDetector;
        private int NUMBER_OF_FACE_DETECTED;
        private float eyeDistance;

        public Paint myPaint;
        public Bitmap resultBmp;
        public Bitmap myBitmap;
        public PointF midPoint1;

        private int w, h;
        private float x, y;

        public MyView(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        public void setImage(Bitmap bitmap) {
            myBitmap = bitmap;
            invalidate();
        }

        public void facedetect() {
            myPaint = new Paint();
            myPaint.setColor(Color.GREEN);
            myPaint.setStyle(Paint.Style.STROKE);
            myPaint.setStrokeWidth(3);

            // Detection runs on resultBmp, the scaled bitmap produced in onDraw(),
            // so the reported coordinates match what is on screen.
            detectedFaces = new FaceDetector.Face[NUMBER_OF_FACES];
            faceDetector = new FaceDetector(resultBmp.getWidth(), resultBmp.getHeight(),
                    NUMBER_OF_FACES);
            NUMBER_OF_FACE_DETECTED = faceDetector.findFaces(resultBmp, detectedFaces);

            System.out.println("faces detected are " + NUMBER_OF_FACE_DETECTED);

            for (int count = 0; count < NUMBER_OF_FACE_DETECTED; count++) {
                if (count == 0) {
                    face1 = detectedFaces[count];
                    midPoint1 = new PointF();
                    face1.getMidPoint(midPoint1);
                    eyeDistance = face1.eyesDistance();
                }
            }

            invalidate();

            if (NUMBER_OF_FACE_DETECTED == 0) {
                Toast.makeText(getContext(), "no faces detected", Toast.LENGTH_LONG).show();
            } else {
                Toast.makeText(getContext(), "faces detected " + NUMBER_OF_FACE_DETECTED,
                        Toast.LENGTH_LONG).show();
            }
        }

        @Override
        protected void onDraw(Canvas canvas) {
            if (myBitmap != null) {
                w = myBitmap.getWidth();
                h = myBitmap.getHeight();

                // Scale to the full screen width, preserving aspect ratio, then center.
                int widthOfBitmap = MainActivity.screenWidth;
                int heightOfBitmap = widthOfBitmap * h / w;

                resultBmp = Bitmap.createScaledBitmap(myBitmap, widthOfBitmap,
                        heightOfBitmap, true);
                canvas.drawBitmap(resultBmp,
                        (MainActivity.screenWidth - widthOfBitmap) / 2,
                        (MainActivity.screenHeight - heightOfBitmap) / 2, null);
            }
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_MOVE:
                    x = event.getX();
                    y = event.getY();
                    break;
                case MotionEvent.ACTION_UP:
                default:
                    break;
            }
            invalidate();
            return true;
        }
    }
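A note on the scaling in onDraw(): the bitmap is stretched to the full screen width, its height follows from the aspect ratio using integer arithmetic, and the drawBitmap offsets then center it on screen. Isolated as pure Java (the screen and bitmap sizes below are just example numbers):

```java
public class FitWidth {
    public static void main(String[] args) {
        int screenWidth = 1080, screenHeight = 1920; // example device
        int w = 640, h = 480;                        // example bitmap

        int widthOfBitmap = screenWidth;             // fill the screen width
        int heightOfBitmap = widthOfBitmap * h / w;  // keep aspect ratio (integer math)

        int left = (screenWidth - widthOfBitmap) / 2;   // 0: already full width
        int top = (screenHeight - heightOfBitmap) / 2;  // vertical centering offset

        System.out.println(widthOfBitmap + "x" + heightOfBitmap
                + " at (" + left + "," + top + ")");
        // 1080x810 at (0,555)
    }
}
```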

It took me some time to write this code. I hope it helps. If you run into any errors, just ask.

Regarding "android - Browse image and face detection": we found a similar question on Stack Overflow: https://stackoverflow.com/questions/19763674/
