I have been trying to compile this source code in Android Studio 2.2, using the Android NDK, OpenCV-2.4.10 and the OpenCV-2.4.10-android-sdk. I was able to compile the library with the Android NDK by following this thread, but the problem is that when I run the application it crashes with the following exception:
10-19 13:16:52.401 27219-27219/? W/System.err: Native code library failed to load.
10-19 13:16:52.401 27219-27219/? W/System.err: java.lang.UnsatisfiedLinkError: com.android.tools.fd.runtime.IncrementalClassLoader$DelegateClassLoader[DexPathList[[dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-support-annotations-24.2.1_fed5c262a94aefc942781eb9d084010e5bac6a17-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_9-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_8-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_7-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_6-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_5-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_4-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_3-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_2-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_1-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-slice_0-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-internal_impl-24.2.1_c5ce2ccc24f48fdeeaa83795c4a9965b3b0000bd-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-internal_impl-24.2.1_9180b436d4d87e2a23c5c519d7114d01fff80862-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-internal_impl-24.2.1_758040e3c9887b663eba33123666f0d3d8eb7843-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-internal_impl-24.2.1_756c53f48345dbb2e71bd2438ac9f2c3ae9d1134-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-internal_impl-24.2.1_62ee59c260ed6ea416c5d3215fa59796bfc5182d-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-vector-drawable-24.2.1_d52f38f279dcae00c9d3cbbeafe8ac633a0ecca4-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-v4-24.2.1_d95c4ed9fe44ddeb9a8a8aae5e16887920d4daf8-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-media-compat-24.2.1_9fa2c3bc1a23639dc03b19b5edb12c0d69de4a65-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-fragment-24.2.1_5761e83f1e521bc942c5fda0d5d1732e153d7fc2-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-core-utils-24.2.1_a2346acb90d9291006f5de9a4c03663e9a37e36b-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-core-ui-24.2.1_fb6dfb901874f3fff658664f859ae460d253e846-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-support-compat-24.2.1_b0964989f1de3db4f3e423aa1bea8410573c9efe-classes.dex", dex file 
"/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-appcompat-v7-24.2.1_4842b08591f3514a767dfa2e4af7ee994ac6115e-classes.dex", dex file "/data/data/com.example.shahzeb.testingndk/files/instant-run/dex/slice-com.android.support-animated-vector-drawable-24.2.1_889e0b48597230c226324f209f1b26d855496bb4-classes.dex"],nativeLibraryDirectories=[/data/app/com.example.shahzeb.testingndk-2/lib/arm, /vendor/lib, /system/lib]]] couldn't find "libhomography.so"
java.lang.UnsatisfiedLinkError: No implementation found for void com.example.testingndk.MainActivity.runDemo() (tried Java_com_example_testingndk_MainActivity_runDemo and Java_com_example_testingndk_MainActivity_runDemo__)
at com.example.testingndk.MainActivity.runDemo(Native Method)
at com.example.testingndk.MainActivity.onCreate(MainActivity.java:34)
at android.app.Activity.performCreate(Activity.java:6251)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1107)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2369)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2476)
at android.app.ActivityThread.-wrap11(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1344)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:148)
at android.app.ActivityThread.main(ActivityThread.java:5417)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:726)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
I have checked my package and method names and they are identical. There must be something I am missing in Android.mk. The complete source code is below.
Android.mk
LOCAL_PATH := $(call my-dir)
OPENCV_PATH := C:/OpenCV-2.4.10-android-sdk/sdk/native/jni
include $(CLEAR_VARS)
OPENCV_INSTALL_MODULES := on
OPENCV_CAMERA_MODULES := off
include $(OPENCV_PATH)/OpenCV.mk
LOCAL_C_INCLUDES := \
    $(LOCAL_PATH) \
    $(OPENCV_PATH)/include
LOCAL_SRC_FILES := \
    demo.cpp \
    nonfree_init.cpp \
    sift.cpp \
    surf.cpp
LOCAL_MODULE := homography
LOCAL_CFLAGS := -Werror -O3 -ffast-math
LOCAL_LDLIBS := -llog -ldl
include $(BUILD_SHARED_LIBRARY)
demo.cpp
#include <jni.h>
#include <string.h>
#include <stdio.h>
#include <android/log.h>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/nonfree.hpp"
using namespace cv;
#define LOG_TAG "nonfree_jni_demo"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
typedef unsigned char uchar;
void run_demo();
void run_demo()
{
    Mat img_object = imread( "/sdcard/DCIM/Camera/test1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    Mat img_scene = imread( "/sdcard/DCIM/Camera/test2.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    if( !img_object.data || !img_scene.data ) {
        // std::cout<< " --(!) Error reading images " << std::endl;
        LOGI("Could not open or find the image!\n");
    }
    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;
    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> keypoints_object, keypoints_scene;
    detector.detect( img_object, keypoints_object );
    detector.detect( img_scene, keypoints_scene );
    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat descriptors_object, descriptors_scene;
    extractor.compute( img_object, keypoints_object, descriptors_object );
    extractor.compute( img_scene, keypoints_scene, descriptors_scene );
    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_object, descriptors_scene, matches );
    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }
    // printf("-- Max dist : %f \n", max_dist );
    // printf("-- Min dist : %f \n", min_dist );
    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;
    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        if( matches[i].distance < 3*min_dist )
        {
            good_matches.push_back( matches[i] );
        }
    }
    Mat img_matches;
    drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
                 good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
        scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
    }
    Mat H = findHomography( obj, scene, CV_RANSAC );
    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
    obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
    std::vector<Point2f> scene_corners(4);
    perspectiveTransform( obj_corners, scene_corners, H);
    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
    line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
    line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
    line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
    line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
    //-- Show detected matches
    imshow( "Good Matches & Object detection", img_matches );
    waitKey(0);
}
void readme();
/** @function readme */
void readme(){
    // std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl;
    LOGI(" Usage: ./SURF_descriptor <img1> <img2>\n");
}

extern "C" {
    JNIEXPORT void JNICALL Java_com_example_testingndk_MainActivity_runDemo(JNIEnv * env, jobject obj);
};

JNIEXPORT void JNICALL Java_com_example_testingndk_MainActivity_runDemo(JNIEnv * env, jobject obj)
{
    LOGI( "Start run_demo! \n");
    run_demo();
    LOGI( "End run_demo!\n");
}
MainActivity.java
package com.example.testingndk;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends AppCompatActivity {

    static
    {
        try
        {
            // Load necessary libraries.
            System.loadLibrary("homography");
        }
        catch( UnsatisfiedLinkError e )
        {
            System.err.println("Native code library failed to load.\n" + e);
        }
    }

    public static native void runDemo();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Log.v("nonfree_jni_demo", "start runDemo");
        // Call the JNI interface
        runDemo();
    }
}
Best Answer
@uelordi Thanks for your support.
I had put the jni folder in the root of the project directory, so after compiling with ndk-build I copied the generated *.so files from the libs folder (created in the same directory after the build) to myapp/src/main/jniLibs, and now I can run the OpenCV code. I hope this helps someone.
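As a side note (not part of the original answer): the Android Gradle plugin packages any prebuilt .so files it finds under the module's jniLibs source directory, organized per ABI, which is why copying the files to myapp/src/main/jniLibs lets System.loadLibrary("homography") resolve the library at runtime. The fragment below is a minimal sketch in the Gradle Kotlin DSL showing how that directory could be configured explicitly; src/main/jniLibs is already the default location, and the paths are only illustrative examples based on the answer above.

```kotlin
// Illustrative fragment of the app module's build.gradle.kts (sketch only,
// assumes the com.android.application plugin is applied elsewhere in the file).
// Prebuilt native libraries are picked up from the jniLibs source directory,
// organised by ABI, e.g. src/main/jniLibs/armeabi-v7a/libhomography.so.
android {
    sourceSets {
        getByName("main") {
            // src/main/jniLibs is the default; the call is shown only to make the
            // packaging path explicit. Another directory (for example the ndk-build
            // "libs" output folder) could be pointed to instead of copying files.
            jniLibs.srcDir("src/main/jniLibs")
        }
    }
}
```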
Regarding "android - Features2D + Homography OpenCV linking error", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/40124649/