I am trying to display the matching keypoints between two images (one captured from my camera, the other loaded from a database).
Can anyone help me write the drawMatches code to show the matching lines between the two images?
Here is my code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.opencv.android.Utils;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import android.content.Context;

public final class ImageDetectionFilter {

    // Flag indicating whether the target's corners were found,
    // so that its outline can be drawn.
    private boolean flagDraw;

    // The reference image (this detector's target).
    private final Mat mReferenceImage;
    // Features of the reference image.
    private final MatOfKeyPoint mReferenceKeypoints = new MatOfKeyPoint();
    // Descriptors of the reference image's features.
    private final Mat mReferenceDescriptors = new Mat();
    // The corner coordinates of the reference image, in pixels.
    // CvType defines the color depth, number of channels, and
    // channel layout in the image. Here, each point is represented
    // by two 32-bit floats.
    private final Mat mReferenceCorners = new Mat(4, 1, CvType.CV_32FC2);

    // Features of the scene (the current frame).
    private final MatOfKeyPoint mSceneKeypoints = new MatOfKeyPoint();
    // Descriptors of the scene's features.
    private final Mat mSceneDescriptors = new Mat();
    // Tentative corner coordinates detected in the scene, in pixels.
    private final Mat mCandidateSceneCorners = new Mat(4, 1, CvType.CV_32FC2);
    // Good corner coordinates detected in the scene, in pixels.
    private final Mat mSceneCorners = new Mat(4, 1, CvType.CV_32FC2);
    // The good detected corner coordinates, in pixels, as integers.
    private final MatOfPoint mIntSceneCorners = new MatOfPoint();

    // A grayscale version of the scene.
    private final Mat mGraySrc = new Mat();
    // Tentative matches of scene features and reference features.
    private final MatOfDMatch mMatches = new MatOfDMatch();

    // A feature detector, which finds features in images.
    private final FeatureDetector mFeatureDetector =
            FeatureDetector.create(FeatureDetector.ORB);
    // A descriptor extractor, which creates descriptors of features.
    private final DescriptorExtractor mDescriptorExtractor =
            DescriptorExtractor.create(DescriptorExtractor.ORB);
    // A descriptor matcher, which matches features based on their
    // descriptors.
    private final DescriptorMatcher mDescriptorMatcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);

    // The color of the outline drawn around the detected image.
    private final Scalar mLineColor = new Scalar(0, 255, 0);

    public ImageDetectionFilter(final Context context,
            final int referenceImageResourceID) throws IOException {
        // Load the reference image from the app's resources.
        // It is loaded in BGR (blue, green, red) format.
        mReferenceImage = Utils.loadResource(context, referenceImageResourceID,
                Imgcodecs.CV_LOAD_IMAGE_COLOR);

        // Create grayscale and RGBA versions of the reference image.
        final Mat referenceImageGray = new Mat();
        Imgproc.cvtColor(mReferenceImage, referenceImageGray,
                Imgproc.COLOR_BGR2GRAY);
        Imgproc.cvtColor(mReferenceImage, mReferenceImage,
                Imgproc.COLOR_BGR2RGBA);

        // Store the reference image's corner coordinates, in pixels.
        mReferenceCorners.put(0, 0, new double[] { 0.0, 0.0 });
        mReferenceCorners.put(1, 0,
                new double[] { referenceImageGray.cols(), 0.0 });
        mReferenceCorners.put(2, 0,
                new double[] { referenceImageGray.cols(),
                        referenceImageGray.rows() });
        mReferenceCorners.put(3, 0,
                new double[] { 0.0, referenceImageGray.rows() });

        // Detect the reference features and compute their descriptors.
        mFeatureDetector.detect(referenceImageGray, mReferenceKeypoints);
        mDescriptorExtractor.compute(referenceImageGray, mReferenceKeypoints,
                mReferenceDescriptors);
    }

    public void apply(final Mat src, final Mat dst) {
        // Convert the scene to grayscale.
        Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);

        // Detect the scene features, compute their descriptors,
        // and match the scene descriptors to reference descriptors.
        mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
        mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints,
                mSceneDescriptors);
        mDescriptorMatcher.match(mSceneDescriptors, mReferenceDescriptors,
                mMatches);

        // Attempt to locate the target's corners in the scene.
        findSceneCorners();

        // If the corners have been found, draw an outline around the
        // target image.
        draw(src, dst);
    }

    private void findSceneCorners() {
        flagDraw = false;
        final List<DMatch> matchesList = mMatches.toList();
        if (matchesList.size() < 4) {
            // There are too few matches to find the homography.
            return;
        }

        final List<KeyPoint> referenceKeypointsList =
                mReferenceKeypoints.toList();
        final List<KeyPoint> sceneKeypointsList = mSceneKeypoints.toList();

        // Calculate the max and min distances between keypoints.
        double maxDist = 0.0;
        double minDist = Double.MAX_VALUE;
        for (final DMatch match : matchesList) {
            final double dist = match.distance;
            if (dist < minDist) {
                minDist = dist;
            }
            if (dist > maxDist) {
                maxDist = dist;
            }
        }

        // The thresholds for minDist are chosen subjectively
        // based on testing. The unit is not related to pixel
        // distances; it is related to the number of failed tests
        // for similarity between the matched descriptors.
        if (minDist > 50.0) {
            // The target is completely lost.
            // Discard any previously found corners.
            mSceneCorners.create(0, 0, mSceneCorners.type());
            return;
        } else if (minDist > 25.0) {
            // The target is lost, but maybe it is still close.
            // Keep any previously found corners.
            return;
        }

        // Identify "good" keypoints based on match distance.
        final ArrayList<Point> goodReferencePointsList =
                new ArrayList<Point>();
        final ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
        final double maxGoodMatchDist = 1.75 * minDist;
        for (final DMatch match : matchesList) {
            if (match.distance < maxGoodMatchDist) {
                goodReferencePointsList.add(
                        referenceKeypointsList.get(match.trainIdx).pt);
                goodScenePointsList.add(
                        sceneKeypointsList.get(match.queryIdx).pt);
            }
        }

        if (goodReferencePointsList.size() < 4
                || goodScenePointsList.size() < 4) {
            // There are too few good points to find the homography.
            return;
        }

        // There are enough good points to find the homography.
        // (Otherwise, the method would have already returned.)

        // Convert the matched points to MatOfPoint2f format, as
        // required by the Calib3d.findHomography function.
        final MatOfPoint2f goodReferencePoints = new MatOfPoint2f();
        goodReferencePoints.fromList(goodReferencePointsList);
        final MatOfPoint2f goodScenePoints = new MatOfPoint2f();
        goodScenePoints.fromList(goodScenePointsList);

        // Find the homography.
        final Mat homography = Calib3d.findHomography(goodReferencePoints,
                goodScenePoints);

        // Use the homography to project the reference corner
        // coordinates into scene coordinates.
        Core.perspectiveTransform(mReferenceCorners, mCandidateSceneCorners,
                homography);

        // Convert the scene corners to integer format, as required
        // by the Imgproc.isContourConvex function.
        mCandidateSceneCorners.convertTo(mIntSceneCorners, CvType.CV_32S);

        // Check whether the corners form a convex polygon. If not
        // (that is, if the corners form a concave polygon), the
        // detection result is invalid because no real perspective can
        // make the corners of a rectangular image look like a concave
        // polygon!
        if (Imgproc.isContourConvex(mIntSceneCorners)) {
            // The corners form a convex polygon, so record them as
            // valid scene corners.
            mCandidateSceneCorners.copyTo(mSceneCorners);
            flagDraw = true;
        }
    }

    protected void draw(final Mat src, final Mat dst) {
        if (dst != src) {
            src.copyTo(dst);
        }

        if (mSceneCorners.height() < 4) {
            // The target has not been found; there is nothing to outline.
            return;
        }

        // Outline the found target in green.
        Imgproc.line(dst, new Point(mSceneCorners.get(0, 0)),
                new Point(mSceneCorners.get(1, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(1, 0)),
                new Point(mSceneCorners.get(2, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(2, 0)),
                new Point(mSceneCorners.get(3, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(3, 0)),
                new Point(mSceneCorners.get(0, 0)), mLineColor, 4);
    }

    public boolean getFlagDraw() {
        return flagDraw;
    }
}
Best answer
I'm not very familiar with Java, so I'm not sure whether this will help, but here is an example of how I did this in Python with OpenCV. Maybe it can serve as a guide.
(The example is adapted from this site, which has further explanations you may find interesting.)
In this example, I am looking for a rotated version of one cartoon animal within a set of six cartoon animals.
Basically, you want to call cv2.drawMatches() with the keypoints from the train and query images,
and mask out the bad matches. The relevant part of my code is at the very bottom.
Your example is not a minimal code example and I haven't worked through all of it, but you seem to already have the keypoints, so you should be good to go.
import numpy as np
import cv2
from matplotlib import pyplot as plt
MIN_MATCH_COUNT = 4
img1 = cv2.imread('d:/one_animal_rotated.jpg',0) # queryImage
img2 = cv2.imread('d:/many_animals.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create(0,3,0)
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
#find matches using FLANN
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
#apply ratio test to find best matches (values from 0.7-1 made sense here)
good = []
for m,n in matches:
    if m.distance < 1*n.distance:
        good.append(m)
#find homography to transform the edges of the query image and draw them on the train image
#This is also used to mask all keypoints that aren't inside this box further below.
src_pts = np.float32([ kp1[m.queryIdx].pt for m in good]).reshape(-1,1,2)
dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good]).reshape(-1,1,2)
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
matchesMask = mask.ravel().tolist()
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
img2 = cv2.polylines(img2,[np.int32(dst)],True,255,3, cv2.LINE_AA)
#draw the good matched key points
draw_params = dict(matchColor = (0,255,0), # draw matches in green color
singlePointColor = None,
matchesMask = matchesMask, # draw only inliers
flags = 2)
img3 = cv2.drawMatches(img1,kp1,img2,kp2,good,None,**draw_params)
plt.figure()
plt.imshow(img3, 'gray')
plt.show()
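For the Java code in the question, the OpenCV Java bindings expose the same drawing routine as Features2d.drawMatches(). Below is a minimal, untested sketch of a helper method that could be added to the ImageDetectionFilter class above; the method name drawSceneMatches and the 50.0 distance cutoff are illustrative assumptions, not part of the original class. Note the argument order: in mDescriptorMatcher.match() the scene was the query image and the reference was the train image, so the scene goes first here.
// Requires: import org.opencv.features2d.Features2d;
// A debug helper that renders the match lines between the current
// scene frame and the reference image into outImg.
public void drawSceneMatches(final Mat outImg) {
    // Keep only the stronger matches, mirroring the distance test
    // used in findSceneCorners(). The 50.0 cutoff is illustrative.
    final List<DMatch> goodMatchesList = new ArrayList<DMatch>();
    for (final DMatch match : mMatches.toList()) {
        if (match.distance < 50.0) {
            goodMatchesList.add(match);
        }
    }
    final MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(goodMatchesList);

    // drawMatches is happiest when both images share a format, so
    // convert the RGBA reference image to grayscale like the scene.
    final Mat referenceGray = new Mat();
    Imgproc.cvtColor(mReferenceImage, referenceGray,
            Imgproc.COLOR_RGBA2GRAY);

    // Draw the side-by-side visualization with the match lines.
    Features2d.drawMatches(mGraySrc, mSceneKeypoints, // query: the scene
            referenceGray, mReferenceKeypoints,       // train: the reference
            goodMatches, outImg);
}
You could call this from apply(), right after mDescriptorMatcher.match(), and display outImg instead of dst while debugging.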
Regarding "java - DrawMatching between two images - image recognition", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38787930/