
c++ - Errors in OpenCV code for motion detection

Reposted · Author: 太空狗 · Updated: 2023-10-29 23:46:02

I am looking for ways to implement human motion tracking, as also discussed in multiple moving object detection: extracting it from multiple video frames using frame-differencing analysis and the Lucas-Kanade optical flow method.

I found scientific papers stating that for continuous motion tracking one has to use filtering connected components, but I do not know how to carry out that process. All I need are the skeletonization trajectory and the coordinates of the human gait motion.
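For reference, the core idea behind connected-component filtering can be sketched without OpenCV at all. The BFS labelling below over a toy binary mask is a minimal illustration of what OpenCV's connected-component functions do internally; the `componentSizes` helper and the 4-connectivity choice are my own, for illustration only:

```cpp
#include <queue>
#include <utility>
#include <vector>

// Label 4-connected foreground components in a binary mask and return
// their pixel counts, in scan order. Tracking code would then keep only
// components whose size passes a threshold.
std::vector<int> componentSizes(const std::vector<std::vector<int>>& mask) {
    int rows = mask.size(), cols = mask[0].size();
    std::vector<std::vector<int>> label(rows, std::vector<int>(cols, 0));
    std::vector<int> sizes;
    int next = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (mask[r][c] == 0 || label[r][c] != 0) continue;
            ++next;                       // start a new component
            int count = 0;
            std::queue<std::pair<int, int>> q;
            q.push({r, c});
            label[r][c] = next;
            while (!q.empty()) {
                auto [y, x] = q.front();
                q.pop();
                ++count;
                const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
                for (int k = 0; k < 4; ++k) {
                    int ny = y + dy[k], nx = x + dx[k];
                    if (ny >= 0 && ny < rows && nx >= 0 && nx < cols &&
                        mask[ny][nx] == 1 && label[ny][nx] == 0) {
                        label[ny][nx] = next;
                        q.push({ny, nx});
                    }
                }
            }
            sizes.push_back(count);
        }
    return sizes;
}
```

In real code this role is played by a library call (e.g. OpenCV's contour/blob routines discussed in the answer below); the sketch only shows what "connected components" means on pixel data.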

I am using OpenCV with C++, but the OpenCV documentation on object detection is insufficient for my case. I come from a medical background and need this as part of a pediatrics project.

I found this motion detection code and tried to run it (I do not yet know whether it actually detects and tracks motion). However, it returned the errors below, which confuses me: they look trivial, and other commenters mention being able to run this code, yet I cannot resolve them or understand why they occur. I am using OpenCV 2.3; here are the errors:

  • cannot open source file "stdafx.h"
  • warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
  • error C2086: 'CvSize imgSize' : redefinition
  • error C2065: 'temp' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function'
    1> c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage'
  • error C2065: 'difference' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function'
    1> c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage'
  • error C2065: 'greyImage' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function'
    c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage'
  • error C2065: 'movingAverage' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseImage' : redefinition; previous definition was 'function'
    1> c:\opencv2.3\opencv\build\include\opencv2\core\core_c.h(87) : see declaration of 'cvReleaseImage'
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvDestroyWindow' : redefinition; previous definition was 'function'
    c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(136) : see declaration of 'cvDestroyWindow'
  • error C2440: 'initializing' : cannot convert from 'const char [10]' to 'int'
    1> There is no context in which this conversion is possible
  • error C2065: 'input' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseCapture' : redefinition; previous definition was 'function'
    1> c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(311) : see declaration of 'cvReleaseCapture'
  • error C2065: 'outputMovie' : undeclared identifier
  • error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
  • error C2365: 'cvReleaseVideoWriter' : redefinition; previous definition was 'function'
    1> c:\opencv2.3\opencv\build\include\opencv2\highgui\highgui_c.h(436) : see declaration of 'cvReleaseVideoWriter'
  • error C2059: syntax error : 'return'

    ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
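A note on reading this wall of errors: most of them cascade from one mistake. The C2086 redefinition of `CvSize imgSize` occurs because the variable is declared twice; once the compiler rejects that statement, the later "undeclared identifier" and "redefinition; previous definition was 'function'" messages follow as the parser loses sync. A minimal sketch of the pattern and its fix, using hypothetical `Size`/`getSize` stand-ins so OpenCV is not needed to reproduce it:

```cpp
// Hypothetical stand-ins for CvSize / cvGetSize, only to show the pattern.
struct Size { int width, height; };
Size getSize() { return {720, 576}; }

Size initSize() {
    Size imgSize;                 // first declaration
    // Size imgSize = getSize();  // error C2086: 'imgSize' : redefinition
    imgSize = getSize();          // fix: assign to the already-declared variable
    return imgSize;
}
```

The same assign-instead-of-redeclare fix appears in the accepted answer's edited code below the question.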

  • CODE
    // MotionDetection.cpp : Defines the entry point for the console application.
    //


    // Contourold.cpp : Defines the entry point for the console application.
    //
    #include "stdafx.h"

    #include "iostream"
    #include "stdlib.h"

    // OpenCV includes.
    #include "cv.h"
    #include "highgui.h"
    #pragma comment(lib,"cv.lib")
    #pragma comment(lib,"cxcore.lib")
    #pragma comment(lib,"highgui.lib")

    using namespace std;

    int main(int argc, char* argv[])
    {

    //Create a new window.
    cvNamedWindow("My Window", CV_WINDOW_AUTOSIZE);

    //Create a new movie capture object.
    CvCapture *input;

    //Assign the movie to capture.
    //inputMovie = cvCaptureFromAVI("vinoth.avi");

    char *fileName = "E:\\highway.avi";
    //char *fileName = "D:\\Profile\\AVI\\cardriving.wmv";
    input = cvCaptureFromFile(fileName);
    //if (!input)

    //cout << "Can't open file" << fileName < ;



    //Size of the image.
    CvSize imgSize;
    IplImage* frame = cvQueryFrame(input);
    CvSize imgSize = cvGetSize(frame);

    //Images to use in the program.
    IplImage* greyImage = cvCreateImage( imgSize, IPL_DEPTH_8U, 1);
    IplImage* colourImage;
    IplImage* movingAverage = cvCreateImage( imgSize, IPL_DEPTH_32F, 3);
    IplImage* difference;
    IplImage* temp;
    IplImage* motionHistory = cvCreateImage( imgSize, IPL_DEPTH_8U, 3);

    //Rectangle to use to put around the people.
    CvRect bndRect = cvRect(0,0,0,0);

    //Points for the edges of the rectangle.
    CvPoint pt1, pt2;

    //Create a font object.
    CvFont font;


    //Create video to output to.
    char* outFilename = argc==2 ? argv[1] : "E:\\outputMovie.avi";
    CvVideoWriter* outputMovie = cvCreateVideoWriter(outFilename,
    CV_FOURCC('F', 'L', 'V', 'I'), 29.97, cvSize(720, 576));

    //Capture the movie frame by frame.
    int prevX = 0;
    int numPeople = 0;

    //Buffer to save the number of people when converting the integer
    //to a string.
    char wow[65];

    //The midpoint X position of the rectangle surrounding the moving objects.
    int avgX = 0;

    //Indicates whether this is the first time in the loop of frames.
    bool first = true;

    //Indicates the contour which was closest to the left boundary before the object
    //entered the region between the buildings.
    int closestToLeft = 0;
    //Same as above, but for the right.
    int closestToRight = 320;

    //Keep processing frames...
    for(;;)
    {
    //Get a frame from the input video.
    colourImage = cvQueryFrame(input);

    //If there are no more frames, jump out of the for.
    if( !colourImage )
    {
    break;
    }

    //If this is the first time, initialize the images.
    if(first)
    {
    difference = cvCloneImage(colourImage);
    temp = cvCloneImage(colourImage);
    cvConvertScale(colourImage, movingAverage, 1.0, 0.0);
    first = false;
    }
    //else, make a running average of the motion.
    else
    {
    cvRunningAvg(colourImage, movingAverage, 0.020, NULL);
    }

    //Convert the scale of the moving average.
    cvConvertScale(movingAverage,temp, 1.0, 0.0);

    //Minus the current frame from the moving average.
    cvAbsDiff(colourImage,temp,difference);

    //Convert the image to grayscale.
    cvCvtColor(difference,greyImage,CV_RGB2GRAY);

    //Convert the image to black and white.
    cvThreshold(greyImage, greyImage, 70, 255, CV_THRESH_BINARY);

    //Dilate and erode to get people blobs
    cvDilate(greyImage, greyImage, 0, 18);
    cvErode(greyImage, greyImage, 0, 10);

    //Find the contours of the moving images in the frame.
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;
    cvFindContours( greyImage, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    //Process each moving contour in the current frame...
    for( ; contour != 0; contour = contour->h_next )
    {
    //Get a bounding rectangle around the moving object.
    bndRect = cvBoundingRect(contour, 0);

    pt1.x = bndRect.x;
    pt1.y = bndRect.y;
    pt2.x = bndRect.x + bndRect.width;
    pt2.y = bndRect.y + bndRect.height;

    //Get an average X position of the moving contour.
    avgX = (pt1.x + pt2.x) / 2;

    //If the contour is within the edges of the building...
    if(avgX > 90 && avgX < 250)
    {
    //If the the previous contour was within 2 of the left boundary...
    if(closestToLeft >= 88 && closestToLeft <= 90)
    {
    //If the current X position is greater than the previous...
    if(avgX > prevX)
    {
    //Increase the number of people.
    numPeople++;

    //Reset the closest object to the left indicator.
    closestToLeft = 0;
    }
    }
    //else if the previous contour was within 2 of the right boundary...
    else if(closestToRight >= 250 && closestToRight <= 252)
    {
    //If the current X position is less than the previous...
    if(avgX < prevX)
    {
    //Increase the number of people.
    numPeople++;

    //Reset the closest object to the right counter.
    closestToRight = 320;
    }
    }

    //Draw the bounding rectangle around the moving object.
    cvRectangle(colourImage, pt1, pt2, CV_RGB(255,0,0), 1);
    }

    //If the current object is closer to the left boundary but still not across
    //it, then change the closest to the left counter to this value.
    if(avgX > closestToLeft && avgX <= 90)
    {
    closestToLeft = avgX;
    }

    //If the current object is closer to the right boundary but still not across
    //it, then change the closest to the right counter to this value.
    if(avgX < closestToRight && avgX >= 250)
    {
    closestToRight = avgX;
    }

    //Save the current X value to use as the previous in the next iteration.
    prevX = avgX;
    }
    //Save the current X value to use as the previous in the next iteration.
    prevX = avgX;
    }


    //Write the number of people counted at the top of the output frame.
    cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.8, 0.8, 0, 2);
    cvPutText(colourImage, _itoa(numPeople, wow, 10), cvPoint(60, 200), &font, cvScalar(0, 0, 300));

    //Show the frame.
    cvShowImage("My Window", colourImage);

    //Wait for the user to see it.
    cvWaitKey(10);

    //Write the frame to the output movie.
    cvWriteFrame(outputMovie, colourImage);
    }

    // Destroy the image, movies, and window.
    cvReleaseImage(&temp);
    cvReleaseImage(&difference);
    cvReleaseImage(&greyImage);
    cvReleaseImage(&movingAverage);
    cvDestroyWindow("My Window");

    cvReleaseCapture(&input);
    cvReleaseVideoWriter(&outputMovie);


    return 0;

    }
  • Please help resolve the errors and issues.
  • How can I do motion (human) tracking, ideally returning the trajectory coordinates via a skeletonization method?
Best Answer

1.) I assume you copied the code directly from the website (correct me if I'm wrong). However, since you are using OpenCV 2.3, most of the APIs now live in different modules. Below are the includes you should have...

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

along with the corresponding libraries.

2.) To filter connected components, you can use the cvblob library. I think the old blob library shipped with OpenCV was built with VC 6, which is probably why stdafx.h is required.
3.) Walk slowly through the code to clear up the remaining syntax and redeclaration errors.
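As a supplement to point 2: whichever library produces the blobs, the filtering step itself usually amounts to discarding components whose area falls below a threshold before tracking. A minimal sketch in plain C++ (this is not the cvblob API; the `Blob` struct and `filterByArea` name are illustrative):

```cpp
#include <vector>

// Bounding box of a connected component, as a contour/blob library would report it.
struct Blob { int x, y, width, height; };

// Keep only blobs whose bounding-box area reaches minArea -- the usual way
// noise speckles are removed before the tracking loop runs.
std::vector<Blob> filterByArea(const std::vector<Blob>& blobs, int minArea) {
    std::vector<Blob> kept;
    for (const auto& b : blobs)
        if (b.width * b.height >= minArea)
            kept.push_back(b);
    return kept;
}
```

In the code below, an equivalent check could be applied to each `bndRect` before drawing and counting, so that tiny difference-image artifacts are ignored.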

Edited code
    #include <iostream>
    #include "stdlib.h"


    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    #include "cv.h"
    #include "highgui.h"

    using namespace std;
    using namespace cv;

    int main(int argc, char* argv[])
    {
    cvNamedWindow("My Window", CV_WINDOW_AUTOSIZE);
    CvCapture *input;

    //char *fileName = "E:\\highway.avi";
    input = cvCaptureFromCAM(0);

    //input = cvCaptureFromFile(fileName);

    CvSize imgSize;
    IplImage* frame = cvQueryFrame(input);
    imgSize = cvGetSize(frame);

    IplImage* greyImage = cvCreateImage( imgSize, IPL_DEPTH_8U, 1);
    IplImage* colourImage;
    IplImage* movingAverage = cvCreateImage( imgSize, IPL_DEPTH_32F, 3);
    IplImage* difference;
    IplImage* temp;
    IplImage* motionHistory = cvCreateImage( imgSize, IPL_DEPTH_8U, 3);

    CvRect bndRect = cvRect(0,0,0,0);

    CvPoint pt1, pt2;

    CvFont font;


    char* outFilename = argc==2 ? argv[1] : "E:\\outputMovie.avi";
    CvVideoWriter* outputMovie = cvCreateVideoWriter(outFilename,
    CV_FOURCC('F', 'L', 'V', 'I'), 29.97, cvSize(720, 576));

    int prevX = 0;
    int numPeople = 0;

    char wow[65];

    int avgX = 0;

    bool first = true;

    int closestToLeft = 0;
    int closestToRight = 320;

    for(;;)
    {
    colourImage = cvQueryFrame(input);

    if( !colourImage )
    {
    break;
    }
    if(first)
    {
    difference = cvCloneImage(colourImage);
    temp = cvCloneImage(colourImage);
    cvConvertScale(colourImage, movingAverage, 1.0, 0.0);
    first = false;
    }
    else

    {
    cvRunningAvg(colourImage, movingAverage, 0.020, NULL);
    }

    cvConvertScale(movingAverage,temp, 1.0, 0.0);

    cvAbsDiff(colourImage,temp,difference);

    cvCvtColor(difference,greyImage,CV_RGB2GRAY);

    cvThreshold(greyImage, greyImage, 70, 255, CV_THRESH_BINARY);

    cvDilate(greyImage, greyImage, 0, 18);
    cvErode(greyImage, greyImage, 0, 10);

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;

    cvFindContours( greyImage, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );


    for( ; contour != 0; contour = contour->h_next )
    {
    bndRect = cvBoundingRect(contour, 0);
    pt1.x = bndRect.x;
    pt1.y = bndRect.y;
    pt2.x = bndRect.x + bndRect.width;
    pt2.y = bndRect.y + bndRect.height;

    avgX = (pt1.x + pt2.x) / 2;

    if(avgX > 90 && avgX < 250)
    {
    if(closestToLeft >= 88 && closestToLeft <= 90)
    {
    if(avgX > prevX)
    {
    numPeople++;
    closestToLeft = 0;
    }
    }
    else if(closestToRight >= 250 && closestToRight <= 252)
    {
    if(avgX < prevX)
    {
    numPeople++;
    closestToRight = 320;
    }
    }
    cvRectangle(colourImage, pt1, pt2, CV_RGB(255,0,0), 1);
    }

    if(avgX > closestToLeft && avgX <= 90)
    {
    closestToLeft = avgX;
    }

    if(avgX < closestToRight && avgX >= 250)
    {
    closestToRight = avgX;
    }

prevX = avgX;

    }

    cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.8, 0.8, 0, 2);
    cvPutText(colourImage, _itoa(numPeople, wow, 10), cvPoint(60, 200), &font, cvScalar(0, 0, 300));
    cvShowImage("My Window", colourImage);

    cvWaitKey(10);
    cvWriteFrame(outputMovie, colourImage);

    }


    cvReleaseImage(&temp);
    cvReleaseImage(&difference);
    cvReleaseImage(&greyImage);
    cvReleaseImage(&movingAverage);
    cvDestroyWindow("My Window");

    cvReleaseCapture(&input);
    cvReleaseVideoWriter(&outputMovie);


    return 0;

    }

It at least compiles correctly... it still has some runtime errors, and I don't have a debugger at hand right now to trace them... give it a try... I am trying it too..
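On the asker's second question (trajectory coordinates): short of full skeletonization, a simple first step is to log the centroid of the moving blob's bounding rectangle in every frame. A sketch under that assumption — the `Rect` struct mirrors CvRect, and `recordCentroid` is a hypothetical helper one would call with `bndRect` inside the frame loop above:

```cpp
#include <utility>
#include <vector>

// Mirrors the fields of OpenCV's CvRect.
struct Rect { int x, y, width, height; };

// Append the centre of one frame's bounding rectangle to a trajectory.
// Called once per frame, this accumulates per-frame (x, y) coordinates
// of the tracked person, which can then be analysed for gait.
void recordCentroid(const Rect& r, std::vector<std::pair<int, int>>& trajectory) {
    trajectory.push_back({r.x + r.width / 2, r.y + r.height / 2});
}
```

True skeleton-based gait analysis would extract limb joints rather than one centroid, but a centroid track is often enough to start plotting the walking path.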

Regarding c++ - errors in OpenCV code for motion detection, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14309111/
