
c++ - How to capture video from a camera in real time?


I am using USB 3.0 Basler acA640-750uc cameras to capture video. Below is the program that uses 2 cameras and grabs frames:

The problem is that when I run this program, my computer captures video from the 2 cameras, but the video lags my actual movements by about 2 seconds. In other words, the video is slower than real time, and I want to capture it in real time. How can I fix this?

I tried changing the condition in for (size_t i = 0; i < cameras.GetSize(); ++i) from ++i to i++, but it did not work.

#include <pylon/PylonIncludes.h>
#include <algorithm>   // std::min
#include <iostream>    // std::cout, std::cerr, std::cin
#ifdef PYLON_WIN_BUILD
#include <pylon/PylonGUI.h>
#endif

// Namespace for using pylon objects.
using namespace Pylon;

// Namespace for using cout.
using namespace std;

// Number of images to be grabbed.
static const uint32_t c_countOfImagesToGrab = 1000;

// Limits the amount of cameras used for grabbing.
// It is important to manage the available bandwidth when grabbing with
// multiple cameras.
// This applies, for instance, if two GigE cameras are connected to the
// same network adapter via a switch.
// To manage the bandwidth, the GevSCPD interpacket delay parameter and
// the GevSCFTD transmission delay
// parameter can be set for each GigE camera device.
// The "Controlling Packet Transmission Timing with the Interpacket and
// Frame Transmission Delays on Basler GigE Vision Cameras"
// Application Notes (AW000649xx000)
// provide more information about this topic.
// The bandwidth used by a FireWire camera device can be limited by
// adjusting the packet size.
static const size_t c_maxCamerasToUse = 2;

int main(int argc, char* argv[])
{
    // The exit code of the sample application.
    int exitCode = 0;

    // Before using any pylon methods, the pylon runtime must be initialized.
    PylonInitialize();

    try
    {
        // Get the transport layer factory.
        CTlFactory& tlFactory = CTlFactory::GetInstance();

        // Get all attached devices and exit application if no device is found.
        DeviceInfoList_t devices;
        if (tlFactory.EnumerateDevices(devices) == 0)
        {
            throw RUNTIME_EXCEPTION("No camera present.");
        }

        // Create an array of instant cameras for the found devices and avoid
        // exceeding a maximum number of devices.
        CInstantCameraArray cameras(min(devices.size(), c_maxCamerasToUse));

        // Create and attach all Pylon Devices.
        for (size_t i = 0; i < cameras.GetSize(); ++i)
        {
            cameras[i].Attach(tlFactory.CreateDevice(devices[i]));

            // Print the model name of the camera.
            cout << "Using device " << cameras[i].GetDeviceInfo().GetModelName() << endl;
        }

        // Starts grabbing for all cameras starting with index 0. The grabbing
        // is started for one camera after the other. That's why the images of
        // all cameras are not taken at the same time.
        // However, a hardware trigger setup can be used to cause all cameras
        // to grab images synchronously.
        // According to their default configuration, the cameras are
        // set up for free-running continuous acquisition.
        cameras.StartGrabbing();

        // This smart pointer will receive the grab result data.
        CGrabResultPtr ptrGrabResult;

        // Grab c_countOfImagesToGrab from the cameras.
        for (uint32_t i = 0; i < c_countOfImagesToGrab && cameras.IsGrabbing(); ++i)
        {
            cameras.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);

            // When the cameras in the array are created the camera context value
            // is set to the index of the camera in the array.
            // The camera context is a user settable value.
            // This value is attached to each grab result and can be used
            // to determine the camera that produced the grab result.
            intptr_t cameraContextValue = ptrGrabResult->GetCameraContext();

#ifdef PYLON_WIN_BUILD
            // Show the image acquired by each camera in the window related to
            // each camera.
            Pylon::DisplayImage(cameraContextValue, ptrGrabResult);
#endif

            // Print the index and the model name of the camera.
            cout << "Camera " << cameraContextValue << ": "
                 << cameras[cameraContextValue].GetDeviceInfo().GetModelName() << endl;

            // Now, the image data can be processed.
            cout << "GrabSucceeded: " << ptrGrabResult->GrabSucceeded() << endl;
            cout << "SizeX: " << ptrGrabResult->GetWidth() << endl;
            cout << "SizeY: " << ptrGrabResult->GetHeight() << endl;
            const uint8_t* pImageBuffer = (uint8_t*)ptrGrabResult->GetBuffer();
            cout << "Gray value of first pixel: " << (uint32_t)pImageBuffer[0] << endl << endl;
        }
    }
    catch (const GenericException& e)
    {
        // Error handling
        cerr << "An exception occurred." << endl
             << e.GetDescription() << endl;
        exitCode = 1;
    }

    // Comment the following two lines to disable waiting on exit.
    cerr << endl << "Press Enter to exit." << endl;
    while (cin.get() != '\n');

    // Releases all pylon resources.
    PylonTerminate();

    return exitCode;
}

Best answer

I have no experience with this, but changing ++i to i++ obviously cannot solve your problem, because in this loop they are equivalent by definition ( for (size_t i = 0; i < cameras.GetSize(); ++i) ).
I am not sure, but judging from the comments in the code, you may need to configure the cameras manually (their configuration may differ from the defaults):

// According to their ***default configuration***, the cameras are
// set up for free-running continuous acquisition.
cameras.StartGrabbing();
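
I am also not sure this is your case, but a common reason for such a lag is that RetrieveResult() returns frames in the order they were buffered, so if the display loop runs slower than the cameras' free-running frame rate, you end up watching a growing backlog instead of the live image. As an untested sketch, assuming that is what happens here, pylon lets you pass a grab strategy to StartGrabbing() that keeps only the newest frame:

// Sketch only: replaces the plain cameras.StartGrabbing() call above.
// GrabStrategy_LatestImageOnly discards older buffered frames, so
// RetrieveResult() always returns the most recently grabbed image
// instead of replaying a backlog a couple of seconds behind real time.
cameras.StartGrabbing(GrabStrategy_LatestImageOnly);

The rest of the grab loop can stay as it is; frames that are not retrieved in time are simply dropped rather than queued.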

Also, read these comments in the code carefully and check whether you have configured the connection and the parameters correctly. I suggest you first try with a single camera (see the sketch after the quoted comments below):

// Limits the amount of cameras used for grabbing.
// It is important to manage the available bandwidth when grabbing with
// multiple cameras.
// This applies, for instance, if two GigE cameras are connected to the
// same network adapter via a switch.
// To manage the bandwidth, the GevSCPD interpacket delay parameter and
// the GevSCFTD transmission delay
// parameter can be set for each GigE camera device.
// The "Controlling Packet Transmission Timing with the Interpacket and
// Frame Transmission Delays on Basler GigE Vision Cameras"
// Application Notes (AW000649xx000)
// provide more information about this topic.
// The bandwidth used by a FireWire camera device can be limited by
// adjusting the packet size.
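
These comments refer to GigE cameras, while the acA640-750uc is a USB 3.0 model, so the GevSCPD/GevSCFTD parameters do not apply directly; the general point about shared bandwidth still holds when two cameras sit on the same host controller, though. A quick, hypothetical way to test this with the sample above is to limit it to a single camera and see whether the lag disappears:

// Sketch only: grab from at most one camera so it gets the full
// USB 3.0 bandwidth. If the delay disappears, the two cameras are
// probably competing for bandwidth or buffers on the same controller.
static const size_t c_maxCamerasToUse = 1;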

Regarding "c++ - How to capture video from a camera in real time?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58818523/
