I am working with a custom board based on the Texas Instruments OMAP-L138, which essentially consists of an ARM9-based SoC and a DSP processor. It is connected to a camera lens. What I want to do is capture the live video stream that is sent to the DSP processor for H264 encoding, which arrives over uPP in packets of 8192 bytes, and then stream the H264-encoded video live over RTSP using the testH264VideoStreamer example provided by Live555. My modified code is shown below:
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <unistd.h> // for open()/read()
UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;
//-------------------------------------------------------------------------------
/* Open the uPP device as a file descriptor */
int stream = open("/dev/upp", O_RDONLY);
/* A static 8192-byte buffer of unsigned 8-bit integers that keeps its contents between invocations of play() */
static uint8_t buf[8192];
//------------------------------------------------------------------------------
//------------------------------------------------------------------------------
// Forward declaration of play()
//------------------------------------------------------------------------------
void play(); // forward
//------------------------------------------------------------------------------
// MAIN FUNCTION / ENTRY POINT
//------------------------------------------------------------------------------
int main(int argc, char** argv)
{
// Begin by setting up our live555 usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
// Create 'groupsocks' for RTP and RTCP:
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
// Note: This is a multicast address. If you wish instead to stream
// using unicast, then you should use the "testOnDemandRTSPServer"
// test program - not this test program - as a model.
const unsigned short rtpPortNum = 18888;
const unsigned short rtcpPortNum = rtpPortNum+1;
const unsigned char ttl = 255;
const Port rtpPort(rtpPortNum);
const Port rtcpPort(rtcpPortNum);
Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly(); // we're a SSM source
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly(); // we're a SSM source
// Create a 'H264 Video RTP' sink from the RTP 'groupsock':
OutPacketBuffer::maxSize = 1000000;
videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);
// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen+1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
RTCPInstance* rtcp
= RTCPInstance::createNew(*env, &rtcpGroupsock,
estimatedSessionBandwidth, CNAME,
videoSink, NULL /* we're a server */,
True /* we're a SSM source */);
// Note: This starts RTCP running automatically
/*Create RTSP SERVER*/
RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
if (rtspServer == NULL)
{
*env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
exit(1);
}
ServerMediaSession* sms
= ServerMediaSession::createNew(*env, "IPCAM @ TeReSol","UPP Buffer" ,
"Session streamed by \"testH264VideoStreamer\"",
True /*SSM*/);
sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
rtspServer->addServerMediaSession(sms);
char* url = rtspServer->rtspURL(sms);
*env << "Play this stream using the URL \"" << url << "\"\n";
delete[] url;
// Start the streaming:
*env << "Beginning streaming...\n";
play();
env->taskScheduler().doEventLoop(); // does not return
return 0; // only to prevent compiler warning
}
//----------------------------------------------------------------------------------
// afterPlaying() -> Defines what to do once a buffer is streamed
//----------------------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
*env << "...done reading from upp buffer\n";
//videoSink->stopPlaying();
//Medium::close(videoSource);
// Note that this also closes the input file that this source read from.
// Start playing once again to get the next stream
play();
/* We don't need to close the device as long as we're reading from it. But if we did, we would call: close(stream); */
}
//----------------------------------------------------------------------------------------------
// play() Method -> Defines how to read and what to make of the input stream
//----------------------------------------------------------------------------------------------
void play()
{
/* Read sizeof(buf) bytes from the uPP file descriptor into buf */
read(stream, buf, sizeof buf);
printf("Reading from UPP into buffer\n");
/* Wrap the buffer as a 'byte-stream memory buffer source': */
ByteStreamMemoryBufferSource* buffSource
= ByteStreamMemoryBufferSource::createNew(*env, buf, sizeof buf, False /*don't delete the buffer on close*/);
/* Passing False to createNew() above means the buffer is not deleted when the source is closed */
if (buffSource == NULL)
{
*env << "Unable to read from\"" << "Buffer"
<< "\" as a byte-stream source\n";
exit(1);
}
FramedSource* videoES = buffSource;
// Create a framer for the Video Elementary Stream:
videoSource = H264VideoStreamFramer::createNew(*env, videoES,False);
// Finally, start playing:
*env << "Beginning to read from UPP...\n";
videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}
The problem is that although the code compiles successfully, I am not getting the desired output. The RTSP stream shows as playing in VLC player, but I cannot see any video. I would appreciate any help with this. My description may be a bit vague, but I am happy to explain any part further.
Best answer
OK, so I figured out what needed to be done, and I am writing this for the benefit of anyone who may face a similar problem. What I needed to do was modify my testH264VideoStreamer.cpp and DeviceSource.cpp files so that they read data directly from the device (a custom am1808 board in my case), store it in a buffer, and stream it. The changes I made were:
testH264VideoStreamer.cpp
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <unistd.h> // for open()/read()
UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;
void play(); // forward
//-------------------------------------------------------------------------
//Entry Point -> Main FUNCTION
//-------------------------------------------------------------------------
int main(int argc, char** argv) {
// Begin by setting up our usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
// Create 'groupsocks' for RTP and RTCP:
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
// Note: This is a multicast address. If you wish instead to stream
// using unicast, then you should use the "testOnDemandRTSPServer"
// test program - not this test program - as a model.
const unsigned short rtpPortNum = 18888;
const unsigned short rtcpPortNum = rtpPortNum+1;
const unsigned char ttl = 255;
const Port rtpPort(rtpPortNum);
const Port rtcpPort(rtcpPortNum);
Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly(); // we're a SSM source
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly(); // we're a SSM source
// Create a 'H264 Video RTP' sink from the RTP 'groupsock':
OutPacketBuffer::maxSize = 600000;
videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);
// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidth = 1024; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen+1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
RTCPInstance* rtcp
= RTCPInstance::createNew(*env, &rtcpGroupsock,
estimatedSessionBandwidth, CNAME,
videoSink, NULL /* we're a server */,
True /* we're a SSM source */);
// Note: This starts RTCP running automatically
RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
if (rtspServer == NULL) {
*env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
exit(1);
}
ServerMediaSession* sms
= ServerMediaSession::createNew(*env, "ipcamera","UPP Buffer" ,
"Session streamed by \"testH264VideoStreamer\"",
True /*SSM*/);
sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
rtspServer->addServerMediaSession(sms);
char* url = rtspServer->rtspURL(sms);
*env << "Play this stream using the URL \"" << url << "\"\n";
delete[] url;
// Start the streaming:
*env << "Beginning streaming...\n";
play();
env->taskScheduler().doEventLoop(); // does not return
return 0; // only to prevent compiler warning
}
//----------------------------------------------------------------------
//AFTER PLAY FUNCTION CALLED HERE
//----------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
play();
}
//------------------------------------------------------------------------
//PLAY FUNCTION ()
//------------------------------------------------------------------------
void play()
{
// Open the device as the input source:
DeviceSource* devSource
= DeviceSource::createNew(*env);
if (devSource == NULL)
{
*env << "Unable to read from\"" << "Buffer"
<< "\" as a byte-stream source\n";
exit(1);
}
FramedSource* videoES = devSource;
// Create a framer for the Video Elementary Stream:
videoSource = H264VideoStreamFramer::createNew(*env, videoES,False);
// Finally, start playing:
*env << "Beginning to read from UPP...\n";
videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}
DeviceSource.cpp
#include "DeviceSource.hh"
#include <GroupsockHelper.hh> // for "gettimeofday()"
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
//static uint8_t *buf = (uint8_t*)malloc(102400);
static uint8_t buf[8192];
int upp_stream;
//static uint8_t *bufPtr = buf;
DeviceSource*
DeviceSource::createNew(UsageEnvironment& env)
{
return new DeviceSource(env);
}
EventTriggerId DeviceSource::eventTriggerId = 0;
unsigned DeviceSource::referenceCount = 0;
DeviceSource::DeviceSource(UsageEnvironment& env):FramedSource(env)
{
if (referenceCount == 0)
{
// Open the uPP device only once, when the first instance is created
upp_stream = open("/dev/upp",O_RDWR);
}
++referenceCount;
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
DeviceSource::~DeviceSource(void) {
--referenceCount;
if (referenceCount == 0)
{
// Last instance: reclaim the event trigger and close the device
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
close(upp_stream);
}
}
int loop_count;
void DeviceSource::doGetNextFrame()
{
//for (loop_count=0; loop_count < 13; loop_count++)
//{
// Blocking read of one 8192-byte uPP packet into the static buffer
read(upp_stream,buf, 8192);
//bufPtr+=8192;
//}
deliverFrame();
}
void DeviceSource::deliverFrame0(void* clientData)
{
((DeviceSource*)clientData)->deliverFrame();
}
void DeviceSource::deliverFrame()
{
if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet
u_int8_t* newFrameDataStart = (u_int8_t*) buf;
unsigned newFrameSize = sizeof(buf);
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
fFrameSize = newFrameSize;
}
gettimeofday(&fPresentationTime, NULL);
memmove(fTo, newFrameDataStart, fFrameSize);
FramedSource::afterGetting(this);
}
After compiling the code with these modifications, I was able to receive the video stream in VLC player.
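For completeness, the DeviceSource.cpp above assumes a matching DeviceSource.hh, which the answer does not show. A minimal sketch, modelled on the standard live555 DeviceSource template (the class and member names follow that template and the .cpp above; the template's DeviceParameters struct is omitted here because this code does not use it), could look like this:

#ifndef _DEVICE_SOURCE_HH
#define _DEVICE_SOURCE_HH

#ifndef _FRAMED_SOURCE_HH
#include "FramedSource.hh"
#endif

class DeviceSource: public FramedSource {
public:
  static DeviceSource* createNew(UsageEnvironment& env);
  static EventTriggerId eventTriggerId;

protected:
  DeviceSource(UsageEnvironment& env);
  virtual ~DeviceSource();

private:
  // Implements FramedSource: called whenever the downstream framer wants more data
  virtual void doGetNextFrame();
  // Static trampoline used with the event trigger; forwards to deliverFrame()
  static void deliverFrame0(void* clientData);
  void deliverFrame();

private:
  static unsigned referenceCount; // number of live DeviceSource instances
};

#endif

Note that in this version doGetNextFrame() reads from /dev/upp synchronously, so eventTriggerId is created but never actually signalled; it is kept to match the template and would only come into play if the reads were moved to a separate thread that calls triggerEvent() to schedule deliverFrame0() on the live555 event loop.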
Regarding "c++ - Streaming live video with Live555 from an IP camera connected to an H264 encoder", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27279161/