I have an RTP stream socket that receives a JPEG stream from a Samsung network camera.
I don't know much about how the JPEG format works, but I do know that this incoming JFIF/JPEG stream gives me a JPEG header of the form
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type-specific |              Fragment Offset                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Type     |       Q       |     Width     |     Height    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
and then
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|       Restart Interval        |F|L|       Restart Count       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
and then in the first packet, there is this header
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      MBZ      |   Precision   |             Length            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Quantization Table Data                    |
|                              ...                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
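Per RFC 2435, my understanding is that those fields decode roughly like this (just a minimal sketch, not my actual code; payload and off are placeholder names, with off pointing at the first byte after the 12-byte RTP header):

// Sketch only: read the RTP/JPEG main header as RFC 2435 describes it.
// 'payload' and 'off' are placeholders; 'off' is the index just past the 12-byte RTP header.
byte typeSpecific = payload[off + 0];
int fragmentOffset = (payload[off + 1] << 16) | (payload[off + 2] << 8) | payload[off + 3]; // 24-bit, big-endian
byte jpegType = payload[off + 4];
byte q = payload[off + 5];
int width = payload[off + 6] * 8;  // carried in 8-pixel multiples
int height = payload[off + 7] * 8; // carried in 8-pixel multiples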
I think I am parsing them correctly. Here is the code showing how I store a JPEG stream packet:
int extraOff = 0;

public bool Decode(byte* data, int offset)
{
    if (_initialized == false)
    {
        type_specific = data[offset + 0];

        _frag[0] = data[offset + 3];
        _frag[1] = data[offset + 2];
        _frag[2] = data[offset + 1];
        _frag[3] = 0x0;
        fragment_offset = System.BitConverter.ToInt32(_frag, 0);

        jpeg_type = data[offset + 4];
        q = data[offset + 5];
        width = data[offset + 6];
        height = data[offset + 7];

        _frag[0] = data[offset + 8];
        _frag[1] = data[offset + 9];
        restart_interval = (ushort)(System.BitConverter.ToUInt16(_frag, 0) & 0x3FF);

        if (width == 0) /** elphel 333 full image size more than just one byte less that < 256 **/
            width = 256;

        byte jpegMBZ = (byte)(data[offset + 12]);
        byte jpegPrecision = (byte)(data[offset + 13]);
        int jpegLength = (int)((data[offset + 14]) * 256 + data[offset + 15]);

        byte[] tableData1 = new byte[64];
        byte[] tableData2 = new byte[64];
        for (int i = 0; i < 64; ++i)
        {
            tableData1[i] = data[offset + 16 + i];
            tableData2[i] = data[offset + 16 + 64 + i];
        }

        byte[] tmp = new byte[1024];
        _offset = Utils.MakeHeaders(tmp, jpeg_type, width, height, tableData1, tableData2, 0);

        qtable = new byte[_offset];
        Array.Copy(tmp, 0, _buffer, 0, _offset);

        _initialized = true;
        tmp = null;
        GC.Collect();

        extraOff = jpegLength + 4;
    }
    else
    {
        _frag[0] = data[15]; // 12 + 3
        _frag[1] = data[14]; // 12 + 2
        _frag[2] = data[13]; // 12 + 1
        _frag[3] = 0x0;
        fragment_offset = System.BitConverter.ToInt32(_frag, 0);

        _frag[0] = data[offset + 8];
        _frag[1] = data[offset + 9];
        restart_interval = (ushort)(System.BitConverter.ToUInt16(_frag, 0) & 0x3FF);

        extraOff = 0;
    }

    return (next_fragment_offset == fragment_offset);
}
public unsafe bool Write(byte* data, int size, out bool sync) //Write(ref byte[] data, int size, out bool sync)
{
    if (Decode(data, 12))
    {
        for (int i = 24 + extraOff; i < size; )
            buffer_ptr[_offset++] = data[i++];

        size -= 24 + extraOff;
        next_fragment_offset += size;
        sync = true;
        return ((data[1] >> 7) == 1);
    }
    else
    {
        _initialized = false;
        _offset = qtable.Length;
        next_fragment_offset = 0;
        sync = false;
        return false;
    }
}
The problem I have is that the JPEG file I save to disk by concatenating the JPEG stream does not display correctly: every image viewer shows the data from the first two incoming packets, but the rest stays gray. I believe that means the data from the third RTP packet through the last one is not being parsed or saved correctly.
This is the frame I get: http://rectsoft.net/ideerge/zzz.jpg
size = rawBuffer.Length;

if (sync == true)
{
    unsafe
    {
        fixed (byte* p = rawBuffer)
        {
            if (_frame.Write(p, size, out sync)) //if (_frame.Write(ref _buffer, size, out sync))
            {
                // i save my buffer to file here
            }
        }
    }
}
else if ((rawBuffer[1] >> 7) == 1)
{
    sync = true;
}
rawBuffer is filled by my UDP receive function; it behaves exactly the same way as my handling of an H.264 stream, and the data looks 100% identical to what I capture with Wireshark while playing the stream in VLC.
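For completeness, the receive side is a plain blocking UDP loop; here is a minimal sketch of the kind of code that fills rawBuffer (the port number and loop are placeholders, not my exact receive function):

// Sketch only: one RTP packet arrives per UDP datagram; 5000 is a placeholder port.
using System.Net;
using System.Net.Sockets;

UdpClient udp = new UdpClient(5000);
IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
while (true)
{
    byte[] rawBuffer = udp.Receive(ref remote); // blocks until a datagram arrives
    // rawBuffer is then handed to the code shown above
}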
Best Answer
See my implementation at https://net7mma.codeplex.com/SourceControl/latest#Rtp/RFC2435Frame.cs
It is much simpler than the implementation above, and there are also RtspClient and RtpClient classes if you need them.
Excerpt:
#region Methods

/// <summary>
/// Writes the packets to a memory stream and creates the default header and quantization tables if necessary.
/// Assigns Image from the result
/// </summary>
internal virtual void ProcessPackets(bool allowLegacyPackets = false)
{
    if (!Complete) return;

    byte TypeSpecific, Type, Quality;
    ushort Width, Height, RestartInterval = 0, RestartCount = 0;
    uint FragmentOffset;

    //A byte which is bit mapped, each bit indicates 16 bit coeffecients for the table.
    byte PrecisionTable = 0;

    ArraySegment<byte> tables = default(ArraySegment<byte>);

    Buffer = new System.IO.MemoryStream();

    //Loop each packet
    foreach (RtpPacket packet in m_Packets.Values)
    {
        //Payload starts at the offset of the first PayloadOctet
        int offset = packet.NonPayloadOctets;

        if (packet.Extension) throw new NotSupportedException("RFC2035 nor RFC2435 defines extensions.");

        //Decode RtpJpeg Header
        TypeSpecific = (packet.Payload.Array[packet.Payload.Offset + offset++]);
        FragmentOffset = (uint)(packet.Payload.Array[packet.Payload.Offset + offset++] << 16 | packet.Payload.Array[packet.Payload.Offset + offset++] << 8 | packet.Payload.Array[packet.Payload.Offset + offset++]);

        #region RFC2435 - The Type Field

        /*
           4.1. The Type Field

           The Type field defines the abbreviated table-specification and
           additional JFIF-style parameters not defined by JPEG, since they are
           not present in the body of the transmitted JPEG data.

           Three ranges of the type field are currently defined. Types 0-63 are
           reserved as fixed, well-known mappings to be defined by this document
           and future revisions of this document. Types 64-127 are the same as
           types 0-63, except that restart markers are present in the JPEG data
           and a Restart Marker header appears immediately following the main
           JPEG header. Types 128-255 are free to be dynamically defined by a
           session setup protocol (which is beyond the scope of this document).

           Of the first group of fixed mappings, types 0 and 1 are currently
           defined, along with the corresponding types 64 and 65 that indicate
           the presence of restart markers. They correspond to an abbreviated
           table-specification indicating the "Baseline DCT sequential" mode,
           8-bit samples, square pixels, three components in the YUV color
           space, standard Huffman tables as defined in [1, Annex K.3], and a
           single interleaved scan with a scan component selector indicating
           components 1, 2, and 3 in that order. The Y, U, and V color planes
           correspond to component numbers 1, 2, and 3, respectively. Component
           1 (i.e., the luminance plane) uses Huffman table number 0 and
           quantization table number 0 (defined below) and components 2 and 3
           (i.e., the chrominance planes) use Huffman table number 1 and
           quantization table number 1 (defined below).

           Type numbers 2-5 are reserved and SHOULD NOT be used. Applications
           based on previous versions of this document (RFC 2035) should be
           updated to indicate the presence of restart markers with type 64 or
           65 and the Restart Marker header.

           The two RTP/JPEG types currently defined are described below:

                                horizontal  vertical   Quantization
               types  component samp. fact. samp. fact. table number
             +-------------------------------------------------------+
             |       |  1 (Y)  |     2     |     1     |      0      |
             | 0, 64 |  2 (U)  |     1     |     1     |      1      |
             |       |  3 (V)  |     1     |     1     |      1      |
             +-------------------------------------------------------+
             |       |  1 (Y)  |     2     |     2     |      0      |
             | 1, 65 |  2 (U)  |     1     |     1     |      1      |
             |       |  3 (V)  |     1     |     1     |      1      |
             +-------------------------------------------------------+

           These sampling factors indicate that the chrominance components of
           type 0 video is downsampled horizontally by 2 (often called 4:2:2)
           while the chrominance components of type 1 video are downsampled both
           horizontally and vertically by 2 (often called 4:2:0).

           Types 0 and 1 can be used to carry both progressively scanned and
           interlaced image data. This is encoded using the Type-specific field
           in the main JPEG header. The following values are defined:

              0 : Image is progressively scanned. On a computer monitor, it can
                  be displayed as-is at the specified width and height.

              1 : Image is an odd field of an interlaced video signal. The
                  height specified in the main JPEG header is half of the height
                  of the entire displayed image. This field should be de-
                  interlaced with the even field following it such that lines
                  from each of the images alternate. Corresponding lines from
                  the even field should appear just above those same lines from
                  the odd field.

              2 : Image is an even field of an interlaced video signal.

              3 : Image is a single field from an interlaced video signal, but
                  it should be displayed full frame as if it were received as
                  both the odd & even fields of the frame. On a computer
                  monitor, each line in the image should be displayed twice,
                  doubling the height of the image.
         */

        #endregion

        Type = (packet.Payload.Array[packet.Payload.Offset + offset++]);

        //Check for a RtpJpeg Type of less than 5 used in RFC2035 for which RFC2435 is the errata
        if (!allowLegacyPackets && Type >= 2 && Type <= 5)
        {
            //Should allow for 2035 decoding seperately
            throw new ArgumentException("Type numbers 2-5 are reserved and SHOULD NOT be used. Applications based on RFC 2035 should be updated to indicate the presence of restart markers with type 64 or 65 and the Restart Marker header.");
        }

        Quality = packet.Payload.Array[packet.Payload.Offset + offset++];
        Width = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] * 8);  // in 8 pixel multiples
        Height = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] * 8); // in 8 pixel multiples

        //It is worth noting Rtp does not care what you send and more tags such as comments and or higher resolution pictures may be sent and these values will simply be ignored.

        //Restart Interval 64 - 127
        if (Type > 63 && Type < 128)
        {
            /*
               This header MUST be present immediately after the main JPEG header
               when using types 64-127. It provides the additional information
               required to properly decode a data stream containing restart markers.

                0                   1                   2                   3
                0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
               |       Restart Interval        |F|L|       Restart Count       |
               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
             */
            RestartInterval = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] << 8 | packet.Payload.Array[packet.Payload.Offset + offset++]);
            RestartCount = (ushort)((packet.Payload.Array[packet.Payload.Offset + offset++] << 8 | packet.Payload.Array[packet.Payload.Offset + offset++]) & 0x3fff);
        }

        // A Q value of 255 denotes that the quantization table mapping is dynamic and can change on every frame.
        // Decoders MUST NOT depend on any previous version of the tables, and need to reload these tables on every frame.
        if (/*FragmentOffset == 0 || */Buffer.Position == 0)
        {
            //RFC2435 http://tools.ietf.org/search/rfc2435#section-3.1.8
            //3.1.8. Quantization Table header
            /*
               This header MUST be present after the main JPEG header (and after the
               Restart Marker header, if present) when using Q values 128-255. It
               provides a way to specify the quantization tables associated with
               this Q value in-band.
             */
            if (Quality == 0) throw new InvalidOperationException("(Q)uality = 0 is Reserved.");
            else if (Quality >= 100)
            {
                /* http://tools.ietf.org/search/rfc2435#section-3.1.8
                 * Quantization Table Header
                 * -------------------------
                    0                   1                   2                   3
                    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                   |      MBZ      |   Precision   |             Length            |
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                   |                    Quantization Table Data                    |
                   |                              ...                              |
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                 */
                if ((packet.Payload.Array[packet.Payload.Offset + offset++]) != 0)
                {
                    //Must Be Zero is Not Zero
                    if (System.Diagnostics.Debugger.IsAttached) System.Diagnostics.Debugger.Break();
                }

                //Read the PrecisionTable (notes below)
                PrecisionTable = (packet.Payload.Array[packet.Payload.Offset + offset++]);

                #region RFC2435 Length Field

                /*
                   The Length field is set to the length in bytes of the quantization
                   table data to follow. The Length field MAY be set to zero to
                   indicate that no quantization table data is included in this frame.
                   See section 4.2 for more information. If the Length field in a
                   received packet is larger than the remaining number of bytes, the
                   packet MUST be discarded.

                   When table data is included, the number of tables present depends on
                   the JPEG type field. For example, type 0 uses two tables (one for
                   the luminance component and one shared by the chrominance
                   components). Each table is an array of 64 values given in zig-zag
                   order, identical to the format used in a JFIF DQT marker segment.

                   * PrecisionTable *

                   For each quantization table present, a bit in the Precision field
                   specifies the size of the coefficients in that table. If the bit is
                   zero, the coefficients are 8 bits yielding a table length of 64
                   bytes. If the bit is one, the coefficients are 16 bits for a table
                   length of 128 bytes. For 16 bit tables, the coefficients are
                   presented in network byte order. The rightmost bit in the Precision
                   field (bit 15 in the diagram above) corresponds to the first table
                   and each additional table uses the next bit to the left. Bits beyond
                   those corresponding to the tables needed by the type in use MUST be
                   ignored.
                 */

                #endregion

                //Length of all tables
                ushort Length = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] << 8 | packet.Payload.Array[packet.Payload.Offset + offset++]);

                //If there is Table Data Read it from the payload, Length should never be larger than 128 * tableCount
                if (Length == 0 && Quality == byte.MaxValue) throw new InvalidOperationException("RtpPackets MUST NOT contain Q = 255 and Length = 0.");
                else if (Length > packet.Payload.Count - offset) //If the indicated length is greater than that of the packet taking into account the offset
                    continue; // The packet must be discarded

                //Copy the tables present
                tables = new ArraySegment<byte>(packet.Payload.Array, packet.Payload.Offset + offset, (int)Length);
                offset += (int)Length;
            }
            else // Create them from the given Quality parameter ** Duality (Unify Branch)
            {
                tables = new ArraySegment<byte>(CreateQuantizationTables(Type, Quality, PrecisionTable));
            }

            //Write the JFIF Header after reading or generating the QTables
            byte[] header = CreateJFIFHeader(Type, Width, Height, tables, PrecisionTable, RestartInterval);
            Buffer.Write(header, 0, header.Length);
        }

        //Write the Payload data from the offset
        Buffer.Write(packet.Payload.Array, packet.Payload.Offset + offset, packet.Payload.Count - (offset + packet.PaddingOctets));
    }

    //Check for EOI Marker and write if not found
    if (Buffer.Position == Buffer.Length || Buffer.ReadByte() != JpegMarkers.EndOfInformation)
    {
        Buffer.WriteByte(JpegMarkers.Prefix);
        Buffer.WriteByte(JpegMarkers.EndOfInformation);
    }

    //Create the Image from the Buffer
    Image = System.Drawing.Image.FromStream(Buffer);
}
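Once ProcessPackets has run, the decoded frame is exposed as a System.Drawing.Image, so writing it to disk is just the standard Save call. A minimal sketch (SaveFrame is a hypothetical helper, not part of the library; it only shows what to do with the Image the excerpt assigns):

using System.Drawing;
using System.Drawing.Imaging;

// Hypothetical helper: persist the Image that ProcessPackets assigned above.
static void SaveFrame(Image decodedImage, string path)
{
    decodedImage.Save(path, ImageFormat.Jpeg); // standard System.Drawing call to write the JPEG
}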
About c# - Saving JPEG files from a network camera RTP stream, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/8118881/