c - Is it worth abstracting this process out?


I have the following memory layout:

typedef struct map_obj_s
{
    thing_t**   things;
    linedef_t** linedefs;
    sidedef_t** sidedefs;
    vertex_t**  vertices;
    segment_t** segments;
    ssector_t** subsectors;
    node_t*     node_tree;
    sector_t**  sectors;

    int32_t lump_counts[ MAP_LUMP_COUNT ];
} map_obj_t;

The problem is that I am repeating essentially the same process for every one of these data types, with the exception of the node_tree and lump_counts members.

This is what the repetition looks like:

map_obj_t* Map_Read( lumpbuffer_t* map_lump )
{
    int32_t lump_counts[ MAP_LUMP_COUNT ];

    __GetLumpCounts( map_lump, lump_counts );

    // laziness
    const lumpinfo_t* const mlumps = map_lump->lumps;

    FILE* mapfile = Wad_GetFilePtr();

    map_obj_t* map = Mem_Alloc( 1, sizeof( map_obj_t ) );

    // allocate buffers
    map->things     = Mem_Alloc( lump_counts[ LUMP_THINGS ],   sizeof( thing_t* ) );
    map->linedefs   = Mem_Alloc( lump_counts[ LUMP_LINEDEFS ], sizeof( linedef_t* ) );
    map->sidedefs   = Mem_Alloc( lump_counts[ LUMP_SIDEDEFS ], sizeof( sidedef_t* ) );
    map->vertices   = Mem_Alloc( lump_counts[ LUMP_VERTICES ], sizeof( vertex_t* ) );
    map->segments   = Mem_Alloc( lump_counts[ LUMP_SEGMENTS ], sizeof( segment_t* ) );
    map->subsectors = Mem_Alloc( lump_counts[ LUMP_SSECTORS ], sizeof( ssector_t* ) );
    map->node_tree  = Mem_Alloc( lump_counts[ LUMP_NODES ],    sizeof( node_t ) );
    map->sectors    = Mem_Alloc( lump_counts[ LUMP_SECTORS ],  sizeof( sector_t* ) );

    // parse things
    PARSE_LUMP( mapfile,
                map->things,
                sizeof( thing_t ),
                lump_counts[ LUMP_THINGS ],
                mlumps,
                LUMP_THINGS );

    // parse linedefs
    PARSE_LUMP( mapfile,
                map->linedefs,
                sizeof( linedef_t ),
                lump_counts[ LUMP_LINEDEFS ],
                mlumps,
                LUMP_LINEDEFS );

    // parse sidedefs
    PARSE_LUMP( mapfile,
                map->sidedefs,
                sizeof( sidedef_t ),
                lump_counts[ LUMP_SIDEDEFS ],
                mlumps,
                LUMP_SIDEDEFS );

    // parse vertices
    PARSE_LUMP( mapfile,
                map->vertices,
                sizeof( vertex_t ),
                lump_counts[ LUMP_VERTICES ],
                mlumps,
                LUMP_VERTICES );

    // parse segments
    PARSE_LUMP( mapfile,
                map->segments,
                sizeof( segment_t ),
                lump_counts[ LUMP_SEGMENTS ],
                mlumps,
                LUMP_SEGMENTS );

    // parse subsectors
    PARSE_LUMP( mapfile,
                map->subsectors,
                sizeof( ssector_t ),
                lump_counts[ LUMP_SSECTORS ],
                mlumps,
                LUMP_SSECTORS );

    // parse nodes
    PARSE_LUMP( mapfile,
                map->node_tree,
                sizeof( node_t ),
                lump_counts[ LUMP_NODES ],
                mlumps,
                LUMP_NODES );

    // parse sectors
    PARSE_LUMP( mapfile,
                map->sectors,
                sizeof( sector_t ),
                lump_counts[ LUMP_SECTORS ],
                mlumps,
                LUMP_SECTORS );

    memcpy( map->lump_counts, lump_counts, sizeof( int32_t ) * MAP_LUMP_COUNT );

    return map;
}

And the PARSE_LUMP macro:

#define PARSE_LUMP( wad_fileptr, data, data_size, count, lumps_ptr, lump_type )     \
    do {                                                                             \
        Mem_AllocBuffer( ( generic_buffer_t ) ( data ), ( data_size ), ( count ) );  \
                                                                                     \
        fseek( ( wad_fileptr ),                                                      \
               ( lumps_ptr )[ ( lump_type ) ].address_offset,                        \
               SEEK_SET );                                                           \
                                                                                     \
        for ( int32_t i = 0; i < ( count ); ++i )                                    \
        {                                                                            \
            fread( ( data )[ i ], ( data_size ), 1, ( wad_fileptr ) );               \
        }                                                                            \
    } while ( 0 )

The point

Am I wrong to want to abstract this away? Sure, it's readable, but the idea involves a lot of code. I'm not a great C programmer (this is my first real/serious project), but I do have C++ experience. In C++ this would be easy with templates, but in C I'm limited to void* and function-like macros. Serialization seems like a possibility, but all of the problems appear to come down to the fact that my buffers are pointers to pointers. Is there any merit to doing this, or am I just wasting my time even bothering with it? Not to mention that I'm not even sure how I would dynamically allocate memory from a serialized structure.
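For what it's worth, the void*-driven abstraction the question hints at could look like a small table plus one loop inside Map_Read. The sketch below is purely illustrative: it reuses the names from the question (Mem_AllocBuffer, generic_buffer_t, mapfile, mlumps, lump_counts, the LUMP_* indices), but the lump_job_t type and the loop are assumptions, not code from the project. node_tree, being a flat node_t* rather than a pointer-to-pointer, would keep its own PARSE_LUMP call.

// Hypothetical sketch, meant to sit inside Map_Read after the Mem_Alloc calls.
typedef struct lump_job_s
{
    void**  buffer;        // pointer array, already allocated with Mem_Alloc
    size_t  element_size;  // size of one parsed element
    int     lump_type;     // index into lump_counts and mlumps
} lump_job_t;

const lump_job_t jobs[] =
{
    { ( void** ) map->things,     sizeof( thing_t ),   LUMP_THINGS   },
    { ( void** ) map->linedefs,   sizeof( linedef_t ), LUMP_LINEDEFS },
    { ( void** ) map->sidedefs,   sizeof( sidedef_t ), LUMP_SIDEDEFS },
    { ( void** ) map->vertices,   sizeof( vertex_t ),  LUMP_VERTICES },
    { ( void** ) map->segments,   sizeof( segment_t ), LUMP_SEGMENTS },
    { ( void** ) map->subsectors, sizeof( ssector_t ), LUMP_SSECTORS },
    { ( void** ) map->sectors,    sizeof( sector_t ),  LUMP_SECTORS  },
};

for ( size_t j = 0; j < sizeof( jobs ) / sizeof( jobs[ 0 ] ); ++j )
{
    // allocate the element buffers, then read them one record at a time,
    // exactly as PARSE_LUMP does today
    Mem_AllocBuffer( ( generic_buffer_t ) jobs[ j ].buffer,
                     jobs[ j ].element_size,
                     lump_counts[ jobs[ j ].lump_type ] );

    fseek( mapfile, mlumps[ jobs[ j ].lump_type ].address_offset, SEEK_SET );

    for ( int32_t i = 0; i < lump_counts[ jobs[ j ].lump_type ]; ++i )
    {
        fread( jobs[ j ].buffer[ i ], jobs[ j ].element_size, 1, mapfile );
    }
}

Whether this reads better than the repeated PARSE_LUMP calls is a matter of taste, but the table does keep each member's element size and lump index on a single line, which is where copy-paste slips tend to creep in.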

Best answer

I guess you have something that works, but I see no reason to allocate and then read each set of lumps separately. If you know the number of lumps and their sizes ahead of time, you can allocate all the memory you need and read the whole file in one go, then simply set each pointer to the start (offset) of its respective set of lumps, like so:

//first set of lumps
map->things = map_data;
//increment pointer
map_data += lump_counts[ LUMP_THINGS ] * sizeof( thing_t );

//second set of lumps
map->linedefs = map_data;
map_data += lump_counts[ LUMP_LINEDEFS ] * sizeof( linedef_t );
...
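A minimal sketch of that idea follows, assuming the on-disk records can be read straight into thing_t, linedef_t, and so on, that the map_obj_t members become flat arrays (thing_t* rather than thing_t**), and that the LUMP_* values index lump_counts and mlumps directly, as the question's code already does. Only mapfile, mlumps, lump_counts and Mem_Alloc are borrowed from the question; everything else is illustrative, not the answerer's actual code.

// Illustrative only: one allocation and one fread per lump set, instead of
// one allocation and one fread per element.
size_t sizes[ MAP_LUMP_COUNT ] = { 0 };
sizes[ LUMP_THINGS ]   = sizeof( thing_t );
sizes[ LUMP_LINEDEFS ] = sizeof( linedef_t );
sizes[ LUMP_SIDEDEFS ] = sizeof( sidedef_t );
sizes[ LUMP_VERTICES ] = sizeof( vertex_t );
sizes[ LUMP_SEGMENTS ] = sizeof( segment_t );
sizes[ LUMP_SSECTORS ] = sizeof( ssector_t );
sizes[ LUMP_NODES ]    = sizeof( node_t );
sizes[ LUMP_SECTORS ]  = sizeof( sector_t );

size_t total = 0;
for ( int32_t t = 0; t < MAP_LUMP_COUNT; ++t )
    total += ( size_t ) lump_counts[ t ] * sizes[ t ];

uint8_t* map_data = Mem_Alloc( total, 1 );   // one block backs every lump set
uint8_t* cursor   = map_data;

for ( int32_t t = 0; t < MAP_LUMP_COUNT; ++t )
{
    // read the whole lump set in one call
    fseek( mapfile, mlumps[ t ].address_offset, SEEK_SET );
    fread( cursor, sizes[ t ], ( size_t ) lump_counts[ t ], mapfile );

    // point the matching map_obj_t member at this region, e.g.
    // map->things = ( thing_t* ) cursor; when t == LUMP_THINGS
    cursor += ( size_t ) lump_counts[ t ] * sizes[ t ];
}

One nice side effect of this layout is that freeing the map becomes a single release of map_data rather than one call per buffer.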

Regarding "c - Is it worth abstracting this process out?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/18039792/
