
java - Is there a way to find out how long Kafka takes to serialize data?

Reposted. Author: 行者123  Updated: 2023-11-30 05:23:40

I want to measure the time Kafka takes to serialize different data formats, but I'm not sure whether I can do this on my side, since I believe serialization happens on the Kafka side. If it is possible, how would I do it? Is serialization done after message.send()? I have also looked through the available Kafka monitoring metrics, but found nothing related to this in the documentation. I considered request-latency-avg as a possible metric, but its value seems too high to represent only the serialization step. Can anyone suggest an approach?

Best Answer

Kafka ships with built-in serializers and deserializers for several formats, such as Strings, Long, ByteArrays and ByteBuffers, and the community provides serializers and deserializers for JSON, ProtoBuf and Avro.
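Since serialization happens client-side inside the producer (the configured serializer is invoked as part of send(), before the record is batched), one way to measure it is to call a serializer directly in a timed loop. Below is a minimal sketch; the payload, iteration counts, and the UTF-8 lambda standing in for Kafka's StringSerializer (which UTF-8-encodes the value internally) are illustrative assumptions, not the asker's setup:

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

public class SerializerTiming {
    // Times a serialization function over many iterations and returns the
    // average cost in nanoseconds per call.
    static long averageNanos(Function<String, byte[]> serialize, String payload, int iterations) {
        // Warm up so the measured loop reflects steady-state (JIT-compiled) performance.
        for (int i = 0; i < 50_000; i++) {
            serialize.apply(payload);
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            serialize.apply(payload);
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // Stand-in for Kafka's StringSerializer, which UTF-8-encodes the value.
        // With kafka-clients on the classpath you could pass the real thing instead:
        //   msg -> new org.apache.kafka.common.serialization.StringSerializer()
        //              .serialize("my-topic", msg)
        Function<String, byte[]> utf8 = s -> s.getBytes(StandardCharsets.UTF_8);
        long avg = averageNanos(utf8, "example-message-payload", 1_000_000);
        System.out.println("avg serialization time: " + avg + " ns/op");
    }
}
```

For results you intend to compare across formats, a proper harness such as JMH is more reliable than a hand-rolled loop, since it controls for JIT and dead-code elimination.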

If what you care about is serialization and deserialization performance, you can check the results of some benchmarks: https://labs.criteo.com/2017/05/serialization/

The author's conclusion is:

Protobuf and Thrift have similar performances, in terms of file sizes and serialization/deserialization time. The slightly better performances of Thrift did not outweigh the easier and less risky integration of Protobuf as it was already in use in our systems, thus the final choice. Protobuf also has a better documentation, whereas Thrift lacks it. Luckily there was the missing guide that helped us implement Thrift quickly for benchmarking.

https://diwakergupta.github.io/thrift-missing-guide/#_types

Avro should not be used if your objects are small. But it looks interesting for its speed if you have very big objects and don’t have complex data structures as they are difficult to express. Avro tools also look more targeted at the Java world than cross-language development. The C# implementation’s bugs and limitations are quite frustrating.

Regarding "java - Is there a way to find out how long Kafka takes to serialize data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59086033/
