I have written a Kafka consumer and producer in C using the librdkafka library. The Kafka broker version is kafka_2.12-2.3.0. The producer publishes messages successfully, and the dr_msg_cb callback confirms delivery. The consumer, however, never receives any messages from the broker. Can anyone help me debug this further?
I can see that the TCP connection from the consumer to the broker is established, but tcpdump shows the broker never sends any data to the consumer. I enabled debugging in the consumer; the log output is below.
[2019 Nov 8 19:18:09.458553135:155:E:logger:1741] TID 05 : [LOG_TRACE]:RDKAFKA-7-SSL: rdkafka#consumer-2: [thrd:app]: Loading CA certificate(s) from file /mnt/ifc/cfg/data/securedata/clientcerts/kafka/ApicCa.crt
[2019 Nov 8 19:18:09.458880860:156:E:logger:1741] TID 05 : [LOG_TRACE]:RDKAFKA-7-SSL: rdkafka#consumer-2: [thrd:app]: Loading certificate from file /mnt/ifc/cfg/data/securedata/clientcerts/kafka/KafkaClient.crt
[2019 Nov 8 19:18:09.459151178:157:E:logger:1741] TID 05 : [LOG_TRACE]:RDKAFKA-7-SSL: rdkafka#consumer-2: [thrd:app]: Loading private key file from /mnt/ifc/cfg/data/securedata/clientcerts/kafka/KafkaClient8.key
[2019 Nov 8 19:18:09.459583515:158:E:logger:1741] TID 06 : [LOG_TRACE]:RDKAFKA-7-BRKMAIN: rdkafka#consumer-2: [thrd::0/internal]: :0/internal: Enter main broker thread
[2019 Nov 8 19:18:09.459589163:159:E:logger:1741] TID 06 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd::0/internal]: :0/internal: Broker changed state INIT -> UP
[2019 Nov 8 19:18:09.459593374:160:E:logger:1741] TID 06 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd::0/internal]: Broadcasting state change
[2019 Nov 8 19:18:09.459608395:161:E:logger:1741] TID 07 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:main]: Broadcasting state change
[2019 Nov 8 19:18:09.459708091:162:E:logger:1741] TID 05 : [LOG_TRACE]:RDKAFKA-7-BROKER: rdkafka#consumer-2: [thrd:app]: ssl://10.0.0.1:9092/bootstrap: Added new broker with NodeId -1
[2019 Nov 8 19:18:09.459718029:163:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BRKMAIN: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Enter main broker thread
[2019 Nov 8 19:18:09.459723538:164:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-CONNECT: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: broker in state INIT connecting
[2019 Nov 8 19:18:09.459918518:165:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-CONNECT: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Connecting to ipv4#10.0.0.1:9092 (ssl) with socket 34
[2019 Nov 8 19:18:09.460017515:166:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Broker changed state INIT -> CONNECT
[2019 Nov 8 19:18:09.460021977:167:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: Broadcasting state change
[2019 Nov 8 19:18:09.460228677:168:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-CONNECT: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Connected to ipv4#10.0.0.1:9092
[2019 Nov 8 19:18:09.790145695:169:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-SSLVERIFY: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Broker SSL certificate verified
[2019 Nov 8 19:18:09.790151895:170:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-CONNECTED: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Connected (#1)
[2019 Nov 8 19:18:09.790168266:171:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-FEATURE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Updated enabled protocol features +ApiVersion to ApiVersion
[2019 Nov 8 19:18:09.790172810:172:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Broker changed state CONNECT -> APIVERSION_QUERY
[2019 Nov 8 19:18:09.790176880:173:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: Broadcasting state change
[2019 Nov 8 19:18:09.790888559:174:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-FEATURE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Updated enabled protocol features to MsgVer1,ApiVersion,BrokerBalancedConsumer,ThrottleTime,Sasl,SaslHandshake,BrokerGroupCoordinator,LZ4,OffsetTime,MsgVer2
[2019 Nov 8 19:18:09.790893525:175:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: Broker changed state APIVERSION_QUERY -> UP
[2019 Nov 8 19:18:09.790897645:176:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: Broadcasting state change
[2019 Nov 8 19:18:09.791643149:177:E:logger:1741] TID 07 : [LOG_TRACE]:RDKAFKA-7-CLUSTERID: rdkafka#consumer-2: [thrd:main]: ssl://10.0.0.1:9092/bootstrap: ClusterId update "" -> "r7Us-jYGQRq34re8owKyJA"
[2019 Nov 8 19:18:09.791654890:178:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-UPDATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/bootstrap: NodeId changed from -1 to 1
[2019 Nov 8 19:18:09.791663562:179:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-UPDATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/1: Name changed from ssl://10.0.0.1:9092/bootstrap to ssl://10.0.0.1:9092/1
[2019 Nov 8 19:18:09.791668295:180:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-LEADER: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/1: Mapped 0 partition(s) to broker
[2019 Nov 8 19:18:09.791671709:181:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/1: Broker changed state UP -> UPDATE
[2019 Nov 8 19:18:09.791675360:182:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: Broadcasting state change
[2019 Nov 8 19:18:09.791692544:183:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-STATE: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/1: Broker changed state UPDATE -> UP
[2019 Nov 8 19:18:09.791696027:184:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-BROADCAST: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: Broadcasting state change
The consumer code is shown below.
static void
msg_consume(rd_kafka_message_t *rkmessage, void *opaque)
{
    if (rkmessage == NULL) {
        return;
    }
    if (rkmessage->err) {
        if (rkmessage->err == RD_KAFKA_RESP_ERR__PARTITION_EOF) {
            DEBUG_PRINT(DBG_TRACE,
                        "%% Consumer reached end of %s [%"PRId32"] "
                        "message queue at offset %"PRId64"\n",
                        (rkmessage->rkt) ?
                            rd_kafka_topic_name(rkmessage->rkt) : "NULL",
                        rkmessage->partition, rkmessage->offset);
            return;
        }
        if (rkmessage->err == RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION ||
            rkmessage->err == RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC) {
            return;
        }
        return;
    }
    if (rkmessage->key_len) {
        DEBUG_PRINT(DBG_TRACE, "Key: %.*s\n", (int)rkmessage->key_len,
                    (char *)rkmessage->key);
    }
    DEBUG_PRINT(DBG_TRACE, "%.*s\n", (int)rkmessage->len,
                (char *)rkmessage->payload);
}

syserr_t
kafka_consumer_create()
{
    rd_kafka_topic_conf_t *consTopicCfg;
    rd_kafka_conf_t *conf = NULL;
    rd_kafka_t *rk = NULL;
    char errstr[512];
    rd_kafka_resp_err_t errCode;

    conf = rd_kafka_conf_new();
    if (!conf) {
        return ~SUCCESS;
    }
    rd_kafka_conf_set_log_cb(conf, logger);
    if (rd_kafka_conf_set(conf, "debug",
                          "generic,broker,topic,security,msg,fetch",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "bootstrap.servers", broker_list[0],
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "security.protocol", "SSL",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "ssl.certificate.location", kafka_clnt_cert,
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "ssl.key.location", kafka_clnt_key,
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "ssl.ca.location", kafka_apic_cert,
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "auto.commit.enable", "true",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "auto.commit.interval.ms", "500",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    if (rd_kafka_conf_set(conf, "group.id", "consumerGroup",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return ~SUCCESS;
    }
    consTopicCfg = rd_kafka_topic_conf_new();
    if (RD_KAFKA_CONF_OK != rd_kafka_topic_conf_set(consTopicCfg,
                                                    "auto.offset.reset",
                                                    "latest", errstr,
                                                    sizeof(errstr))) {
        return ~SUCCESS;
    }
    rd_kafka_conf_set_default_topic_conf(conf, consTopicCfg);
    rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
    if (!rk) {
        return ~SUCCESS;
    }
    // conf = NULL; // Disown conf as rd_kafka_new() has ownership now.
    const char *ep_topic = "eprecords";
    rd_kafka_topic_partition_list_t *tp_list =
        rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_t *tpObj =
        rd_kafka_topic_partition_list_add(tp_list, ep_topic,
                                          RD_KAFKA_PARTITION_UA);
    if (NULL == tpObj) {
        return ~SUCCESS;
    }
    errCode = rd_kafka_subscribe(rk, tp_list);
    if (errCode != RD_KAFKA_RESP_ERR_NO_ERROR) {
        return ~SUCCESS;
    }
    rd_kafka_topic_partition_list_destroy(tp_list);
    while (1) {
        rd_kafka_message_t *msg = rd_kafka_consumer_poll(rk, 1000);
        if (msg != NULL) {
            if (msg->err == RD_KAFKA_RESP_ERR_NO_ERROR) {
                msg_consume(msg, NULL);
            }
            rd_kafka_message_destroy(msg);
        }
        rd_kafka_poll(rk, 0);
    }
}
I expected msg_consume() to be called whenever the producer publishes data. I am not sure whether the following log message points to the root of the problem:
[2019 Nov 8 19:18:09.791668295:180:E:logger:1741] TID 08 : [LOG_TRACE]:RDKAFKA-7-LEADER: rdkafka#consumer-2: [thrd:ssl://10.0.0.1:9092/bootstrap]: ssl://10.0.0.1:9092/1: Mapped 0 partition(s) to broker
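The "Mapped 0 partition(s) to broker" line suggests the consumer may never be assigned any partitions. One way to confirm this, sketched below under the assumption that the same `conf` object is still available before `rd_kafka_new()`, is to install a rebalance callback and log what (if anything) gets assigned. This is a hypothetical debug aid, not code from the question:

```c
/* Debug sketch: log partition assignments as they happen.
 * Register with rd_kafka_conf_set_rebalance_cb() before rd_kafka_new(). */
static void rebalance_cb(rd_kafka_t *rk, rd_kafka_resp_err_t err,
                         rd_kafka_topic_partition_list_t *partitions,
                         void *opaque)
{
    int i;

    if (err == RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS) {
        for (i = 0; i < partitions->cnt; i++)
            fprintf(stderr, "assigned %s [%"PRId32"]\n",
                    partitions->elems[i].topic,
                    partitions->elems[i].partition);
        rd_kafka_assign(rk, partitions);   /* accept the assignment */
    } else {
        fprintf(stderr, "assignment revoked (%s)\n", rd_kafka_err2str(err));
        rd_kafka_assign(rk, NULL);         /* drop all partitions */
    }
}

/* ... in kafka_consumer_create(), before rd_kafka_new():
 *     rd_kafka_conf_set_rebalance_cb(conf, rebalance_cb);
 */
```

If the callback never fires with an assignment, the problem is on the group/metadata side (topic name, group coordination) rather than in the message fetch path.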
Best answer
The program started working after I switched to a different consumption pattern. The code follows.
static void logger(const rd_kafka_t *rk, int level,
                   const char *fac, const char *buf)
{
    PRINT_ERR("RDKAFKA-%i-%s: %s: %s", level, fac,
              rk ? rd_kafka_name(rk) : NULL, buf);
}

rd_kafka_t *
kafka_consumer_create()
{
    rd_kafka_conf_t *conf = NULL;
    rd_kafka_t *rk = NULL;
    char errstr[512];

    conf = rd_kafka_conf_new();
    if (!conf) {
        return NULL;
    }
    rd_kafka_conf_set_log_cb(conf, logger);
    if (rd_kafka_conf_set(conf, "enable.partition.eof", "true",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        PRINT_ERR("enable partition eof failed %s", errstr);
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "client.id", "kafka-python-Python 2.7.16",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        PRINT_ERR("DR_MSG : failed to set client id %s", errstr);
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "debug",
                          "generic,broker,topic,security,msg,fetch",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "enable.auto.commit", "true",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "bootstrap.servers", "192.168.1.1",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "security.protocol", "SSL",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "ssl.certificate.location",
                          "/root/kafka/KafkaClient.crt",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "ssl.key.location",
                          "/root/kafka/KafkaClient8.key",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "ssl.ca.location", "/root/kafka/ApicCa.crt",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "group.id", "test-consumer-group",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    if (rd_kafka_conf_set(conf, "auto.offset.reset", "latest",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        return NULL;
    }
    rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
    if (!rk) {
        return NULL;
    }
    conf = NULL; // Disown conf as rd_kafka_new() has ownership now.
    return rk;
}
static void
msg_consume(rd_kafka_message_t *rkmessage, void *opaque)
{
    if (rkmessage == NULL) {
        PRINT_ERR("Aentp rkmessage error");
        return;
    }
    if (rkmessage->err) {
        if (rkmessage->err == RD_KAFKA_RESP_ERR__PARTITION_EOF) {
            PRINT_ERR("AT2");
            DEBUG_PRINT(DBG_AENTP_TRACE,
                        "%% Consumer reached end of %s [%"PRId32"] "
                        "message queue at offset %"PRId64"\n",
                        (rkmessage->rkt) ?
                            rd_kafka_topic_name(rkmessage->rkt) : "NULL",
                        rkmessage->partition, rkmessage->offset);
            return;
        }
        if (rkmessage->err == RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION ||
            rkmessage->err == RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC) {
            PRINT_ERR("Unknown partition or unknown topic");
            return;
        }
        return;
    }
    if (rkmessage->key_len) {
        DEBUG_PRINT("Key: %.*s\n", (int)rkmessage->key_len,
                    (char *)rkmessage->key);
    }
    DEBUG_PRINT("%.*s\n", (int)rkmessage->len, (char *)rkmessage->payload);
}
void *
kafka_consumer_thread(void *ptr)
{
    rd_kafka_t *rk = kafka_consumer_create();
    rd_kafka_topic_t *rkt = NULL;
    rd_kafka_topic_conf_t *topic_conf;
    rd_kafka_resp_err_t err;
    int64_t seek_offset = 0;
    char errstr[512];

    if (rk == NULL) {
        return NULL;
    }
    DEBUG_PRINT("Starting Kafka consumer thread");
    topic_conf = rd_kafka_topic_conf_new();
    if (RD_KAFKA_CONF_OK != rd_kafka_topic_conf_set(topic_conf,
                                                    "auto.offset.reset",
                                                    "earliest", errstr,
                                                    sizeof(errstr))) {
        AENTP_PRINT_ERR("rd_kafka_topic_conf_set() failed with error: %s\n",
                        errstr);
        return NULL;
    }
    rkt = rd_kafka_topic_new(rk, "records", topic_conf);
    topic_conf = NULL; /* Now owned by topic */
    /* Start consuming */
    if (rd_kafka_consume_start(rkt, 0, RD_KAFKA_OFFSET_END) == -1) {
        rd_kafka_resp_err_t err = rd_kafka_last_error();
        PRINT_ERR("%% Failed to start consuming: %s\n", rd_kafka_err2str(err));
        if (err == RD_KAFKA_RESP_ERR__INVALID_ARG) {
            APRINT_ERR("%% Broker based offset storage "
                       "requires a group.id, "
                       "add: -X group.id=yourGroup\n");
        }
        return NULL;
    }
    while (1) {
        rd_kafka_message_t *rkmessage;

        /* Poll for errors, etc. */
        rd_kafka_poll(rk, 0);
        /*
         * Consume single message.
         * See rdkafka_performance.c for high speed
         * consuming of messages.
         */
        rkmessage = rd_kafka_consume(rkt, 0, 1000);
        if (!rkmessage) /* timeout */
            continue;
        msg_consume(rkmessage, NULL);
        /* Return message to rdkafka */
        rd_kafka_message_destroy(rkmessage);
        /* XXX Do we need the seek??? */
        seek_offset = 0;
        if (seek_offset) {
            err = rd_kafka_seek(rkt, 0, seek_offset, 2000);
            if (err)
                AENTP_PRINT_ERR("Seek failed: %s\n", rd_kafka_err2str(err));
            else
                printf("Seeked to %"PRId64"\n", seek_offset);
            seek_offset = 0;
        }
        /* XXX Do we need the seek??? */
    }
    return NULL;
}
I would still like to know why the earlier approach did not work.
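One way to investigate the earlier subscribe-based version is to check periodically whether the group ever assigned partitions to this consumer; `rd_kafka_assignment()` is part of the public librdkafka API. A sketch, assuming the `rk` handle is available in the poll loop:

```c
/* Debug sketch: print how many partitions the group has assigned us.
 * Call from the poll loop; a count that stays at 0 means the consumer
 * never completed a group join or the topic is unknown to the broker. */
static void dump_assignment(rd_kafka_t *rk)
{
    rd_kafka_topic_partition_list_t *assigned = NULL;

    if (rd_kafka_assignment(rk, &assigned) == RD_KAFKA_RESP_ERR_NO_ERROR &&
        assigned != NULL) {
        fprintf(stderr, "currently assigned: %d partition(s)\n",
                assigned->cnt);
        rd_kafka_topic_partition_list_destroy(assigned);
    }
}
```

If the assignment stays empty while "Mapped 0 partition(s) to broker" keeps appearing, likely suspects are a topic name mismatch, the topic having no partitions on that broker, or the group coordinator never completing the join.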
Regarding "c - librdkafka consumer not receiving messages from the broker", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58797316/