This article collects Java code examples of the kafka.utils.ZkUtils.apply() method and shows how ZkUtils.apply() is used in practice. The examples are drawn from curated open-source projects surfaced through platforms such as GitHub, Stack Overflow, and Maven, so they make useful reference material. Details of the ZkUtils.apply() method:

Package path: kafka.utils.ZkUtils
Class name: ZkUtils
Method name: apply
Description: none provided
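ZkUtils.apply() is the factory method on the Scala companion object of kafka.utils.ZkUtils; from Java it is called like a static method. As the examples below illustrate, it has two commonly used overloads: one that builds its own connection from a ZooKeeper connect string, and one that wraps an existing ZkClient. A minimal sketch of both (the address and timeout values are assumptions for illustration):

import org.I0Itec.zkclient.ZkClient;
import kafka.utils.ZkUtils;

// Overload 1: connect string, session timeout, connection timeout, isZkSecurityEnabled flag;
// ZkUtils creates the underlying ZkClient itself
ZkUtils fromString = ZkUtils.apply("localhost:2181", 30000, 30000, false);
fromString.close();

// Overload 2: wrap an existing ZkClient (the pattern used by the Flink and Gobblin examples below)
ZkClient zkClient = new ZkClient("localhost:2181", 30000, 30000);
ZkUtils fromClient = ZkUtils.apply(zkClient, false);
fromClient.close();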
Code example source: linkedin/cruise-control

public static ZkUtils createZkUtils(String zkConnect) {
  return ZkUtils.apply(zkConnect, ZK_SESSION_TIMEOUT, ZK_CONNECTION_TIMEOUT, IS_ZK_SECURITY_ENABLED);
}
Code example source: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to check for existence
 * @return {@code true} if and only if the given topic exists
 */
public static boolean topicExists(String zkServers, String topic) {
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    return AdminUtils.topicExists(zkUtils, topic);
  } finally {
    zkUtils.close();
  }
}
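Under the hood, AdminUtils.topicExists() checks for the topic's registration znode. With a ZkUtils instance already in scope, a hedged equivalent (assuming Kafka's standard /brokers/topics/<name> path layout) is:

// Direct znode check; assumes zkUtils is an open ZkUtils as in the method above
boolean exists = zkUtils.pathExists("/brokers/topics/" + topic);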
Code example source: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to delete, if it exists
 */
public static void deleteTopic(String zkServers, String topic) {
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    if (AdminUtils.topicExists(zkUtils, topic)) {
      log.info("Deleting topic {}", topic);
      AdminUtils.deleteTopic(zkUtils, topic);
      log.info("Deleted Zookeeper topic {}", topic);
    } else {
      log.info("No need to delete topic {} as it does not exist", topic);
    }
  } finally {
    zkUtils.close();
  }
}
Code example source: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to create (if not already existing)
 * @param partitions number of topic partitions
 * @param topicProperties optional topic config properties
 */
public static void maybeCreateTopic(String zkServers,
                                    String topic,
                                    int partitions,
                                    Properties topicProperties) {
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    if (AdminUtils.topicExists(zkUtils, topic)) {
      log.info("No need to create topic {} as it already exists", topic);
    } else {
      log.info("Creating topic {} with {} partition(s)", topic, partitions);
      try {
        AdminUtils.createTopic(
            zkUtils, topic, partitions, 1, topicProperties, RackAwareMode.Enforced$.MODULE$);
        log.info("Created topic {}", topic);
      } catch (TopicExistsException re) {
        log.info("Topic {} already exists", topic);
      }
    }
  } finally {
    zkUtils.close();
  }
}
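Note the inner catch: even after topicExists() returns false, another client may create the topic first, so the TopicExistsException guard makes the method safe to call concurrently. A hypothetical invocation (connect string, topic name, and partition count are made up for illustration):

// Ensure a 4-partition, replication-factor-1 topic exists, using default topic configs
maybeCreateTopic("localhost:2181", "events", 4, new Properties());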
Code example source: apache/flink

public ZkUtils getZkUtils() {
  LOG.info("In getZKUtils:: zookeeperConnectionString = {}", zookeeperConnectionString);
  ZkClient creator = new ZkClient(zookeeperConnectionString,
      Integer.valueOf(standardProps.getProperty("zookeeper.session.timeout.ms")),
      Integer.valueOf(standardProps.getProperty("zookeeper.connection.timeout.ms")),
      new ZooKeeperStringSerializer());
  return ZkUtils.apply(creator, false);
}
Code example source: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param groupID consumer group to update
 * @param offsets mapping of (topic and) partition to offset to push to Zookeeper
 */
public static void setOffsets(String zkServers,
                              String groupID,
                              Map<Pair<String,Integer>,Long> offsets) {
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    offsets.forEach((topicAndPartition, offset) -> {
      String topic = topicAndPartition.getFirst();
      int partition = topicAndPartition.getSecond();
      String partitionOffsetPath = "/consumers/" + groupID + "/offsets/" + topic + "/" + partition;
      zkUtils.updatePersistentPath(partitionOffsetPath,
                                   Long.toString(offset),
                                   ZkUtils$.MODULE$.defaultAcls(false, ""));
    });
  } finally {
    zkUtils.close();
  }
}
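The znode layout written here, /consumers/<group>/offsets/<topic>/<partition>, is the classic ZooKeeper-based consumer offset schema. A hypothetical call (Pair is Oryx's own pair type, assumed to take (first, second) in its constructor):

// Push offset 100 for partition 0 of topic "events" on behalf of group "my-group"
Map<Pair<String,Integer>,Long> offsets = new HashMap<>();
offsets.put(new Pair<>("events", 0), 100L);
setOffsets("localhost:2181", "my-group", offsets);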
Code example source: linkedin/kafka-monitor

/**
 * @param zkUrl zookeeper connection url
 * @return number of brokers in this cluster
 */
public static int getBrokerCount(String zkUrl) {
  ZkUtils zkUtils = ZkUtils.apply(zkUrl, ZK_SESSION_TIMEOUT_MS, ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
  try {
    return zkUtils.getAllBrokersInCluster().size();
  } finally {
    zkUtils.close();
  }
}
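getAllBrokersInCluster() is backed by the ephemeral registration znodes under /brokers/ids, so the same count can be read as a plain child listing; a hedged equivalent with zkUtils in scope:

// Each live broker registers an ephemeral child znode under /brokers/ids
int brokerCount = zkUtils.getChildrenParentMayNotExist("/brokers/ids").size();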
Code example source: apache/incubator-gobblin

public void stopClients() throws IOException {
  for (Map.Entry<String, KafkaConsumerSuite> consumerSuiteEntry : _topicConsumerMap.entrySet()) {
    consumerSuiteEntry.getValue().shutdown();
    AdminUtils.deleteTopic(ZkUtils.apply(_kafkaServerSuite.getZkClient(), false),
        consumerSuiteEntry.getKey());
  }
}
Code example source: linkedin/kafka-monitor

/**
 * Read number of partitions for the given topic on the specified zookeeper
 * @param zkUrl zookeeper connection url
 * @param topic topic name
 *
 * @return the number of partitions of the given topic
 */
public static int getPartitionNumForTopic(String zkUrl, String topic) {
  ZkUtils zkUtils = ZkUtils.apply(zkUrl, ZK_SESSION_TIMEOUT_MS, ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
  try {
    Seq<String> topics = scala.collection.JavaConversions.asScalaBuffer(Arrays.asList(topic));
    return zkUtils.getPartitionsForTopics(topics).apply(topic).size();
  } catch (NoSuchElementException e) {
    return 0;
  } finally {
    zkUtils.close();
  }
}
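The JavaConversions implicit-conversion API used above is deprecated in newer Scala versions; an explicit JavaConverters-based form of the same call would look like this (a sketch, assuming a Scala 2.11/2.12 runtime on the classpath):

import scala.collection.JavaConverters;

// Explicit Java -> Scala conversion instead of the implicit JavaConversions API
Seq<String> topics = JavaConverters.asScalaBufferConverter(Arrays.asList(topic)).asScala();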
Code example source: linkedin/kafka-monitor

ZkUtils zkUtils = ZkUtils.apply(zkUrl, ZK_SESSION_TIMEOUT_MS, ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
try {
  if (AdminUtils.topicExists(zkUtils, topic)) {
    // ... (the body of this excerpt was truncated at the source)
  }
} finally {
  zkUtils.close();
}
Code example source: apache/incubator-gobblin

public void provisionTopic(String topic) {
  if (_topicConsumerMap.containsKey(topic)) {
    // nothing to do: return
  } else {
    // provision topic
    AdminUtils.createTopic(ZkUtils.apply(_kafkaServerSuite.getZkClient(), false),
        topic, 1, 1, new Properties());
    List<KafkaServer> servers = new ArrayList<>();
    servers.add(_kafkaServerSuite.getKafkaServer());
    kafka.utils.TestUtils.waitUntilMetadataIsPropagated(
        scala.collection.JavaConversions.asScalaBuffer(servers), topic, 0, 5000);
    KafkaConsumerSuite consumerSuite = new KafkaConsumerSuite(_kafkaServerSuite.getZkConnectString(), topic);
    _topicConsumerMap.put(topic, consumerSuite);
  }
}
Code example source: apache/metron

/**
 * Bean for ZooKeeper
 */
@Bean
public ZkUtils zkUtils() {
  return ZkUtils.apply(zkClient, false);
}
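The bean wraps an injected zkClient field that is not shown in the excerpt. A hedged sketch of how such a companion ZkClient bean is typically declared in Spring (the property name, timeout values, and serializer choice are all assumptions):

@Bean
public ZkClient zkClient(@Value("${zookeeper.url}") String zookeeperUrl) {
  // 30s session/connection timeouts; Kafka's string serializer matches how these znodes are written
  return new ZkClient(zookeeperUrl, 30000, 30000, ZKStringSerializer$.MODULE$);
}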
Code example source: uber/chaperone

private static void putOffsetInfoIntoZk(String groupId, Map<String, Map<Integer, Long>> topicOffsetsMap) {
  ZkUtils zkUtils =
      ZkUtils.apply(AuditConfig.INGESTER_ZK_CONNECT, Integer.valueOf(AuditConfig.INGESTER_ZK_SESSION_TIMEOUT_MS),
          Integer.valueOf(AuditConfig.INGESTER_ZK_SESSION_TIMEOUT_MS), false);
  try {
    for (Map.Entry<String, Map<Integer, Long>> topicEntry : topicOffsetsMap.entrySet()) {
      String zkPath = String.format("%s/%s/offsets/%s/", ZkUtils.ConsumersPath(), groupId, topicEntry.getKey());
      for (Map.Entry<Integer, Long> offsetEntry : topicEntry.getValue().entrySet()) {
        logger.info("Put offset={} to partition={} with znode path={}", offsetEntry.getValue(), offsetEntry.getKey(),
            zkPath + offsetEntry.getKey());
        zkUtils.updatePersistentPath(zkPath + offsetEntry.getKey(), offsetEntry.getValue().toString(),
            zkUtils.DefaultAcls());
      }
    }
  } catch (Exception e) {
    logger.error("Got exception to put offset, with zkPathPrefix={}",
        String.format("%s/%s/offsets", ZkUtils.ConsumersPath(), groupId));
    throw e;
  } finally {
    zkUtils.close();
  }
}
Code example source: uber/chaperone

public KafkaBrokerTopicObserver(String brokerClusterName, String zkString) {
  LOGGER.info("Trying to init KafkaBrokerTopicObserver {} with ZK: {}", brokerClusterName,
      zkString);
  _kakfaClusterName = brokerClusterName;
  _zkUtils = ZkUtils.apply(zkString, 30000, 30000, false);
  _zkClient = ZkUtils.createZkClient(zkString, 30000, 30000);
  _zkClient.subscribeChildChanges(KAFKA_TOPICS_PATH, this);
  registerMetric();
  executorService.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
      tryToRefreshCache();
    }
  }, 0, 600, TimeUnit.SECONDS);
}
Code example source: uber/chaperone

private static void removeOffsetInfoFromZk(final String groupId) {
  ZkUtils zkUtils =
      ZkUtils.apply(AuditConfig.INGESTER_ZK_CONNECT, Integer.valueOf(AuditConfig.INGESTER_ZK_SESSION_TIMEOUT_MS),
          Integer.valueOf(AuditConfig.INGESTER_ZK_SESSION_TIMEOUT_MS), false);
  try {
    String[] targets = new String[] {"offsets", "owners"};
    for (String target : targets) {
      String zkPath = String.format("%s/%s/%s", ZkUtils.ConsumersPath(), groupId, target);
      logger.info("Remove {} with znode path={}", target, zkPath);
      zkUtils.deletePathRecursive(zkPath);
    }
  } catch (Exception e) {
    logger.error("Got exception to remove offsets or owners from zookeeper, with zkPathPrefix={}",
        String.format("%s/%s/", ZkUtils.ConsumersPath(), groupId));
    throw e;
  } finally {
    zkUtils.close();
  }
}
Code example source: uber/chaperone

public AutoTopicWhitelistingManager(KafkaBrokerTopicObserver srcKafkaTopicObserver,
                                    KafkaBrokerTopicObserver destKafkaTopicObserver,
                                    HelixMirrorMakerManager helixMirrorMakerManager,
                                    String patternToExcludeTopics,
                                    int refreshTimeInSec,
                                    int initWaitTimeInSec) {
  _srcKafkaTopicObserver = srcKafkaTopicObserver;
  _destKafkaTopicObserver = destKafkaTopicObserver;
  _helixMirrorMakerManager = helixMirrorMakerManager;
  _patternToExcludeTopics = patternToExcludeTopics;
  _refreshTimeInSec = refreshTimeInSec;
  _initWaitTimeInSec = initWaitTimeInSec;
  _zkUtils = ZkUtils.apply(_helixMirrorMakerManager.getHelixZkURL(), 30000, 30000, false);
  _zkClient = ZkUtils.createZkClient(_helixMirrorMakerManager.getHelixZkURL(), 30000, 30000);
  _blacklistedTopicsZPath =
      String.format("/%s/BLACKLISTED_TOPICS", _helixMirrorMakerManager.getHelixClusterName());
}
Code example source: uber/AthenaX

public static boolean createKafkaTopicIfNecessary(String brokerUri, int replFactor, int numPartitions, String topic)
    throws IOException {
  URI zkUri = URI.create(brokerUri);
  Preconditions.checkArgument("zk".equals(zkUri.getScheme()));
  String zkServerList = zkUri.getAuthority() + zkUri.getPath();
  ZkUtils zkUtils = ZkUtils.apply(zkServerList, ZK_SESSION_TIMEOUT_MS,
      ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
  try {
    if (AdminUtils.topicExists(zkUtils, topic)) {
      return false;
    }
    try {
      AdminUtils.createTopic(zkUtils, topic, numPartitions, replFactor, new Properties());
    } catch (TopicExistsException ignored) {
      return false;
    } catch (RuntimeException e) {
      throw new IOException(e);
    }
  } finally {
    if (zkUtils != null) {
      zkUtils.close();
    }
  }
  return true;
}
Code example source: apache/phoenix

@Before
public void setUp() throws IOException, SQLException {
  // setup Zookeeper
  zkServer = new EmbeddedZookeeper();
  String zkConnect = ZKHOST + ":" + zkServer.port();
  zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer$.MODULE$);
  ZkUtils zkUtils = ZkUtils.apply(zkClient, false);
  // setup Broker
  Properties brokerProps = new Properties();
  brokerProps.setProperty("zookeeper.connect", zkConnect);
  brokerProps.setProperty("broker.id", "0");
  brokerProps.setProperty("log.dirs",
      Files.createTempDirectory("kafka-").toAbsolutePath().toString());
  brokerProps.setProperty("listeners", "PLAINTEXT://" + BROKERHOST + ":" + BROKERPORT);
  KafkaConfig config = new KafkaConfig(brokerProps);
  Time mock = new MockTime();
  kafkaServer = TestUtils.createServer(config, mock);
  kafkaServer.startup();
  // create topic
  AdminUtils.createTopic(zkUtils, TOPIC, 1, 1, new Properties());
  pConsumer = new PhoenixConsumer();
  Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
  conn = DriverManager.getConnection(getUrl(), props);
}
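All of the examples above use the old ZkUtils/AdminUtils administration path, which was deprecated in Kafka 2.x and later removed entirely. For current Kafka versions, the same existence check and topic creation go through org.apache.kafka.clients.admin.AdminClient instead of ZooKeeper; a minimal self-contained sketch (the bootstrap address and topic name are assumptions):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class AdminClientExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // brokers, not ZooKeeper
    try (AdminClient admin = AdminClient.create(props)) {
      // Replacement for AdminUtils.topicExists(zkUtils, topic)
      boolean exists = admin.listTopics().names().get().contains("events");
      if (!exists) {
        // Replacement for AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties())
        admin.createTopics(Collections.singleton(new NewTopic("events", 1, (short) 1))).all().get();
      }
    }
  }
}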