This article collects a number of Java code examples for the kafka.utils.ZkUtils class and shows how the class is used in practice. The examples are extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references.

Details of the ZkUtils class are as follows:

Package path: kafka.utils.ZkUtils
Class name: ZkUtils
Class description: none provided
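Before the individual examples, here is a minimal, self-contained sketch of the obtain-use-close pattern that recurs throughout them. The connection string, timeouts, and topic name are placeholder assumptions, not taken from any of the projects below.

import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode;
import kafka.utils.ZkUtils;

public class ZkUtilsQuickstart {
    public static void main(String[] args) {
        // Placeholder connection settings; adjust for your cluster.
        String zkConnect = "localhost:2181";
        int sessionTimeoutMs = 30000;
        int connectionTimeoutMs = 30000;

        // ZkUtils.apply creates the underlying ZkClient and ZkConnection internally.
        ZkUtils zkUtils = ZkUtils.apply(zkConnect, sessionTimeoutMs, connectionTimeoutMs,
            false /* isZkSecurityEnabled */);
        try {
            if (!AdminUtils.topicExists(zkUtils, "demo-topic")) {
                // 1 partition, replication factor 1, no extra topic-level config.
                AdminUtils.createTopic(zkUtils, "demo-topic", 1, 1,
                    new Properties(), RackAwareMode.Disabled$.MODULE$);
            }
        } finally {
            zkUtils.close(); // always release the ZooKeeper session
        }
    }
}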
Code example from: apache/flink

public ZkUtils getZkUtils() {
    LOG.info("In getZKUtils:: zookeeperConnectionString = {}", zookeeperConnectionString);
    ZkClient creator = new ZkClient(zookeeperConnectionString,
        Integer.valueOf(standardProps.getProperty("zookeeper.session.timeout.ms")),
        Integer.valueOf(standardProps.getProperty("zookeeper.connection.timeout.ms")),
        new ZooKeeperStringSerializer());
    return ZkUtils.apply(creator, false);
}
Code example from: apache/flink

@Override
public void deleteTestTopic(String topic) {
    ZkUtils zkUtils = getZkUtils();
    try {
        LOG.info("Deleting topic {}", topic);
        // Note: this extra ZkClient is opened and closed without being used;
        // the deletion itself goes through zkUtils.
        ZkClient zk = new ZkClient(zookeeperConnectionString,
            Integer.valueOf(standardProps.getProperty("zookeeper.session.timeout.ms")),
            Integer.valueOf(standardProps.getProperty("zookeeper.connection.timeout.ms")),
            new ZooKeeperStringSerializer());
        AdminUtils.deleteTopic(zkUtils, topic);
        zk.close();
    } finally {
        zkUtils.close();
    }
}
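One caveat when reusing a snippet like this: AdminUtils.deleteTopic only marks the topic for deletion, and brokers act on it only when topic deletion is enabled. A sketch of the relevant broker property follows; the value shown is an assumption for test setups (before Kafka 1.0 the default was false).

Properties brokerProps = new Properties();
// Without this flag the topic is merely flagged for deletion and never removed.
brokerProps.setProperty("delete.topic.enable", "true");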
Code example from: apache/incubator-gobblin

ZkClient zkClient = new ZkClient(zookeeperConnect, sessionTimeoutMs, connectionTimeoutMs, ZKStringSerializer$.MODULE$);
ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperConnect), false);
int partitions = ConfigUtils.getInt(config, KafkaWriterConfigurationKeys.PARTITION_COUNT, KafkaWriterConfigurationKeys.PARTITION_COUNT_DEFAULT);
int replication = ConfigUtils.getInt(config, KafkaWriterConfigurationKeys.REPLICATION_COUNT, KafkaWriterConfigurationKeys.PARTITION_COUNT_DEFAULT);
Properties topicConfig = new Properties();
try {
    if (AdminUtils.topicExists(zkUtils, topicName)) {
        log.debug("Topic " + topicName + " already exists with replication: " + replication + " and partitions: " + partitions);
        return;
    }
    AdminUtils.createTopic(zkUtils, topicName, partitions, replication, topicConfig);
} catch (RuntimeException e) {
    throw new RuntimeException(e);
}
Code example from: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to check for existence
 * @return {@code true} if and only if the given topic exists
 */
public static boolean topicExists(String zkServers, String topic) {
    ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
    try {
        return AdminUtils.topicExists(zkUtils, topic);
    } finally {
        zkUtils.close();
    }
}
Code example from: linkedin/kafka-monitor

/**
 * @param zkUrl zookeeper connection url
 * @return number of brokers in this cluster
 */
public static int getBrokerCount(String zkUrl) {
    ZkUtils zkUtils = ZkUtils.apply(zkUrl, ZK_SESSION_TIMEOUT_MS, ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
    try {
        return zkUtils.getAllBrokersInCluster().size();
    } finally {
        zkUtils.close();
    }
}
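getAllBrokersInCluster() returns a Scala Seq, which is fine for size() as above but needs a conversion before Java iteration. A small sketch of that, with a placeholder connection string:

ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
try {
    java.util.List<kafka.cluster.Broker> brokers =
        scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllBrokersInCluster());
    for (kafka.cluster.Broker broker : brokers) {
        System.out.println(broker); // Broker#toString includes id and endpoint info
    }
} finally {
    zkUtils.close();
}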
Code example from: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to delete, if it exists
 */
public static void deleteTopic(String zkServers, String topic) {
    ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
    try {
        if (AdminUtils.topicExists(zkUtils, topic)) {
            log.info("Deleting topic {}", topic);
            AdminUtils.deleteTopic(zkUtils, topic);
            log.info("Deleted Zookeeper topic {}", topic);
        } else {
            log.info("No need to delete topic {} as it does not exist", topic);
        }
    } finally {
        zkUtils.close();
    }
}
Code example from: confluentinc/kafka-streams-examples

/**
 * Creates and starts the cluster.
 */
public void start() throws Exception {
    log.debug("Initiating embedded Kafka cluster startup");
    log.debug("Starting a ZooKeeper instance...");
    zookeeper = new ZooKeeperEmbedded();
    log.debug("ZooKeeper instance is running at {}", zookeeper.connectString());
    zkUtils = ZkUtils.apply(
        zookeeper.connectString(),
        30000,
        30000,
        JaasUtils.isZkSecurityEnabled());
    Properties effectiveBrokerConfig = effectiveBrokerConfigFrom(brokerConfig, zookeeper);
    log.debug("Starting a Kafka instance on port {} ...",
        effectiveBrokerConfig.getProperty(KafkaConfig$.MODULE$.PortProp()));
    broker = new KafkaEmbedded(effectiveBrokerConfig);
    log.debug("Kafka instance is running at {}, connected to ZooKeeper at {}",
        broker.brokerList(), broker.zookeeperConnect());
    Properties schemaRegistryProps = new Properties();
    schemaRegistryProps.put(SchemaRegistryConfig.KAFKASTORE_TIMEOUT_CONFIG, KAFKASTORE_OPERATION_TIMEOUT_MS);
    schemaRegistryProps.put(SchemaRegistryConfig.DEBUG_CONFIG, KAFKASTORE_DEBUG);
    schemaRegistryProps.put(SchemaRegistryConfig.KAFKASTORE_INIT_TIMEOUT_CONFIG, KAFKASTORE_INIT_TIMEOUT);
    schemaRegistry = new RestApp(0, zookeeperConnect(), KAFKA_SCHEMAS_TOPIC, AVRO_COMPATIBILITY_TYPE, schemaRegistryProps);
    schemaRegistry.start();
    running = true;
}
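The helper effectiveBrokerConfigFrom is not shown in the snippet. A hedged sketch of the kind of minimal settings such an embedded test broker needs, building on the zookeeper object above (the keys are standard Kafka broker properties; the values are test-only assumptions):

Properties brokerConfig = new Properties();
brokerConfig.put("zookeeper.connect", zookeeper.connectString());
brokerConfig.put("broker.id", "0");
brokerConfig.put("log.dirs", Files.createTempDirectory("kafka-log-").toAbsolutePath().toString());
brokerConfig.put("listeners", "PLAINTEXT://127.0.0.1:0"); // port 0 picks a free port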
Code example from: apache/incubator-gobblin

Properties props = new Properties();
props.setProperty(KafkaWriterConfigurationKeys.KAFKA_TOPIC, topic);
props.setProperty(KafkaWriterConfigurationKeys.REPLICATION_COUNT, topicReplicationCount);
props.setProperty(KafkaWriterConfigurationKeys.PARTITION_COUNT, topicPartitionCount);
props.setProperty(KafkaWriterConfigurationKeys.CLUSTER_ZOOKEEPER, liveZookeeper);
ZkClient zkClient = new ZkClient(
    liveZookeeper,
    sessionTimeoutMs,
    connectionTimeoutMs, // argument restored; the snippet was truncated here
    ZKStringSerializer$.MODULE$);
boolean isSecureKafkaCluster = false;
ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(liveZookeeper), isSecureKafkaCluster);
// The metadata result must be captured for the assertion below.
TopicMetadata metaData = AdminUtils.fetchTopicMetadataFromZk(topic, zkUtils);
Assert.assertEquals(metaData.partitionsMetadata().size(), Integer.parseInt(topicPartitionCount));
Code example from: apache/drill

public static void createTopicHelper(final String topicName, final int partitions) {
    Properties topicProps = new Properties();
    topicProps.put(TopicConfig.MESSAGE_TIMESTAMP_TYPE_CONFIG, "CreateTime");
    topicProps.put(TopicConfig.RETENTION_MS_CONFIG, "-1");
    ZkUtils zkUtils = new ZkUtils(zkClient,
        new ZkConnection(embeddedKafkaCluster.getZkServer().getConnectionString()), false);
    AdminUtils.createTopic(zkUtils, topicName, partitions, 1,
        topicProps, RackAwareMode.Disabled$.MODULE$);
    org.apache.kafka.common.requests.MetadataResponse.TopicMetadata fetchTopicMetadataFromZk =
        AdminUtils.fetchTopicMetadataFromZk(topicName, zkUtils);
    logger.info("Topic Metadata: " + fetchTopicMetadataFromZk);
}
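This example uses RackAwareMode.Disabled while others in this article use Enforced. For reference, kafka.admin.RackAwareMode offers three modes, accessed from Java via the Scala companion objects:

RackAwareMode disabled = RackAwareMode.Disabled$.MODULE$; // ignore rack information entirely
RackAwareMode safe = RackAwareMode.Safe$.MODULE$;         // use racks, falling back silently if rack info is incomplete
RackAwareMode enforced = RackAwareMode.Enforced$.MODULE$; // fail if rack information is incomplete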
Code example from: apache/phoenix

@Before
public void setUp() throws IOException, SQLException {
    // setup Zookeeper
    zkServer = new EmbeddedZookeeper();
    String zkConnect = ZKHOST + ":" + zkServer.port();
    zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer$.MODULE$);
    ZkUtils zkUtils = ZkUtils.apply(zkClient, false);
    // setup Broker
    Properties brokerProps = new Properties();
    brokerProps.setProperty("zookeeper.connect", zkConnect);
    brokerProps.setProperty("broker.id", "0");
    brokerProps.setProperty("log.dirs",
        Files.createTempDirectory("kafka-").toAbsolutePath().toString());
    brokerProps.setProperty("listeners", "PLAINTEXT://" + BROKERHOST + ":" + BROKERPORT);
    KafkaConfig config = new KafkaConfig(brokerProps);
    Time mock = new MockTime();
    kafkaServer = TestUtils.createServer(config, mock);
    kafkaServer.startup();
    // create topic
    AdminUtils.createTopic(zkUtils, TOPIC, 1, 1, new Properties());
    pConsumer = new PhoenixConsumer();
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    conn = DriverManager.getConnection(getUrl(), props);
}
Code example from: OryxProject/oryx

/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param topic topic to create (if not already existing)
 * @param partitions number of topic partitions
 * @param topicProperties optional topic config properties
 */
public static void maybeCreateTopic(String zkServers,
                                    String topic,
                                    int partitions,
                                    Properties topicProperties) {
    ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
    try {
        if (AdminUtils.topicExists(zkUtils, topic)) {
            log.info("No need to create topic {} as it already exists", topic);
        } else {
            log.info("Creating topic {} with {} partition(s)", topic, partitions);
            try {
                AdminUtils.createTopic(
                    zkUtils, topic, partitions, 1, topicProperties, RackAwareMode.Enforced$.MODULE$);
                log.info("Created topic {}", topic);
            } catch (TopicExistsException re) {
                log.info("Topic {} already exists", topic);
            }
        }
    } finally {
        zkUtils.close();
    }
}
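A hypothetical call to the helper above; the enclosing class name KafkaUtils, the ZooKeeper address, and the topic settings are assumptions for illustration:

Properties topicConfig = new Properties();
topicConfig.setProperty("retention.ms", "86400000"); // keep messages for one day
KafkaUtils.maybeCreateTopic("localhost:2181", "example-topic", 4, topicConfig);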
Code example from: com.hotels.road/road-kafka-store

@SuppressWarnings({ "rawtypes", "unchecked" })
private static void verifyTopic(ZkUtils zkUtils, String topic) {
    Set topics = new HashSet();
    topics.add(topic);
    // check # partition and the replication factor
    scala.collection.mutable.Map partitionAssignmentForTopics = zkUtils
        .getPartitionAssignmentForTopics(JavaConversions.asScalaSet(topics).toSeq());
    scala.collection.Map partitionAssignment = (scala.collection.Map) partitionAssignmentForTopics.get(topic).get();
    if (partitionAssignment.size() != 1) {
        throw new RuntimeException(String.format("The schema topic %s should have only 1 partition.", topic));
    }
    // check the retention policy
    Properties prop = AdminUtils.fetchEntityConfig(zkUtils, ConfigType.Topic(), topic);
    String retentionPolicy = prop.getProperty(LogConfig.CleanupPolicyProp());
    if (retentionPolicy == null || "compact".compareTo(retentionPolicy) != 0) {
        throw new RuntimeException(String.format("The retention policy of the schema topic %s must be compact.", topic));
    }
}
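A minimal sketch of creating a topic that would pass the verification above, i.e. a single partition with the compact cleanup policy; it assumes a connected zkUtils, and the topic name is a placeholder:

Properties topicConfig = new Properties();
topicConfig.put(LogConfig.CleanupPolicyProp(), "compact");
AdminUtils.createTopic(zkUtils, "schema-store", 1, 1, topicConfig, RackAwareMode.Enforced$.MODULE$);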
Code example from: apache/incubator-gobblin

public void provisionTopic(String topic) {
    if (_topicConsumerMap.containsKey(topic)) {
        // nothing to do: return
    } else {
        // provision topic
        AdminUtils.createTopic(ZkUtils.apply(_kafkaServerSuite.getZkClient(), false),
            topic, 1, 1, new Properties());
        List<KafkaServer> servers = new ArrayList<>();
        servers.add(_kafkaServerSuite.getKafkaServer());
        kafka.utils.TestUtils.waitUntilMetadataIsPropagated(scala.collection.JavaConversions.asScalaBuffer(servers), topic, 0, 5000);
        KafkaConsumerSuite consumerSuite = new KafkaConsumerSuite(_kafkaServerSuite.getZkConnectString(), topic);
        _topicConsumerMap.put(topic, consumerSuite);
    }
}
Code example from: apache/tajo

public void createTopic(int partitions, int replication, String topic) {
    checkState(started.get(), "not started!");
    ZkClient zkClient = new ZkClient(getZookeeperConnectString(), 30000, 30000, ZKStringSerializer$.MODULE$);
    try {
        AdminUtils.createTopic(ZkUtils.apply(zkClient, false), topic, partitions, replication, new Properties(),
            RackAwareMode.Enforced$.MODULE$);
    } finally {
        // The ZkClient was created by this method, so it is closed directly here
        // rather than through ZkUtils.close().
        zkClient.close();
    }
}
Code example from: reactor/reactor-kafka

public String createNewTopic(String newTopic, int partitions) {
    ZkUtils zkUtils = new ZkUtils(embeddedKafka.zkClient(), null, false);
    Properties props = new Properties();
    AdminUtils.createTopic(zkUtils, newTopic, partitions, 1, props, null);
    waitForTopic(newTopic, partitions, true);
    return newTopic;
}
Code example from: homeaway/stream-registry

private ZkUtils initZkUtils(Properties config) {
    String zkConnect = config.getProperty(KafkaProducerConfig.ZOOKEEPER_QUORUM);
    ZkClient zkClient = new ZkClient(zkConnect);
    zkClient.setZkSerializer(ZKStringSerializer$.MODULE$);
    ZkConnection zkConnection = new ZkConnection(zkConnect);
    ZkUtils zkUtils = new ZkUtils(zkClient, zkConnection, false);
    return zkUtils;
}
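A hypothetical use of the helper above, paired with the usual close in a finally block; the quorum address and topic name are placeholders:

Properties config = new Properties();
config.setProperty(KafkaProducerConfig.ZOOKEEPER_QUORUM, "localhost:2181");
ZkUtils zkUtils = initZkUtils(config);
try {
    boolean exists = AdminUtils.topicExists(zkUtils, "my-stream-topic");
    System.out.println("Topic exists: " + exists);
} finally {
    zkUtils.close();
}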
Code example from: linkedin/kafka-monitor

ZkUtils zkUtils = ZkUtils.apply(zkUrl, ZK_SESSION_TIMEOUT_MS, ZK_CONNECTION_TIMEOUT_MS, JaasUtils.isZkSecurityEnabled());
try {
    if (AdminUtils.topicExists(zkUtils, topic)) {
        return getPartitionNumForTopic(zkUrl, topic);
    }
    int brokerCount = zkUtils.getAllBrokersInCluster().size();
    int partitionCount = Math.max((int) Math.ceil(brokerCount * partitionToBrokerRatio), minPartitionNum);
    try {
        AdminUtils.createTopic(zkUtils, topic, partitionCount, replicationFactor, topicConfig, RackAwareMode.Enforced$.MODULE$);
    } catch (TopicExistsException e) {
        // The topic was created concurrently; treat it as already existing.
        return getPartitionNumForTopic(zkUrl, topic);
    }
    // The first half of this log statement was truncated in the source snippet and is reconstructed here.
    LOG.info("Created topic " + topic + " with min ISR of "
        + topicConfig.get(KafkaConfig.MinInSyncReplicasProp()) + " and replication factor of " + replicationFactor + ".");
    return partitionCount;
} finally {
    zkUtils.close();
}
Code example from: org.apache.kafka/kafka_2.10

public int run(final String[] args, final Properties config) {
    consumerConfig.clear();
    consumerConfig.putAll(config);
    zkUtils = ZkUtils.apply(options.valueOf(zookeeperOption),
        30000,
        30000,
        JaasUtils.isZkSecurityEnabled()); // fourth argument reconstructed; truncated in the source snippet
    allTopics.addAll(scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllTopics()));
    zkUtils.close();
    // ... remainder of the method is not included in the source snippet
}
Code example from: org.apache.apex/malhar-contrib

/**
 * There is always only one string in zkHost.
 * @param zkHost set containing a single ZooKeeper connection string
 * @return the connection strings of all brokers in the cluster
 */
public static Set<String> getBrokers(Set<String> zkHost) {
    ZkClient zkclient = new ZkClient(zkHost.iterator().next(), 30000, 30000, ZKStringSerializer$.MODULE$);
    Set<String> brokerHosts = new HashSet<String>();
    for (Broker b : JavaConversions.asJavaIterable(ZkUtils.getAllBrokersInCluster(zkclient))) {
        brokerHosts.add(b.connectionString());
    }
    zkclient.close();
    return brokerHosts;
}
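The static ZkUtils.getAllBrokersInCluster(ZkClient) call above comes from the older Kafka 0.8 client API; in the versions used by most other examples in this article, the lookup is an instance method. A sketch of the newer form, with placeholder connection settings:

ZkUtils zkUtils = ZkUtils.apply(zkHost.iterator().next(), 30000, 30000, false);
try {
    for (Broker b : JavaConversions.asJavaIterable(zkUtils.getAllBrokersInCluster())) {
        System.out.println(b); // Broker#toString includes the broker id and endpoints
    }
} finally {
    zkUtils.close();
}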
Code example from: linkedin/cruise-control

public BrokerFailureDetector(KafkaCruiseControlConfig config,
                             LoadMonitor loadMonitor,
                             Queue<Anomaly> anomalies,
                             Time time,
                             KafkaCruiseControl kafkaCruiseControl) {
    String zkUrl = config.getString(KafkaCruiseControlConfig.ZOOKEEPER_CONNECT_CONFIG);
    ZkConnection zkConnection = new ZkConnection(zkUrl, 30000);
    _zkClient = new ZkClient(zkConnection, 30000, new ZkStringSerializer());
    // Do not support secure ZK at this point.
    _zkUtils = new ZkUtils(_zkClient, zkConnection, false);
    _failedBrokers = new HashMap<>();
    _failedBrokersZkPath = config.getString(KafkaCruiseControlConfig.FAILED_BROKERS_ZK_PATH_CONFIG);
    _loadMonitor = loadMonitor;
    _anomalies = anomalies;
    _time = time;
    _kafkaCruiseControl = kafkaCruiseControl;
    _allowCapacityEstimation = config.getBoolean(KafkaCruiseControlConfig.ANOMALY_DETECTION_ALLOW_CAPACITY_ESTIMATION_CONFIG);
}