This article collects Java code examples of the com.yahoo.pulsar.zookeeper.ZooKeeperCache.getZooKeeper() method and shows how ZooKeeperCache.getZooKeeper() is used in practice. The examples are taken from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, so they serve as useful references. Details of the ZooKeeperCache.getZooKeeper() method:
Package: com.yahoo.pulsar.zookeeper.ZooKeeperCache
Class: ZooKeeperCache
Method: getZooKeeper
Description: none provided
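Before the collected snippets, here is a minimal sketch of the typical calling pattern: fetch the raw ZooKeeper session handle from the cache and issue a direct operation against a znode. It assumes the usual org.apache.zookeeper imports (ZooKeeper, CreateMode, KeeperException, ZooDefs.Ids) and an already-initialized ZooKeeperCache field named cache; the znode path /example/config and the helper name are hypothetical and used only for illustration, not taken from any of the quoted projects.
// Minimal usage sketch (assumptions noted above); not taken from any quoted project.
private void writeExampleZnode(byte[] payload) throws KeeperException, InterruptedException {
    ZooKeeper zk = cache.getZooKeeper();              // raw session handle managed by the cache
    if (zk.exists("/example/config", false) == null) {
        zk.create("/example/config", payload, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } else {
        zk.setData("/example/config", payload, -1);   // -1 skips the expected-version check
    }
}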
Code example source: com.yahoo.pulsar/pulsar-broker-common
public ZooKeeper getZooKeeper() {
    return this.cache.getZooKeeper();
}
Code example source: com.yahoo.pulsar/pulsar-broker
private void saveQuotaToZnode(String zpath, ResourceQuota quota) throws Exception {
    ZooKeeper zk = this.localZkCache.getZooKeeper();
    if (zk.exists(zpath, false) == null) {
        try {
            ZkUtils.createFullPathOptimistic(zk, zpath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException e) {
            // another writer already created the node; ignore
        }
    }
    zk.setData(zpath, this.jsonMapper.writeValueAsBytes(quota), -1);
}
Code example source: com.yahoo.pulsar/pulsar-broker-common
private void initZK() throws PulsarServerException {
    String[] paths = new String[] { CLUSTERS_ROOT, POLICIES_ROOT };
    // initialize the zk client with values
    try {
        ZooKeeper zk = cache.getZooKeeper();
        for (String path : paths) {
            try {
                if (zk.exists(path, false) == null) {
                    ZkUtils.createFullPathOptimistic(zk, path, new byte[0], Ids.OPEN_ACL_UNSAFE,
                            CreateMode.PERSISTENT);
                }
            } catch (KeeperException.NodeExistsException e) {
                // Ok
            }
        }
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        throw new PulsarServerException(e);
    }
}
Code example source: com.yahoo.pulsar/pulsar-broker
protected ZooKeeper globalZk() {
    return pulsar().getGlobalZkCache().getZooKeeper();
}
Code example source: com.yahoo.pulsar/pulsar-broker
private void initZK() throws PulsarServerException {
    String[] paths = new String[] { MANAGED_LEDGER_ROOT, OWNER_INFO_ROOT, LOCAL_POLICIES_ROOT };
    // initialize the zk client with values
    try {
        ZooKeeper zk = cache.getZooKeeper();
        for (String path : paths) {
            if (cache.exists(path)) {
                continue;
            }
            try {
                ZkUtils.createFullPathOptimistic(zk, path, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } catch (KeeperException.NodeExistsException e) {
                // Ok
            }
        }
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        throw new PulsarServerException(e);
    }
}
Code example source: com.yahoo.pulsar/pulsar-broker
/**
 * Disable bundle in local cache and on zk
 *
 * @param bundle
 * @throws Exception
 */
public void disableOwnership(NamespaceBundle bundle) throws Exception {
    String path = ServiceUnitZkUtils.path(bundle);
    updateBundleState(bundle, false);
    localZkCache.getZooKeeper().setData(path, jsonMapper.writeValueAsBytes(selfOwnerInfoDisabled), -1);
    ownershipReadOnlyCache.invalidate(path);
}
Code example source: com.yahoo.pulsar/pulsar-broker
ZkUtils.asyncCreateFullPathOptimistic(localZkCache.getZooKeeper(), namespaceBundleZNode, znodeContent,
        Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL, (rc, path, ctx, name) -> {
            if (rc == KeeperException.Code.OK.intValue()) {
Code example source: com.yahoo.pulsar/pulsar-broker
ZkUtils.asyncCreateFullPathOptimistic(cache.getZooKeeper(), path, content, Ids.OPEN_ACL_UNSAFE,
        CreateMode.PERSISTENT, (rc, path1, ctx, name) -> {
            if (rc == KeeperException.Code.OK.intValue()
Code example source: com.yahoo.pulsar/pulsar-broker
/**
 * Method to remove the ownership of local broker on the <code>NamespaceBundle</code>, if owned
 *
 */
public CompletableFuture<Void> removeOwnership(NamespaceBundle bundle) {
    CompletableFuture<Void> result = new CompletableFuture<>();
    String key = ServiceUnitZkUtils.path(bundle);
    localZkCache.getZooKeeper().delete(key, -1, (rc, path, ctx) -> {
        if (rc == KeeperException.Code.OK.intValue() || rc == KeeperException.Code.NONODE.intValue()) {
            LOG.info("[{}] Removed zk lock for service unit: {}", key, KeeperException.Code.get(rc));
            ownedBundlesCache.synchronous().invalidate(key);
            ownershipReadOnlyCache.invalidate(key);
            result.complete(null);
        } else {
            LOG.warn("[{}] Failed to delete the namespace ephemeral node. key={}", key,
                    KeeperException.Code.get(rc));
            result.completeExceptionally(KeeperException.create(rc));
        }
    }, null);
    return result;
}
Code example source: com.yahoo.pulsar/pulsar-broker
/**
 * Checks whether the broker is allowed to do read-write operations based on the existence of a node in global
 * zookeeper.
 *
 * @throws WebApplicationException
 *             if policies are read-only, or if the broker is not connected to the global zookeeper
 */
public void validatePoliciesReadOnlyAccess() {
    boolean arePoliciesReadOnly = true;
    try {
        arePoliciesReadOnly = globalZkCache().exists(POLICIES_READONLY_FLAG_PATH);
    } catch (Exception e) {
        log.warn("Unable to fetch contents of [{}] from global zookeeper", POLICIES_READONLY_FLAG_PATH, e);
        throw new RestException(e);
    }
    if (arePoliciesReadOnly) {
        log.debug("Policies are read-only. Broker cannot do read-write operations");
        throw new RestException(Status.FORBIDDEN, "Broker is forbidden to do read-write operations");
    } else {
        // Make sure the broker is connected to the global zookeeper before writing. If not, throw an exception.
        if (globalZkCache().getZooKeeper().getState() != States.CONNECTED) {
            log.debug("Broker is not connected to the global zookeeper");
            throw new RestException(Status.PRECONDITION_FAILED,
                    "Broker needs to be connected to global zookeeper before making a read-write operation");
        } else {
            // Do nothing, just log the message.
            log.debug("Broker is allowed to make read-write operations");
        }
    }
}
Code example source: com.yahoo.pulsar/pulsar-broker
/**
 * Update new bundle-range to LocalZk (create a new node if not present)
 *
 * @param nsname
 * @param nsBundles
 * @param callback
 * @throws Exception
 */
private void updateNamespaceBundles(NamespaceName nsname, NamespaceBundles nsBundles, StatCallback callback)
        throws Exception {
    checkNotNull(nsname);
    checkNotNull(nsBundles);
    String path = joinPath(LOCAL_POLICIES_ROOT, nsname.toString());
    Optional<LocalPolicies> policies = pulsar.getLocalZkCacheService().policiesCache().get(path);
    if (!policies.isPresent()) {
        // if policies are not yet present in localZk, create them first
        this.pulsar.getLocalZkCacheService().createPolicies(path, false).get(cacheTimeOutInSec, SECONDS);
        policies = this.pulsar.getLocalZkCacheService().policiesCache().get(path);
    }
    policies.get().bundles = getBundlesData(nsBundles);
    this.pulsar.getLocalZkCache().getZooKeeper().setData(path,
            ObjectMapperFactory.getThreadLocal().writeValueAsBytes(policies.get()), -1, callback, null);
}
Code example source: com.yahoo.pulsar/pulsar-broker
ZkUtils.createFullPathOptimistic(pulsar.getLocalZkCache().getZooKeeper(), ELECTION_ROOT,
        jsonMapper.writeValueAsBytes(leaderBroker), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
Code example source: com.yahoo.pulsar/pulsar-discovery-service
localZkConnectionSvc.start(exitCode -> {
    try {
        localZkCache.getZooKeeper().close();
    } catch (InterruptedException e) {
        log.warn("Failed to shutdown ZooKeeper gracefully {}", e.getMessage(), e);
Code example source: com.yahoo.pulsar/pulsar-broker
/**
 * Default constructor.
 *
 * @throws PulsarServerException
 */
public NamespaceService(PulsarService pulsar) {
    this.pulsar = pulsar;
    host = pulsar.getAdvertisedAddress();
    this.config = pulsar.getConfiguration();
    this.loadManager = pulsar.getLoadManager();
    ServiceUnitZkUtils.initZK(pulsar.getLocalZkCache().getZooKeeper(), pulsar.getBrokerServiceUrl());
    this.bundleFactory = new NamespaceBundleFactory(pulsar, Hashing.crc32());
    this.ownershipCache = new OwnershipCache(pulsar, bundleFactory);
}
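The recurring pattern in the examples above is create-if-missing followed by a write: check whether the znode exists, create the full path optimistically while tolerating a concurrent NodeExistsException, then set the data. Distilled into a small helper under the same assumptions as the earlier sketch (an available ZooKeeperCache field named cache; the helper name ensureAndWrite is hypothetical), it looks roughly like this:
// Sketch of the create-if-missing-then-write pattern used by several examples above.
private void ensureAndWrite(String path, byte[] data) throws Exception {
    ZooKeeper zk = cache.getZooKeeper();
    if (zk.exists(path, false) == null) {
        try {
            ZkUtils.createFullPathOptimistic(zk, path, new byte[0], Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException e) {
            // another client created the node concurrently; safe to ignore
        }
    }
    zk.setData(path, data, -1);   // -1 ignores the expected version
}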