This article collects a number of Java code examples for the org.apache.accumulo.fate.zookeeper.ZooUtil class and shows how ZooUtil is used in practice. The examples are drawn mainly from GitHub, Stack Overflow, Maven, and similar platforms, extracted from selected projects, so they should serve as useful references. Details of the ZooUtil class are as follows:
Package: org.apache.accumulo.fate.zookeeper
Class name: ZooUtil
Class description: none available
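Before the individual snippets, here is a minimal sketch of how the static helpers shown below are typically combined. It assumes a ZooUtil.ZooKeeperConnectionInfo has already been constructed for the target ensemble (its constructor is not part of these snippets) and that NodeExistsPolicy and NodeMissingPolicy are the ZooUtil enums used in the examples; treat it as an illustration rather than the definitive API.

import org.apache.accumulo.fate.zookeeper.ZooUtil;
import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
import org.apache.accumulo.fate.zookeeper.ZooUtil.ZooKeeperConnectionInfo;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.data.Stat;

import static java.nio.charset.StandardCharsets.UTF_8;

public class ZooUtilSketch {

  // info is assumed to be built elsewhere for the target ZooKeeper ensemble.
  static void roundTrip(ZooKeeperConnectionInfo info)
      throws KeeperException, InterruptedException {
    String zPath = "/example/config"; // hypothetical path for illustration

    // Create (or overwrite) a persistent node with the default ACL.
    ZooUtil.putPersistentData(info, zPath, "v1".getBytes(UTF_8), NodeExistsPolicy.OVERWRITE);

    // Read it back together with its Stat metadata.
    if (ZooUtil.exists(info, zPath)) {
      Stat stat = new Stat();
      byte[] data = ZooUtil.getData(info, zPath, stat);
      System.out.println(new String(data, UTF_8) + " @version " + stat.getVersion());
    }

    // Remove the node (and any children), ignoring it if it is already gone.
    ZooUtil.recursiveDelete(info, zPath, NodeMissingPolicy.SKIP);
  }
}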
Code example source: apache/accumulo

@Override
public List<ACL> getACL(String zPath, Stat stat) throws KeeperException, InterruptedException {
  return ZooUtil.getACL(info, zPath, stat);
}
Code example source: apache/accumulo

/**
 * Iterate over the queued work to remove entries that have been completed.
 */
@Override
protected void cleanupFinishedWork() {
  final Iterator<String> work = queuedWork.iterator();
  final String instanceId = client.getInstanceID();
  while (work.hasNext()) {
    String filename = work.next();
    // A null value means this work item has been finished and its znode removed
    if (zooCache.get(ZooUtil.getRoot(instanceId) + ReplicationConstants.ZOO_WORK_QUEUE + "/"
        + filename) == null) {
      work.remove();
    }
  }
}
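The override above relies on a common ZooKeeper idiom: a work item's completion is signalled by deleting its znode, so a null read means "done". A minimal sketch of the same check with the plain ZooKeeper client (the connection and work-queue path are placeholders):

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class WorkQueueCheck {
  /**
   * Returns true when the znode for the given work item no longer exists,
   * i.e. the worker that owned it has finished and removed it.
   */
  static boolean isFinished(ZooKeeper zk, String workQueuePath, String filename)
      throws KeeperException, InterruptedException {
    // exists() returns null when the node is absent; no data read is needed.
    return zk.exists(workQueuePath + "/" + filename, false) == null;
  }
}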
Code example source: apache/accumulo

public static boolean exists(ZooKeeperConnectionInfo info, String zPath)
    throws KeeperException, InterruptedException {
  return getStatus(info, zPath) != null;
}
Code example source: apache/accumulo

public static byte[] getData(ZooKeeperConnectionInfo info, String zPath, Stat stat)
    throws KeeperException, InterruptedException {
  final Retry retry = RETRY_FACTORY.createRetry();
  while (true) {
    try {
      return getZooKeeper(info).getData(zPath, false, stat);
    } catch (KeeperException e) {
      final Code c = e.code();
      if (c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT || c == Code.SESSIONEXPIRED) {
        retryOrThrow(retry, e);
      } else {
        throw e;
      }
    }
    retry.waitForNextAttempt();
  }
}
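The method above shows ZooUtil's standard retry loop: transient errors (connection loss, operation timeout, session expiry) are retried via the Retry helper, and everything else is rethrown. A rough, self-contained version of the same pattern against the plain ZooKeeper client, with a simple bounded back-off standing in for Accumulo's RETRY_FACTORY (the attempt count and sleep are arbitrary choices for illustration):

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RetriedRead {
  static byte[] getDataWithRetry(ZooKeeper zk, String zPath, Stat stat)
      throws KeeperException, InterruptedException {
    final int maxAttempts = 5; // arbitrary bound for this sketch
    long sleepMs = 250;        // arbitrary initial back-off
    for (int attempt = 1; ; attempt++) {
      try {
        return zk.getData(zPath, false, stat);
      } catch (KeeperException e) {
        Code c = e.code();
        boolean transientError = c == Code.CONNECTIONLOSS
            || c == Code.OPERATIONTIMEOUT || c == Code.SESSIONEXPIRED;
        // Give up on non-transient errors or once the retry budget is spent.
        if (!transientError || attempt >= maxAttempts) {
          throw e;
        }
      }
      Thread.sleep(sleepMs);
      sleepMs *= 2; // exponential back-off between attempts
    }
  }
}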
Code example source: apache/accumulo

@Override
public List<String> getMasterLocations() {
  String masterLocPath = ZooUtil.getRoot(getInstanceID()) + Constants.ZMASTER_LOCK;
  OpTimer timer = null;
  if (log.isTraceEnabled()) {
    log.trace("tid={} Looking up master location in zookeeper.", Thread.currentThread().getId());
    timer = new OpTimer().start();
  }
  byte[] loc = ZooUtil.getLockData(zooCache, masterLocPath);
  if (timer != null) {
    timer.stop();
    log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(),
        (loc == null ? "null" : new String(loc, UTF_8)),
        String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
  }
  if (loc == null) {
    return Collections.emptyList();
  }
  return Collections.singletonList(new String(loc, UTF_8));
}
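ZooUtil.getLockData reads the data published by the current holder of an Accumulo ZooLock. The sketch below shows the usual ZooKeeper lock recipe such a helper has to follow: treat the lowest-sequenced child of the lock node as the holder and read its data. Whether getLockData matches this layout exactly is an assumption; the snippet is illustrative only.

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class LockDataSketch {
  /** Returns the data of the lowest-sequenced child under lockPath, or null if unlocked. */
  static byte[] readLockHolderData(ZooKeeper zk, String lockPath)
      throws KeeperException, InterruptedException {
    List<String> candidates = zk.getChildren(lockPath, false);
    if (candidates.isEmpty()) {
      return null; // nobody currently holds the lock
    }
    // Sequential znode names share a prefix, so a lexicographic sort orders them by sequence.
    Collections.sort(candidates);
    return zk.getData(lockPath + "/" + candidates.get(0), false, null);
  }
}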
Code example source: apache/accumulo

public static void recursiveCopyPersistent(ZooKeeperConnectionInfo info, String source,
    String destination, NodeExistsPolicy policy) throws KeeperException, InterruptedException {
  if (!exists(info, source))
    throw KeeperException.create(Code.NONODE, source);
  if (exists(info, destination)) {
    switch (policy) {
      case OVERWRITE:
        break;
      case SKIP:
        return;
      default:
        throw KeeperException.create(Code.NODEEXISTS, destination);
    }
  }
  // Copy this node's data, then recurse into its children.
  Stat stat = new Stat();
  byte[] data = getData(info, source, stat);
  putPersistentData(info, destination, data, policy);
  if (stat.getNumChildren() > 0) {
    List<String> children;
    final Retry retry = RETRY_FACTORY.createRetry();
    while (true) {
      try {
        children = getZooKeeper(info).getChildren(source, false);
        break;
      } catch (KeeperException e) {
        final Code c = e.code();
        if (c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT
            || c == Code.SESSIONEXPIRED) {
          retryOrThrow(retry, e);
        } else {
          throw e;
        }
      }
      retry.waitForNextAttempt();
    }
    for (String child : children)
      recursiveCopyPersistent(info, source + "/" + child, destination + "/" + child, policy);
  }
}
Code example source: apache/accumulo

public static void recursiveDelete(ZooKeeperConnectionInfo info, String zPath,
    NodeMissingPolicy policy) throws KeeperException, InterruptedException {
  final Retry retry = RETRY_FACTORY.createRetry();
  List<String> children;
  while (true) { // list children, retrying transient ZooKeeper errors
    try {
      children = getZooKeeper(info).getChildren(zPath, false);
      break;
    } catch (KeeperException e) {
      final Code c = e.code();
      if (c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT || c == Code.SESSIONEXPIRED) {
        retryOrThrow(retry, e);
      } else {
        throw e;
      }
    }
    retry.waitForNextAttempt();
  }
  for (String child : children) // delete all children first
    recursiveDelete(info, zPath + "/" + child, NodeMissingPolicy.SKIP);
  while (true) { // then delete zPath itself
    try {
      if (getZooKeeper(info).exists(zPath, null) != null)
        getZooKeeper(info).delete(zPath, -1);
      return;
    } catch (NoNodeException e) {
      if (policy == NodeMissingPolicy.SKIP)
        return; // node is already gone; SKIP treats this as success
      throw e;
    } catch (KeeperException e) {
      final Code c = e.code();
      if (c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT || c == Code.SESSIONEXPIRED) {
        retryOrThrow(retry, e);
      } else {
        throw e;
      }
    }
    retry.waitForNextAttempt();
  }
}
Code example source: apache/accumulo

public static boolean putPersistentData(ZooKeeperConnectionInfo info, String zPath, byte[] data,
    int version, NodeExistsPolicy policy, List<ACL> acls)
    throws KeeperException, InterruptedException {
  return putData(info, zPath, data, CreateMode.PERSISTENT, version, policy, acls);
}
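The version parameter allows the overwrite to be conditioned on the Stat version read earlier, which is the usual ZooKeeper optimistic-concurrency idiom. Whether putData applies the version exactly that way is not shown in these snippets, so the sketch below is an assumed usage pattern; ZooUtil.PUBLIC is the default ACL list seen in the putEphemeralData example further down.

import java.util.List;
import org.apache.accumulo.fate.zookeeper.ZooUtil;
import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
import org.apache.accumulo.fate.zookeeper.ZooUtil.ZooKeeperConnectionInfo;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Stat;

import static java.nio.charset.StandardCharsets.UTF_8;

public class VersionedUpdate {
  // Read-modify-write guarded by the znode version returned with the data.
  static void bumpConfig(ZooKeeperConnectionInfo info, String zPath)
      throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    byte[] current = ZooUtil.getData(info, zPath, stat);
    byte[] updated = (new String(current, UTF_8) + "-updated").getBytes(UTF_8);

    List<ACL> acls = ZooUtil.PUBLIC; // default ACL list used elsewhere in ZooUtil
    // If another writer changed the node since our read, the stale version should
    // make this call fail rather than silently clobber their update (assumed semantics).
    ZooUtil.putPersistentData(info, zPath, updated, stat.getVersion(),
        NodeExistsPolicy.OVERWRITE, acls);
  }
}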
Code example source: apache/accumulo

@Override
public String putEphemeralData(String zPath, byte[] data)
    throws KeeperException, InterruptedException {
  return ZooUtil.putEphemeralData(info, zPath, data);
}
Code example source: apache/accumulo

@Override
public String putEphemeralSequential(String zPath, byte[] data)
    throws KeeperException, InterruptedException {
  return ZooUtil.putEphemeralSequential(info, zPath, data);
}
Code example source: apache/accumulo

@Override
public boolean isLockHeld(ZooUtil.LockID lockID) throws KeeperException, InterruptedException {
  return ZooUtil.isLockHeld(info, lockID);
}
Code example source: apache/accumulo

protected static ZooKeeper getZooKeeper(ZooKeeperConnectionInfo info) {
  return getZooKeeper(info.keepers, info.timeout, info.scheme, info.auth);
}
Code example source: apache/accumulo

public static String putEphemeralData(ZooKeeperConnectionInfo info, String zPath, byte[] data)
    throws KeeperException, InterruptedException {
  final Retry retry = RETRY_FACTORY.createRetry();
  while (true) {
    try {
      return getZooKeeper(info).create(zPath, data, ZooUtil.PUBLIC, CreateMode.EPHEMERAL);
    } catch (KeeperException e) {
      final Code c = e.code();
      if (c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT || c == Code.SESSIONEXPIRED) {
        retryOrThrow(retry, e);
      } else {
        throw e;
      }
    }
    retry.waitForNextAttempt();
  }
}
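An ephemeral node created this way disappears automatically when the creating session ends, which is what makes it useful for liveness markers and locks. A minimal sketch of the same create call with the plain ZooKeeper client (the path and open ACL are placeholders for illustration):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralMarker {
  static String publishPresence(ZooKeeper zk, String zPath, byte[] data)
      throws KeeperException, InterruptedException {
    // The returned path equals zPath here; with EPHEMERAL_SEQUENTIAL it would carry
    // the server-assigned sequence suffix. The node is removed when the session closes.
    return zk.create(zPath, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }
}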
Code example source: apache/accumulo

/**
 * Create a persistent node with the default ACL
 *
 * @return true if the node was created or altered; false if it was skipped
 */
public static boolean putPersistentData(ZooKeeperConnectionInfo info, String zPath, byte[] data,
    NodeExistsPolicy policy) throws KeeperException, InterruptedException {
  return putData(info, zPath, data, CreateMode.PERSISTENT, -1, policy, PUBLIC);
}
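Per the javadoc above, the boolean return value reports whether anything was written, so a skip-style policy can double as a create-if-absent check. The sketch below assumes NodeExistsPolicy has a SKIP constant (only OVERWRITE is visible in these snippets), so treat the exact enum value as an assumption.

import org.apache.accumulo.fate.zookeeper.ZooUtil;
import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
import org.apache.accumulo.fate.zookeeper.ZooUtil.ZooKeeperConnectionInfo;
import org.apache.zookeeper.KeeperException;

import static java.nio.charset.StandardCharsets.UTF_8;

public class CreateIfAbsent {
  static void initDefaults(ZooKeeperConnectionInfo info, String zPath)
      throws KeeperException, InterruptedException {
    // NodeExistsPolicy.SKIP is assumed here: leave an existing node untouched.
    boolean written = ZooUtil.putPersistentData(info, zPath,
        "default-value".getBytes(UTF_8), NodeExistsPolicy.SKIP);
    if (!written) {
      System.out.println(zPath + " already existed; kept the current value");
    }
  }
}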