This article collects code examples of the Java method org.apache.accumulo.server.zookeeper.ZooReaderWriter.getInstance() and shows how it is used in practice. The examples were extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the ZooReaderWriter.getInstance() method:
Package: org.apache.accumulo.server.zookeeper.ZooReaderWriter
Class: ZooReaderWriter
Method: getInstance
Description: none available
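Before the collected examples, here is a minimal, self-contained sketch of the lazily-initialized singleton pattern that an accessor like getInstance() typically follows. The class name ZooReaderWriterSketch and its body are illustrative assumptions, not the real Accumulo implementation (which additionally manages a ZooKeeper session and connection settings).

```java
// Hypothetical sketch of a lazily-initialized singleton accessor.
// Names and body are illustrative; not the actual Accumulo code.
public class ZooReaderWriterSketch {
  private static ZooReaderWriterSketch instance = null;

  private ZooReaderWriterSketch() {
    // The real ZooReaderWriter would establish a ZooKeeper session here.
  }

  // Synchronized so concurrent callers all observe the same shared instance.
  public static synchronized ZooReaderWriterSketch getInstance() {
    if (instance == null) {
      instance = new ZooReaderWriterSketch();
    }
    return instance;
  }

  public static void main(String[] args) {
    // Every call returns the same shared object.
    System.out.println(getInstance() == getInstance()); // prints "true"
  }
}
```

Every snippet below starts by fetching this shared instance and then issues ZooKeeper reads or writes through it.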
Code example source: origin: org.apache.accumulo/accumulo-server-base
public DeadServerList(String path) {
  this.path = path;
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  try {
    zoo.mkdirs(path);
  } catch (Exception ex) {
    log.error("Unable to make parent directories of " + path, ex);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server
public void post(String server, String cause) {
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  try {
    zoo.putPersistentData(path + "/" + server, cause.getBytes(UTF_8), NodeExistsPolicy.SKIP);
  } catch (Exception ex) {
    log.error(ex, ex);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
private void deleteServerNode(String serverNode) throws InterruptedException, KeeperException {
  try {
    ZooReaderWriter.getInstance().delete(serverNode, -1);
  } catch (NotEmptyException ex) {
    // race condition: tserver created the lock after our last check; we'll see it at the next check
  } catch (NoNodeException nne) {
    // someone else deleted it
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
public void delete(String server) {
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  try {
    zoo.recursiveDelete(path + "/" + server, NodeMissingPolicy.SKIP);
  } catch (Exception ex) {
    log.error("delete failed with exception", ex);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server
public DeadServerList(String path) {
  this.path = path;
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  try {
    zoo.mkdirs(path);
  } catch (Exception ex) {
    log.error("Unable to make parent directories of " + path, ex);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-master
@Override
public void process(WatchedEvent event) {
  nextEvent.event("Noticed recovery changes", event.getType());
  try {
    // watcher only fires once, add it back
    ZooReaderWriter.getInstance().getChildren(zroot + Constants.ZRECOVERY, this);
  } catch (Exception e) {
    log.error("Failed to add log recovery watcher back", e);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-tracer
private void registerInZooKeeper(String name, String root) throws Exception {
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  zoo.putPersistentData(root, new byte[0], NodeExistsPolicy.SKIP);
  log.info("Registering tracer " + name + " at " + root);
  String path = zoo.putEphemeralSequential(root + "/trace-", name.getBytes(UTF_8));
  zoo.exists(path, this);
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
public static void prepareNewNamespaceState(String instanceId, String namespaceId,
    String namespace, NodeExistsPolicy existsPolicy)
    throws KeeperException, InterruptedException {
  log.debug(
      "Creating ZooKeeper entries for new namespace " + namespace + " (ID: " + namespaceId + ")");
  String zPath = Constants.ZROOT + "/" + instanceId + Constants.ZNAMESPACES + "/" + namespaceId;
  IZooReaderWriter zoo = ZooReaderWriter.getInstance();
  zoo.putPersistentData(zPath, new byte[0], existsPolicy);
  zoo.putPersistentData(zPath + Constants.ZNAMESPACE_NAME, namespace.getBytes(UTF_8),
      existsPolicy);
  zoo.putPersistentData(zPath + Constants.ZNAMESPACE_CONF, new byte[0], existsPolicy);
}
Code example source: origin: org.apache.accumulo/accumulo-server
@Override
public void run() {
  try {
    // Initially set the logger if the Monitor's log4j advertisement node exists
    if (ZooReaderWriter.getInstance().exists(path, this))
      updateMonitorLog4jLocation();
    log.info("Set watch for Monitor Log4j watcher");
  } catch (Exception e) {
    log.error("Unable to set watch for Monitor Log4j watcher on " + path);
  }
  super.run();
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
private void createUserNodeInZk(String principal) throws KeeperException, InterruptedException {
  synchronized (zooCache) {
    zooCache.clear();
    IZooReaderWriter zoo = ZooReaderWriter.getInstance();
    zoo.putPrivatePersistentData(zkUserPath + "/" + principal, new byte[0],
        NodeExistsPolicy.FAIL);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
/**
 * Sets up the user in ZK for the provided user. No checking for existence is done here, it should
 * be done before calling.
 */
private void constructUser(String user, byte[] pass)
    throws KeeperException, InterruptedException {
  synchronized (zooCache) {
    zooCache.clear();
    IZooReaderWriter zoo = ZooReaderWriter.getInstance();
    zoo.putPrivatePersistentData(ZKUserPath + "/" + user, pass, NodeExistsPolicy.FAIL);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server
public void clearMergeState(Text tableId)
    throws IOException, KeeperException, InterruptedException {
  synchronized (mergeLock) {
    String path = ZooUtil.getRoot(instance.getInstanceID()) + Constants.ZTABLES + "/"
        + tableId.toString() + "/merge";
    ZooReaderWriter.getInstance().recursiveDelete(path, NodeMissingPolicy.SKIP);
    mergeLock.notifyAll();
  }
  nextEvent.event("Merge state of %s cleared", tableId);
}
Code example source: origin: org.apache.accumulo/accumulo-server
@Override
public void put(String path, byte[] bs) throws DistributedStoreException {
  try {
    path = relative(path);
    ZooReaderWriter.getInstance().putPersistentData(path, bs, NodeExistsPolicy.OVERWRITE);
    cache.clear();
    log.debug("Wrote " + new String(bs, UTF_8) + " to " + path);
  } catch (Exception ex) {
    throw new DistributedStoreException(ex);
  }
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
public static void stop(String type, long tid) throws KeeperException, InterruptedException {
  Instance instance = HdfsZooInstance.getInstance();
  IZooReaderWriter writer = ZooReaderWriter.getInstance();
  writer.recursiveDelete(ZooUtil.getRoot(instance) + "/" + type + "/" + tid,
      NodeMissingPolicy.SKIP);
}
Code example source: origin: org.apache.accumulo/accumulo-server
public static void removeSystemProperty(String property)
    throws InterruptedException, KeeperException {
  String zPath = ZooUtil.getRoot(HdfsZooInstance.getInstance()) + Constants.ZCONFIG + "/" + property;
  ZooReaderWriter.getInstance().recursiveDelete(zPath, NodeMissingPolicy.FAIL);
}
Code example source: origin: org.apache.accumulo/accumulo-master
MasterGoalState getMasterGoalState() {
  while (true)
    try {
      byte[] data = ZooReaderWriter.getInstance()
          .getData(ZooUtil.getRoot(getInstance()) + Constants.ZMASTER_GOAL_STATE, null);
      return MasterGoalState.valueOf(new String(data));
    } catch (Exception e) {
      log.error("Problem getting real goal state from zookeeper: " + e);
      sleepUninterruptibly(1, TimeUnit.SECONDS);
    }
}
Code example source: origin: org.apache.accumulo/accumulo-server-base
@Override
public void process(WatchedEvent event) {
  // We got an update, process the data in the node
  updateMonitorLog4jLocation();
  if (event.getPath() != null) {
    try {
      ZooReaderWriter.getInstance().exists(event.getPath(), this);
    } catch (Exception ex) {
      log.error("Unable to reset watch for Monitor Log4j watcher", ex);
    }
  }
}
Code example source: origin: org.apache.accumulo/accumulo-master
public void clearMergeState(String tableId)
    throws IOException, KeeperException, InterruptedException {
  synchronized (mergeLock) {
    String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/"
        + tableId + "/merge";
    ZooReaderWriter.getInstance().recursiveDelete(path, NodeMissingPolicy.SKIP);
    mergeLock.notifyAll();
  }
  nextEvent.event("Merge state of %s cleared", tableId);
}
Code example source: origin: org.apache.accumulo/accumulo-server
public static void cleanup(String type, long tid) throws KeeperException, InterruptedException {
  Instance instance = HdfsZooInstance.getInstance();
  IZooReaderWriter writer = ZooReaderWriter.getInstance();
  writer.recursiveDelete(ZooUtil.getRoot(instance) + "/" + type + "/" + tid, NodeMissingPolicy.SKIP);
  writer.recursiveDelete(ZooUtil.getRoot(instance) + "/" + type + "/" + tid + "-running", NodeMissingPolicy.SKIP);
}
Code example source: origin: org.apache.accumulo/accumulo-server
public static void start(String type, long tid) throws KeeperException, InterruptedException {
  Instance instance = HdfsZooInstance.getInstance();
  IZooReaderWriter writer = ZooReaderWriter.getInstance();
  writer.putPersistentData(ZooUtil.getRoot(instance) + "/" + type, new byte[] {}, NodeExistsPolicy.OVERWRITE);
  writer.putPersistentData(ZooUtil.getRoot(instance) + "/" + type + "/" + tid, new byte[] {}, NodeExistsPolicy.OVERWRITE);
  writer.putPersistentData(ZooUtil.getRoot(instance) + "/" + type + "/" + tid + "-running", new byte[] {}, NodeExistsPolicy.OVERWRITE);
}