This article collects Java code examples for the org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager class and shows how the class is used in practice. The examples are extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they carry reasonable reference value. Details of the ZooKeeperHiveLockManager class are as follows:
Package path: org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
Class name: ZooKeeperHiveLockManager
Class description: none available
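Before going through the individual snippets, the following minimal sketch (assembled from the test example further below, not taken from the original article) shows a typical lock/unlock cycle with ZooKeeperHiveLockManager. It assumes the caller supplies a ready-made HiveLockObject and a HiveConf that already points at a reachable ZooKeeper quorum.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.lockmgr.HiveLockManagerCtx;
import org.apache.hadoop.hive.ql.lockmgr.HiveLockMode;
import org.apache.hadoop.hive.ql.lockmgr.HiveLockObject;
import org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLock;
import org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager;

public class ZkHiveLockSketch {
  // Acquire a shared lock on 'key', run some work, then release the lock.
  // 'key' is assumed to be built by the caller (e.g. for a table or partition).
  static void withSharedLock(HiveConf conf, HiveLockObject key) throws Exception {
    ZooKeeperHiveLockManager mgr = new ZooKeeperHiveLockManager();
    mgr.setContext(new HiveLockManagerCtx(conf));
    ZooKeeperHiveLock lock = mgr.lock(key, HiveLockMode.SHARED, false);
    if (lock == null) {
      // A conflicting lock is present (see the javadoc of lock() below).
      return;
    }
    try {
      // ... work that requires the shared lock ...
    } finally {
      mgr.unlock(lock);
      mgr.close();
    }
  }
}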
Code example origin: apache/hive
/**
 * Acquire the lock. Returns null if a conflicting lock is present.
 *
 * @param key       the object to be locked
 * @param mode      the mode of the lock
 * @param keepAlive whether the lock is to be persisted after the statement
 **/
@Override
public ZooKeeperHiveLock lock(HiveLockObject key, HiveLockMode mode,
    boolean keepAlive) throws LockException {
  return lock(key, mode, keepAlive, false);
}
Code example origin: apache/hive
/**
 * Release all the locks specified. If some of the locks have already been
 * released, ignore them.
 *
 * @param hiveLocks list of hive locks to be released
 **/
@Override
public void releaseLocks(List<HiveLock> hiveLocks) {
  if (hiveLocks != null) {
    int len = hiveLocks.size();
    for (int pos = len - 1; pos >= 0; pos--) {
      HiveLock hiveLock = hiveLocks.get(pos);
      try {
        LOG.debug("About to release lock for {}",
            hiveLock.getHiveLockObject().getName());
        unlock(hiveLock);
      } catch (LockException e) {
        // The lock may have been released. Ignore and continue
        LOG.warn("Error when releasing lock", e);
      }
    }
  }
}
Code example origin: apache/hive
private void checkRedundantNode(String node) {
  try {
    // Nothing to do if it is a lock mode
    if (getLockMode(node) != null) {
      return;
    }
    List<String> children = curatorFramework.getChildren().forPath(node);
    for (String child : children) {
      checkRedundantNode(node + "/" + child);
    }
    children = curatorFramework.getChildren().forPath(node);
    if ((children == null) || (children.isEmpty())) {
      curatorFramework.delete().forPath(node);
    }
  } catch (Exception e) {
    LOG.warn("Error in checkRedundantNode for node " + node, e);
  }
}
Code example origin: apache/hive
public static void releaseAllLocks(HiveConf conf) throws Exception {
  try {
    String parent = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_NAMESPACE);
    List<HiveLock> locks = getLocks(conf, null, parent, false, false);
    Exception lastExceptionGot = null;
    if (locks != null) {
      for (HiveLock lock : locks) {
        try {
          unlockPrimitive(lock, parent, curatorFramework);
        } catch (Exception e) {
          lastExceptionGot = e;
        }
      }
    }
    // if we got an exception while unlocking, rethrow it here
    if (lastExceptionGot != null) {
      throw lastExceptionGot;
    }
  } catch (Exception e) {
    LOG.error("Failed to release all locks: ", e);
    throw new Exception(ErrorMsg.ZOOKEEPER_CLIENT_COULD_NOT_BE_INITIALIZED.getMsg());
  }
}
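The static releaseAllLocks helper above can be called directly when every lock under the configured ZooKeeper namespace has to be cleared, for example from an administrative task. A minimal sketch follows; the host and port values are illustrative assumptions, not from the original article, and a reachable ZooKeeper ensemble is assumed.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager;

public class ClearAllHiveLocks {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    // Assumed ZooKeeper connection settings; adjust for your environment.
    conf.setVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM, "localhost");
    conf.setVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT, "2181");
    // Removes every lock node under the configured HIVE_ZOOKEEPER_NAMESPACE.
    ZooKeeperHiveLockManager.releaseAllLocks(conf);
  }
}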
Code example origin: apache/hive
// Truncated, non-contiguous excerpt from the internal lock-creation path;
// the surrounding lines were omitted by the source site.
lastName = getLastObjectName(parent, key);
names.add(lastName);
} else {
names = getObjectNames(key);
lastName = names.get(names.size() - 1);
res = createChild(name, new byte[0], CreateMode.PERSISTENT);
} catch (Exception e) {
if (!(e instanceof KeeperException) || ((KeeperException) e).code() != KeeperException.Code.NODEEXISTS) {
res = createChild(getLockName(lastName, mode), key.getData().toString()
    .getBytes(), keepAlive ? CreateMode.PERSISTENT_SEQUENTIAL
    : CreateMode.EPHEMERAL_SEQUENTIAL);
int seqNo = getSequenceNumber(res, getLockName(lastName, mode));
if (seqNo == -1) {
curatorFramework.delete().forPath(res);
String exLock = getLockName(lastName, HiveLockMode.EXCLUSIVE);
String shLock = getLockName(lastName, HiveLockMode.SHARED);
childSeq = getSequenceNumber(child, exLock);
childSeq = getSequenceNumber(child, shLock);
Code example origin: apache/hive
@Test
public void testMetrics() throws Exception {
  conf.setVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM, "localhost");
  conf.setVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT, String.valueOf(server.getPort()));
  conf.setBoolVar(HiveConf.ConfVars.HIVE_SERVER2_METRICS_ENABLED, true);
  conf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
  conf.setVar(HiveConf.ConfVars.HIVE_METRICS_REPORTER,
      MetricsReporting.JSON_FILE.name() + "," + MetricsReporting.JMX.name());
  MetricsFactory.init(conf);
  CodahaleMetrics metrics = (CodahaleMetrics) MetricsFactory.getInstance();
  HiveLockManagerCtx ctx = new HiveLockManagerCtx(conf);
  ZooKeeperHiveLockManager zMgr = new ZooKeeperHiveLockManager();
  zMgr.setContext(ctx);
  ZooKeeperHiveLock curLock = zMgr.lock(hiveLock, HiveLockMode.SHARED, false);
  String json = metrics.dumpJson();
  MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER,
      MetricsConstant.ZOOKEEPER_HIVE_SHAREDLOCKS, 1);
  zMgr.unlock(curLock);
  json = metrics.dumpJson();
  MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER,
      MetricsConstant.ZOOKEEPER_HIVE_SHAREDLOCKS, 0);
  zMgr.close();
}
Code example origin: apache/hive
// Truncated excerpt from the lock-acquisition loop; the surrounding lines were
// omitted by the source site.
lock = lock(lockObject.getObj(), lockObject.getMode(), keepAlive, true);
} catch (LockException e) {
console.printError("Error in acquireLocks...");
releaseLocks(hiveLocks);
if (isInterrupted) {
throw new LockException(ErrorMsg.LOCK_ACQUIRE_CANCELLED.getMsg());
Code example origin: apache/hive
// Truncated excerpt from the lock-enumeration loop; the surrounding lines were
// omitted by the source site.
HiveLockMode mode = getLockMode(curChild);
if (mode == null) {
  continue;
}
HiveLockObject obj = getLockObject(conf, curChild, mode, data, parent, verifyTablePartition);
if (obj == null) {
  continue;
}
Code example origin: apache/hive
@Override
public List<HiveLock> getLocks(HiveLockObject key, boolean verifyTablePartitions,
    boolean fetchData) throws LockException {
  return getLocks(ctx.getConf(), key, parent, verifyTablePartitions, fetchData);
}
Code example origin: org.apache.hadoop.hive/hive-exec
public static void releaseAllLocks(HiveConf conf) throws Exception {
  try {
    int sessionTimeout = conf.getIntVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_SESSION_TIMEOUT);
    String quorumServers = getQuorumServers(conf);
    ZooKeeper zkpClient = new ZooKeeper(quorumServers, sessionTimeout, new DummyWatcher());
    String parent = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_NAMESPACE);
    List<HiveLock> locks = getLocks(conf, zkpClient, null, parent, false, false);
    if (locks != null) {
      for (HiveLock lock : locks) {
        unlock(conf, zkpClient, lock, parent);
      }
    }
    zkpClient.close();
    zkpClient = null;
  } catch (Exception e) {
    LOG.error("Failed to release all locks: " + e.getMessage());
    throw new Exception(ErrorMsg.ZOOKEEPER_CLIENT_COULD_NOT_BE_INITIALIZED.getMsg());
  }
}
Code example origin: apache/hive
// Truncated excerpt from the unlock path; the surrounding lines were omitted by
// the source site.
HiveLockMode lMode = hiveLock.getHiveLockMode();
HiveLockObject obj = zLock.getHiveLockObject();
String name = getLastObjectName(parent, obj);
try {
Code example origin: apache/hive
/** Remove all redundant nodes **/
private void removeAllRedundantNodes() {
  try {
    checkRedundantNode("/" + parent);
  } catch (Exception e) {
    LOG.warn("Exception while removing all redundant nodes", e);
  }
}
Code example origin: org.apache.hadoop.hive/hive-exec
/** Remove all redundant nodes **/
private void removeAllRedundantNodes() {
  try {
    renewZookeeperInstance(sessionTimeout, quorumServers);
    checkRedundantNode("/" + parent);
  } catch (Exception e) {
    // ignore all errors
  }
}
Code example origin: apache/hive
@Override
public List<HiveLock> getLocks(boolean verifyTablePartition, boolean fetchData)
    throws LockException {
  return getLocks(ctx.getConf(), null, parent, verifyTablePartition, fetchData);
}
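Together with the key-specific overload shown earlier, this method lets a caller enumerate the locks currently registered under the namespace. A brief sketch, assuming mgr was initialized via setContext(new HiveLockManagerCtx(conf)) as in the test example above; the printing is purely illustrative and not part of the original article.

// List the locks currently held under the configured namespace.
static void printCurrentLocks(ZooKeeperHiveLockManager mgr) throws LockException {
  List<HiveLock> current = mgr.getLocks(true /* verifyTablePartition */, false /* fetchData */);
  if (current != null) {
    for (HiveLock l : current) {
      System.out.println(l.getHiveLockObject().getName() + " -> " + l.getHiveLockMode());
    }
  }
}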