This article collects Java code examples for the org.apache.hadoop.hive.thrift.ZooKeeperTokenStore class and shows how the class is used in practice. The examples are extracted from selected open-source projects found via platforms such as GitHub, Stack Overflow and Maven, so they carry real reference value and should be of some help. Details of the ZooKeeperTokenStore class are as follows:
Package path: org.apache.hadoop.hive.thrift.ZooKeeperTokenStore
Class name: ZooKeeperTokenStore
Description: ZooKeeper token store implementation.
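Before walking through the excerpts, here is a minimal, hypothetical sketch of how this kind of token store might be wired up programmatically. It follows the older shims API shown in the setConf example further down, where setConf() reads the ZooKeeper settings and immediately calls init(); it also assumes the HadoopThriftAuthBridge20S.Server constants are publicly accessible, and the connection string, znode path and ACL values are purely illustrative, not a definitive configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S;
import org.apache.hadoop.hive.thrift.ZooKeeperTokenStore;

public class TokenStoreSetupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // ZooKeeper ensemble and root znode for delegation tokens (illustrative values).
    conf.set(HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_CONNECT_STR,
        "zk1:2181,zk2:2181,zk3:2181");
    conf.set(HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ZNODE,
        "/hive/cluster/delegation");
    // Optional ACLs for the znodes the store creates, in the format parseACLs() expects.
    conf.set(HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ACL,
        "sasl:hive/host1@MY.DOMAIN:cdrwa");

    ZooKeeperTokenStore store = new ZooKeeperTokenStore();
    // In the version excerpted below, setConf() reads these keys and calls init(),
    // which opens a ZooKeeper session and ensures the .../keys and .../tokens paths exist.
    store.setConf(conf);
  }
}

In a real deployment these properties are normally set in hive-site.xml rather than in code; the programmatic form is used here only to make the flow through setConf() and init() explicit.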
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
private void init() {
  if (this.zkConnectString == null) {
    throw new IllegalStateException("Not initialized");
  }
  if (this.zkSession != null) {
    try {
      this.zkSession.close();
    } catch (InterruptedException ex) {
      LOGGER.warn("Failed to close existing session.", ex);
    }
  }
  ZooKeeper zk = getSession();
  try {
    ensurePath(zk, rootNode + NODE_KEYS, newNodeAcl);
    ensurePath(zk, rootNode + NODE_TOKENS, newNodeAcl);
  } catch (Exception e) {
    throw new TokenStoreException("Failed to validate token path.", e);
  }
}
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
private Map<Integer, byte[]> getAllKeys() throws KeeperException,
    InterruptedException {
  String masterKeyNode = rootNode + NODE_KEYS;
  ZooKeeper zk = getSession();
  List<String> nodes = zk.getChildren(masterKeyNode, false);
  Map<Integer, byte[]> result = new HashMap<Integer, byte[]>();
  for (String node : nodes) {
    byte[] data = zk.getData(masterKeyNode + "/" + node, false, null);
    if (data != null) {
      result.put(getSeq(node), data);
    }
  }
  return result;
}
Code example source: org.apache.hive.shims/hive-shims-common-secure
@Override
public boolean removeToken(DelegationTokenIdentifier tokenIdentifier) {
  String tokenPath = getTokenPath(tokenIdentifier);
  zkDelete(tokenPath);
  return true;
}
Code example source: com.facebook.presto.hive/hive-apache
private Map<Integer, byte[]> getAllKeys() throws KeeperException, InterruptedException {
  String masterKeyNode = rootNode + NODE_KEYS;
  // get children of key node
  List<String> nodes = zkGetChildren(masterKeyNode);
  // read each child node, add to results
  Map<Integer, byte[]> result = new HashMap<Integer, byte[]>();
  for (String node : nodes) {
    String nodePath = masterKeyNode + "/" + node;
    byte[] data = zkGetData(nodePath);
    if (data != null) {
      result.put(getSeq(node), data);
    }
  }
  return result;
}
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
@Override
public boolean removeToken(DelegationTokenIdentifier tokenIdentifier) {
  try {
    ZooKeeper zk = getSession();
    zk.delete(getTokenPath(tokenIdentifier), -1);
    return true;
  } catch (KeeperException.NoNodeException ex) {
    return false;
  } catch (KeeperException ex) {
    throw new TokenStoreException(ex);
  } catch (InterruptedException ex) {
    throw new TokenStoreException(ex);
  }
}
Code example source: com.github.hyukjinkwon.shims/hive-shims-common
String aclStr = conf.get(HadoopThriftAuthBridge.Server.DELEGATION_TOKEN_STORE_ZK_ACL, null);
if (StringUtils.isNotBlank(aclStr)) {
  this.newNodeAcl = parseACLs(aclStr);
}
try {
  setupJAASConfig(conf);
} catch (IOException e) {
  throw new TokenStoreException("Error setting up JAAS configuration for zookeeper client "
      + e.getMessage(), e);
}
initClientAndPaths();
Code example source: com.facebook.presto.hive/hive-apache
private List<String> zkGetChildren(String path) {
  CuratorFramework zk = getSession();
  try {
    return zk.getChildren().forPath(path);
  } catch (Exception e) {
    throw new TokenStoreException("Error getting children for " + path, e);
  }
}
Code example source: com.facebook.presto.hive/hive-apache
@Override
public DelegationTokenInformation getToken(DelegationTokenIdentifier tokenIdentifier) {
  byte[] tokenBytes = zkGetData(getTokenPath(tokenIdentifier));
  try {
    return HiveDelegationTokenSupport.decodeDelegationTokenInformation(tokenBytes);
  } catch (Exception ex) {
    throw new TokenStoreException("Failed to decode token", ex);
  }
}
Code example source: org.apache.hive.shims/hive-shims-common-secure
@Override
public String[] getMasterKeys() {
  try {
    Map<Integer, byte[]> allKeys = getAllKeys();
    String[] result = new String[allKeys.size()];
    int resultIdx = 0;
    for (byte[] keyBytes : allKeys.values()) {
      result[resultIdx++] = new String(keyBytes);
    }
    return result;
  } catch (KeeperException ex) {
    throw new TokenStoreException(ex);
  } catch (InterruptedException ex) {
    throw new TokenStoreException(ex);
  }
}
Code example source: com.github.hyukjinkwon.shims/hive-shims-common
/**
 * Parse comma separated list of ACL entries to secure generated nodes, e.g.
 * <code>sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa</code>
 * @param aclString
 * @return ACL list
 */
public static List<ACL> parseACLs(String aclString) {
  String[] aclComps = StringUtils.splitByWholeSeparator(aclString, ",");
  List<ACL> acl = new ArrayList<ACL>(aclComps.length);
  for (String a : aclComps) {
    if (StringUtils.isBlank(a)) {
      continue;
    }
    a = a.trim();
    // from ZooKeeperMain private method
    int firstColon = a.indexOf(':');
    int lastColon = a.lastIndexOf(':');
    if (firstColon == -1 || lastColon == -1 || firstColon == lastColon) {
      LOGGER.error(a + " does not have the form scheme:id:perm");
      continue;
    }
    ACL newAcl = new ACL();
    newAcl.setId(new Id(a.substring(0, firstColon), a.substring(
        firstColon + 1, lastColon)));
    newAcl.setPerms(getPermFromString(a.substring(lastColon + 1)));
    acl.add(newAcl);
  }
  return acl;
}
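As a quick, hypothetical illustration of the ACL format this method accepts, the sketch below feeds parseACLs() the example string from the Javadoc and prints each parsed entry; the surrounding class and output format are assumptions made only for demonstration.

import java.util.List;

import org.apache.hadoop.hive.thrift.ZooKeeperTokenStore;
import org.apache.zookeeper.data.ACL;

public class ParseAclsSketch {
  public static void main(String[] args) {
    // Comma-separated scheme:id:perm entries, as documented on parseACLs().
    String aclString = "sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa";
    List<ACL> acls = ZooKeeperTokenStore.parseACLs(aclString);
    for (ACL acl : acls) {
      // Each entry keeps its scheme ("sasl"), its id ("hive/host1@MY.DOMAIN") and the
      // numeric permission bits decoded from "cdrwa" by getPermFromString().
      System.out.println(acl.getId().getScheme() + ":" + acl.getId().getId()
          + " perms=" + acl.getPerms());
    }
  }
}

Note that entries which do not match scheme:id:perm are logged and skipped rather than failing the whole parse, which is convenient when the ACL string comes from user-editable configuration.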
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
@Override
public void setConf(Configuration conf) {
  if (conf == null) {
    throw new IllegalArgumentException("conf is null");
  }
  this.zkConnectString = conf.get(
      HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_CONNECT_STR, null);
  this.connectTimeoutMillis = conf.getLong(
      HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_CONNECT_TIMEOUTMILLIS, -1);
  this.rootNode = conf.get(
      HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ZNODE,
      HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ZNODE_DEFAULT);
  String csv = conf.get(HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ACL, null);
  if (StringUtils.isNotBlank(csv)) {
    this.newNodeAcl = parseACLs(csv);
  }
  init();
}
Code example source: com.facebook.presto.hive/hive-apache
private void initClientAndPaths() {
  if (this.zkSession != null) {
    this.zkSession.close();
  }
  try {
    ensurePath(rootNode + NODE_KEYS, newNodeAcl);
    ensurePath(rootNode + NODE_TOKENS, newNodeAcl);
  } catch (TokenStoreException e) {
    throw e;
  }
}
Code example source: com.github.hyukjinkwon.shims/hive-shims-common
private void setupJAASConfig(Configuration conf) throws IOException {
  if (!UserGroupInformation.getLoginUser().isFromKeytab()) {
    // The process has not logged in using keytab
    // this should be a test mode, can't use keytab to authenticate
    // with zookeeper.
    LOGGER.warn("Login is not from keytab");
    return;
  }
  String principal;
  String keytab;
  switch (serverMode) {
  case METASTORE:
    principal = getNonEmptyConfVar(conf, "hive.metastore.kerberos.principal");
    keytab = getNonEmptyConfVar(conf, "hive.metastore.kerberos.keytab.file");
    break;
  case HIVESERVER2:
    principal = getNonEmptyConfVar(conf, "hive.server2.authentication.kerberos.principal");
    keytab = getNonEmptyConfVar(conf, "hive.server2.authentication.kerberos.keytab");
    break;
  default:
    throw new AssertionError("Unexpected server mode " + serverMode);
  }
  Utils.setZookeeperClientKerberosJaasConfig(principal, keytab);
}
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
@Override
public boolean addToken(DelegationTokenIdentifier tokenIdentifier,
    DelegationTokenInformation token) {
  try {
    ZooKeeper zk = getSession();
    byte[] tokenBytes = HiveDelegationTokenSupport.encodeDelegationTokenInformation(token);
    String newNode = zk.create(getTokenPath(tokenIdentifier),
        tokenBytes, newNodeAcl, CreateMode.PERSISTENT);
    LOGGER.info("Added token: {}", newNode);
    return true;
  } catch (KeeperException.NodeExistsException ex) {
    return false;
  } catch (KeeperException ex) {
    throw new TokenStoreException(ex);
  } catch (InterruptedException ex) {
    throw new TokenStoreException(ex);
  }
}
Code example source: org.apache.hive.shims/hive-shims-common-secure
private Map<Integer, byte[]> getAllKeys() throws KeeperException, InterruptedException {
  String masterKeyNode = rootNode + NODE_KEYS;
  // get children of key node
  List<String> nodes = zkGetChildren(masterKeyNode);
  // read each child node, add to results
  Map<Integer, byte[]> result = new HashMap<Integer, byte[]>();
  for (String node : nodes) {
    String nodePath = masterKeyNode + "/" + node;
    byte[] data = zkGetData(nodePath);
    if (data != null) {
      result.put(getSeq(node), data);
    }
  }
  return result;
}
Code example source: org.apache.hive.shims/hive-shims-common-secure
String aclStr = conf.get(HadoopThriftAuthBridge20S.Server.DELEGATION_TOKEN_STORE_ZK_ACL, null);
if (StringUtils.isNotBlank(aclStr)) {
  this.newNodeAcl = parseACLs(aclStr);
}
try {
  setupJAASConfig(conf);
} catch (IOException e) {
  throw new TokenStoreException("Error setting up JAAS configuration for zookeeper client "
      + e.getMessage(), e);
}
initClientAndPaths();
Code example source: org.apache.hive.shims/hive-shims-common-secure
private byte[] zkGetData(String nodePath) {
  CuratorFramework zk = getSession();
  try {
    return zk.getData().forPath(nodePath);
  } catch (KeeperException.NoNodeException ex) {
    return null;
  } catch (Exception e) {
    throw new TokenStoreException("Error reading " + nodePath, e);
  }
}
Code example source: com.github.hyukjinkwon.shims/hive-shims-common
@Override
public DelegationTokenInformation getToken(DelegationTokenIdentifier tokenIdentifier) {
  byte[] tokenBytes = zkGetData(getTokenPath(tokenIdentifier));
  try {
    return HiveDelegationTokenSupport.decodeDelegationTokenInformation(tokenBytes);
  } catch (Exception ex) {
    throw new TokenStoreException("Failed to decode token", ex);
  }
}
Code example source: com.github.hyukjinkwon.shims/hive-shims-common
@Override
public String[] getMasterKeys() {
  try {
    Map<Integer, byte[]> allKeys = getAllKeys();
    String[] result = new String[allKeys.size()];
    int resultIdx = 0;
    for (byte[] keyBytes : allKeys.values()) {
      result[resultIdx++] = new String(keyBytes);
    }
    return result;
  } catch (KeeperException ex) {
    throw new TokenStoreException(ex);
  } catch (InterruptedException ex) {
    throw new TokenStoreException(ex);
  }
}
Code example source: org.spark-project.hive.shims/hive-shims-common-secure
/**
 * Parse comma separated list of ACL entries to secure generated nodes, e.g.
 * <code>sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa</code>
 * @param aclString
 * @return ACL list
 */
public static List<ACL> parseACLs(String aclString) {
  String[] aclComps = StringUtils.splitByWholeSeparator(aclString, ",");
  List<ACL> acl = new ArrayList<ACL>(aclComps.length);
  for (String a : aclComps) {
    if (StringUtils.isBlank(a)) {
      continue;
    }
    a = a.trim();
    // from ZooKeeperMain private method
    int firstColon = a.indexOf(':');
    int lastColon = a.lastIndexOf(':');
    if (firstColon == -1 || lastColon == -1 || firstColon == lastColon) {
      LOGGER.error(a + " does not have the form scheme:id:perm");
      continue;
    }
    ACL newAcl = new ACL();
    newAcl.setId(new Id(a.substring(0, firstColon), a.substring(
        firstColon + 1, lastColon)));
    newAcl.setPerms(getPermFromString(a.substring(lastColon + 1)));
    acl.add(newAcl);
  }
  return acl;
}