This article collects code examples for the Java class org.apache.solr.common.cloud.ZkStateReader and shows how the class is used in practice. The examples are extracted from selected projects found on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of the ZkStateReader class:

Package path: org.apache.solr.common.cloud.ZkStateReader
Class name: ZkStateReader
Description: none provided
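Before the project excerpts below, here is a minimal sketch of the pattern almost all of them share: obtain the ZkStateReader from a CloudSolrClient and read the cached ClusterState. This is only a sketch, not code from any of the projects below — the ZooKeeper address (localhost:2181) and the collection name (gettingstarted) are placeholders, a running SolrCloud cluster is assumed, and the Builder API shown is the solr-solrj 6.x/7.x one.

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.ZkStateReader;

public class ZkStateReaderSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper address; point this at your own ensemble.
        try (CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("localhost:2181")
                .build()) {
            client.connect(); // must be called before using the ZkStateReader
            ZkStateReader reader = client.getZkStateReader();
            ClusterState state = reader.getClusterState();
            System.out.println("Live nodes: " + state.getLiveNodes());

            // getCollectionOrNull returns null for unknown collections
            // instead of throwing, which makes existence checks simple.
            DocCollection coll = state.getCollectionOrNull("gettingstarted");
            System.out.println("Collection exists: " + (coll != null));
        }
    }
}
```

Most of the excerpts below follow this shape and then branch out: iterating getLiveNodes(), mapping node names to URLs with getBaseUrlForNodeName(), or inspecting a collection's slices and replicas.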
Code example source: thinkaurelius/titan

ZkStateReader zkStateReader = server.getZkStateReader();
try {
    boolean cont = true;
    zkStateReader.updateClusterState(true);
    ClusterState clusterState = zkStateReader.getClusterState();
    Map<String, Slice> slices = clusterState.getSlicesMap(collection);
    // Guava's checkNotNull takes the reference first, then the message.
    Preconditions.checkNotNull(slices, "Could not find collection: " + collection);
    // ... (loop over each slice's replicas elided in this excerpt; shard is its loop variable)
    final Replica.State state = Replica.State.getState(shard.getValue().getStr(ZkStateReader.STATE_PROP));
    if ((state == Replica.State.RECOVERING || state == Replica.State.DOWN)
            && clusterState.liveNodesContain(shard.getValue().getStr(ZkStateReader.NODE_NAME_PROP))) {
        sawLiveRecovering = true;
    }
    // ... (remainder of the try block elided in this excerpt)
Code example source: BroadleafCommerce/BroadleafCommerce

CloudSolrClient reindexCloudClient = (CloudSolrClient) solrConfiguration.getReindexServer();
try {
    primaryCloudClient.connect();
    Aliases aliases = primaryCloudClient.getZkStateReader().getAliases();
    Map<String, String> aliasCollectionMap = aliases.getCollectionAliasMap();
    if (aliasCollectionMap == null || !aliasCollectionMap.containsKey(primaryCloudClient.getDefaultCollection())
            || !aliasCollectionMap.containsKey(reindexCloudClient.getDefaultCollection())) {
        throw new IllegalStateException("Could not determine the PRIMARY or REINDEX " /* message truncated in the original excerpt */);
    }
    // ... (remainder of the try block elided in this excerpt)
Code example source: thinkaurelius/titan

/**
 * Checks if the collection has already been created in Solr.
 */
private static boolean checkIfCollectionExists(CloudSolrClient server, String collection) throws KeeperException, InterruptedException {
    ZkStateReader zkStateReader = server.getZkStateReader();
    zkStateReader.updateClusterState(true);
    ClusterState clusterState = zkStateReader.getClusterState();
    return clusterState.getCollectionOrNull(collection) != null;
}
Code example source: thinkaurelius/titan

@Override
public void clearStorage() throws BackendException {
    try {
        if (mode != Mode.CLOUD) throw new UnsupportedOperationException("Operation only supported for SolrCloud");
        logger.debug("Clearing storage from Solr: {}", solrClient);
        ZkStateReader zkStateReader = ((CloudSolrClient) solrClient).getZkStateReader();
        zkStateReader.updateClusterState(true);
        ClusterState clusterState = zkStateReader.getClusterState();
        for (String collection : clusterState.getCollections()) {
            logger.debug("Clearing collection [{}] in Solr", collection);
            UpdateRequest deleteAll = newUpdateRequest();
            deleteAll.deleteByQuery("*:*");
            solrClient.request(deleteAll, collection);
        }
    } catch (SolrServerException e) {
        logger.error("Unable to clear storage from index due to server error on Solr.", e);
        throw new PermanentBackendException(e);
    } catch (IOException e) {
        logger.error("Unable to clear storage from index due to low-level I/O error.", e);
        throw new PermanentBackendException(e);
    } catch (Exception e) {
        logger.error("Unable to clear storage from index due to general error.", e);
        throw new PermanentBackendException(e);
    }
}
Code example source: org.apache.solr/solr-solrj

try {
    while (!success && System.nanoTime() < timeout) {
        success = true;
        ClusterState clusterState = zkStateReader.getClusterState();
        if (clusterState != null) {
            Map<String, DocCollection> collections = null;
            if (collection != null) {
                collections = Collections.singletonMap(collection, clusterState.getCollection(collection));
            } else {
                collections = clusterState.getCollectionsMap();
            }
            // ... (per-collection replica checks elided in this excerpt)
            for (Replica replica : replicas) {
                boolean live = clusterState.liveNodesContain(replica.getNodeName());
                if (live) {
                    // ... (elided)
                }
            }
        }
        // ... (sleep between retries elided)
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new SolrException(ErrorCode.SERVER_ERROR, "Interrupted");
}
Code example source: com.hynnet/solr-solrj

// The method signature is truncated in the original excerpt.
        throws SolrServerException, IOException {
    connect(); // important to call this before you start working with the ZkStateReader
    boolean isAdmin = ADMIN_PATHS.contains(request.getPath());
    if (collection != null && !isAdmin) { // don't do _stateVer_ checking for admin requests
        Set<String> requestedCollectionNames = getCollectionNames(getZkStateReader().getClusterState(), collection);
        // ... (loop over requestedCollectionNames elided; requestedCollection is its loop variable)
        DocCollection coll = getDocCollection(getZkStateReader().getClusterState(), requestedCollection, null);
        int collVer = coll.getZNodeVersion();
        if (coll.getStateFormat() > 1) {
            // ... (state-version tracking elided)
        }
    }
    resp = sendRequest(request, collection);
    // ... (elided)
    resp.remove(resp.size() - 1);
    Map invalidStates = (Map) o;
    for (Object invalidEntries : invalidStates.entrySet()) {
        Map.Entry e = (Map.Entry) invalidEntries;
        getDocCollection(getZkStateReader().getClusterState(), (String) e.getKey(), (Integer) e.getValue());
    }
    // ... (exception handling elided)
    Throwable rootCause = SolrException.getRootCause(exc);
    int errorCode = rootCause instanceof SolrException ?
            ((SolrException) rootCause).code() : SolrException.ErrorCode.UNKNOWN.code;
    if (wasCommError) { // full retry condition elided in this excerpt
        for (DocCollection ext : requestedCollections) {
            DocCollection latestStateFromZk = getDocCollection(zkStateReader.getClusterState(), ext.getName(), null);
            if (latestStateFromZk.getZNodeVersion() != ext.getZNodeVersion()) {
                // ... (stale-state invalidation elided)
Code example source: com.hynnet/solr-solrj

Aliases aliases = zkStateReader.getAliases();
if (aliases != null) {
    Map<String, String> collectionAliases = aliases.getCollectionAliasMap();
    // ... (alias resolution elided in this excerpt)
}
// ... (elided)
DocCollection col = getDocCollection(clusterState, collection, null);
Map<String, List<String>> urlMap = buildUrlMap(col);
if (urlMap == null) {
    // ... (fallback to a non-routed request elided)
}
NamedList<Throwable> exceptions = new NamedList<>();
NamedList<NamedList> shardResponses = new NamedList<>();
// ... (per-route request submission elided; entry iterates the submitted futures)
final Future<NamedList<?>> responseFuture = entry.getValue();
try {
    shardResponses.add(url, responseFuture.get());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    // ... (elided)
}
// ... (non-routable request path elided)
try {
    shardResponses.add(urlList.get(0), rsp.getResponse());
} catch (Exception e) {
    throw new SolrException(ErrorCode.SERVER_ERROR, urlList.get(0), e);
}
RouteResponse rr = condenseResponse(shardResponses, (long) ((end - start) / 1000000));
rr.setRouteResponses(shardResponses);
rr.setRoutes(routes);
Code example source: com.hynnet/solr-solrj

public Map getClusterProps() {
    Map result = null;
    try {
        if (getZkClient().exists(ZkStateReader.CLUSTER_PROPS, true)) {
            result = (Map) Utils.fromJSON(getZkClient().getData(ZkStateReader.CLUSTER_PROPS, null, new Stat(), true));
        } else {
            result = new LinkedHashMap();
        }
        return result;
    } catch (Exception e) {
        throw new SolrException(ErrorCode.SERVER_ERROR, "Error reading cluster properties", e);
    }
}
Code example source: org.apache.solr/solr-solrj

@Override
public String getDatabaseProductVersion() throws SQLException {
    // Returns the version for the first live node in the Solr cluster.
    SolrQuery sysQuery = new SolrQuery();
    sysQuery.setRequestHandler("/admin/info/system");

    CloudSolrClient cloudSolrClient = this.connection.getClient();
    Set<String> liveNodes = cloudSolrClient.getZkStateReader().getClusterState().getLiveNodes();
    SolrClient solrClient = null;
    for (String node : liveNodes) {
        try {
            String nodeURL = cloudSolrClient.getZkStateReader().getBaseUrlForNodeName(node);
            solrClient = new Builder(nodeURL).build();
            QueryResponse rsp = solrClient.query(sysQuery);
            return String.valueOf(((SimpleOrderedMap) rsp.getResponse().get("lucene")).get("solr-spec-version"));
        } catch (SolrServerException | IOException ignore) {
            return "";
        } finally {
            if (solrClient != null) {
                try {
                    solrClient.close();
                } catch (IOException ignore) {
                    // Don't worry about failing to close the Solr client
                }
            }
        }
    }

    // If no version found just return empty string
    return "";
}
Code example source: com.hynnet/solr-solrj

protected NamedList<Object> sendRequest(SolrRequest request, String collection)
        throws SolrServerException, IOException {
    connect();

    ClusterState clusterState = zkStateReader.getClusterState();
    NamedList<Object> response = directUpdate((AbstractUpdateRequest) request, collection, clusterState);
    if (response != null) {
        return response;
    }
    // ... (admin-request branch elided in this excerpt)
    Set<String> liveNodes = clusterState.getLiveNodes();
    for (String liveNode : liveNodes) {
        theUrlList.add(zkStateReader.getBaseUrlForNodeName(liveNode));
    }
    // ... (elided)
    Set<String> collectionNames = getCollectionNames(clusterState, collection);
    if (collectionNames.size() == 0) {
        throw new SolrException(ErrorCode.BAD_REQUEST,
                "Could not find collection: " + collection);
    }
    // ... (slice gathering elided)
    ClientUtils.addSlices(slices, collectionName, routeSlices, true);
    // ... (URL-list construction and load-balanced request elided)
    if (s != null) collectionStateCache.remove(s);
    throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
            "Could not find a healthy node to handle the request.");
Code example source: com.hynnet/solr-solrj

// ... (property-name validation elided in this excerpt)
throw new SolrException(ErrorCode.BAD_REQUEST, "Not a known cluster property " + propertyName);
// ... (elided)
Stat s = new Stat();
try {
    if (getZkClient().exists(CLUSTER_PROPS, true)) {
        int v = 0;
        Map properties = (Map) Utils.fromJSON(getZkClient().getData(CLUSTER_PROPS, null, s, true));
        if (propertyValue == null) {
            // a null value removes the property
            properties.remove(propertyName);
            getZkClient().setData(CLUSTER_PROPS, Utils.toJSON(properties), s.getVersion(), true);
        } else {
            properties.put(propertyName, propertyValue);
            getZkClient().setData(CLUSTER_PROPS, Utils.toJSON(properties), s.getVersion(), true);
        }
    } else {
        Map properties = new LinkedHashMap();
        properties.put(propertyName, propertyValue);
        getZkClient().create(CLUSTER_PROPS, Utils.toJSON(properties), CreateMode.PERSISTENT, true);
    }
} catch (Exception ex) {
    log.error("Error updating path " + CLUSTER_PROPS, ex);
    throw new SolrException(ErrorCode.SERVER_ERROR, "Error updating cluster property " + propertyName, ex);
}
Code example source: org.apache.solr/solr-solrj

private void getCheckpoints() throws IOException {
    this.checkpoints = new HashMap<>();
    ZkStateReader zkStateReader = cloudSolrClient.getZkStateReader();
    Slice[] slices = CloudSolrStream.getSlices(this.collection, zkStateReader, false);
    ClusterState clusterState = zkStateReader.getClusterState();
    Set<String> liveNodes = clusterState.getLiveNodes();
    for (Slice slice : slices) {
        String sliceName = slice.getName();
        long checkpoint;
        if (initialCheckpoint > -1) {
            checkpoint = initialCheckpoint;
        } else {
            checkpoint = getCheckpoint(slice, liveNodes);
        }
        this.checkpoints.put(sliceName, checkpoint);
    }
}
Code example source: com.hynnet/solr-solrj

public static DocCollection getCollectionLive(ZkStateReader zkStateReader,
        String coll) {
    String collectionPath = getCollectionPath(coll);
    try {
        Stat stat = new Stat();
        byte[] data = zkStateReader.getZkClient().getData(collectionPath, null, stat, true);
        ClusterState state = ClusterState.load(stat.getVersion(), data,
                Collections.<String>emptySet(), collectionPath);
        ClusterState.CollectionRef collectionRef = state.getCollectionStates().get(coll);
        return collectionRef == null ? null : collectionRef.get();
    } catch (KeeperException.NoNodeException e) {
        log.warn("No node available : " + collectionPath, e);
        return null;
    } catch (KeeperException e) {
        throw new SolrException(ErrorCode.BAD_REQUEST,
                "Could not load collection from ZK:" + coll, e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new SolrException(ErrorCode.BAD_REQUEST,
                "Could not load collection from ZK:" + coll, e);
    }
}
Code example source: org.apache.solr/solr-test-framework

try (ZkStateReader zk = new ZkStateReader(zkServer.getZkAddress(), AbstractZkTestCase.TIMEOUT,
        AbstractZkTestCase.TIMEOUT)) {
    zk.createClusterStateWatchersAndUpdate();
    clusterState = zk.getClusterState();
    final DocCollection docCollection = clusterState.getCollectionOrNull(DEFAULT_COLLECTION);
    slices = (docCollection != null) ? docCollection.getSlicesMap() : null;
    // ... (error reporting elided; the message references
    //      DEFAULT_COLLECTION + " in " + clusterState.getCollectionsMap().keySet())
}
// ... (elided)
ZkStateReader zkStateReader = cloudClient.getZkStateReader();
long count = 0;
// ... (loop over the cluster's jetties elided; cjetty is its loop variable)
final Replica.State currentState = Replica.State.getState(cjetty.info.getStr(ZkStateReader.STATE_PROP));
if (currentState == Replica.State.ACTIVE
        && zkStateReader.getClusterState().liveNodesContain(cjetty.info.getStr(ZkStateReader.NODE_NAME_PROP))) {
    SolrQuery query = new SolrQuery("*:*");
    query.set("distrib", false);
    // ... (per-replica count query elided)
}
// ... (elided)
SolrQuery query = new SolrQuery("*:*");
assertEquals("Doc Counts do not add up", controlCount,
        cloudClient.query(query).getResults().getNumFound());
Code example source: org.apache.jackrabbit/oak-solr-core

private void createCollectionIfNeeded(CloudSolrClient cloudSolrServer) throws SolrServerException {
    String solrCollection = remoteSolrServerConfiguration.getSolrCollection();
    ZkStateReader zkStateReader = cloudSolrServer.getZkStateReader();
    SolrZkClient zkClient = zkStateReader.getZkClient();
    log.debug("creating {} collection if needed", solrCollection);
    try {
        if (zkClient.isConnected() && !zkClient.exists("/configs/" + solrCollection, true)) {
            String solrConfDir = remoteSolrServerConfiguration.getSolrConfDir();
            Path dir;
            // ... (resolution of dir from solrConfDir elided in this excerpt)
            cloudSolrServer.uploadConfig(dir, solrCollection);
            // ... (collection-creation request construction elided; req is built here)
            cloudSolrServer.request(req);
        }
    // ... (catch clauses and closing braces elided in this excerpt)
Code example source: com.ngdata/hbase-indexer-common

if (verbose) System.out.println("-");
boolean sawLiveRecovering = false;
ClusterState clusterState = zkStateReader.getClusterState();
final DocCollection docCollection = clusterState.getCollectionOrNull(collection);
if (docCollection == null) throw new IllegalStateException("Could not find collection:" + collection);
Map<String, Slice> slices = docCollection.getSlicesMap();
// ... (per-shard loop elided; in verbose mode it prints
//      shard.getValue().getStr(ZkStateReader.STATE_PROP) + " live:"
//      + clusterState.liveNodesContain(shard.getValue().getNodeName()))
final Replica.State state = shard.getValue().getState();
if ((state == Replica.State.RECOVERING || state == Replica.State.DOWN || state == Replica.State.RECOVERY_FAILED)
        && clusterState.liveNodesContain(shard.getValue().getStr(ZkStateReader.NODE_NAME_PROP))) {
    sawLiveRecovering = true;
}
// ... (wait/retry loop elided in this excerpt; on timeout it gives up)
Diagnostics.logThreadDumps("Gave up waiting for recovery to finish. THREAD DUMP:");
try {
    zkStateReader.getZkClient().printLayoutToStdOut();
} catch (KeeperException | InterruptedException e) {
    throw new RuntimeException(e);
}
Code example source: org.apache.solr/solr-solrj

final Stat stat = getZkClient().setData(ALIASES, modAliasesJson, curAliases.getZNodeVersion(), true);
setIfNewer(Aliases.fromJSON(modAliasesJson, stat.getVersion()));
return;
// ... (retry/timeout handling elided in this excerpt)
throw new SolrException(ErrorCode.SERVER_ERROR, "Timed out trying to update aliases! " +
        "Either zookeeper or this node may be overloaded.");
// ... (elided)
throw new SolrException(ErrorCode.SERVER_ERROR, "Too many successive version failures trying to update aliases");
Code example source: com.cloudera.search/search-mr

public DocCollection extractDocCollection(String zkHost, String collection) {
    if (collection == null) {
        throw new IllegalArgumentException("collection must not be null");
    }
    SolrZkClient zkClient = getZkClient(zkHost);
    try (ZkStateReader zkStateReader = new ZkStateReader(zkClient)) {
        try {
            // first check for alias
            collection = checkForAlias(zkClient, collection);
            zkStateReader.createClusterStateWatchersAndUpdate();
        } catch (Exception e) {
            throw new IllegalArgumentException("Cannot find expected information for SolrCloud in ZooKeeper: " + zkHost, e);
        }
        try {
            return zkStateReader.getClusterState().getCollection(collection);
        } catch (SolrException e) {
            throw new IllegalArgumentException("Cannot find collection '" + collection + "' in ZooKeeper: " + zkHost, e);
        }
    } finally {
        zkClient.close();
    }
}
Code example source: org.apache.solr/solr-test-framework

ZkStateReader zkr = cloudClient.getZkStateReader();
zkr.forceUpdateCollection(testCollectionName); // force the state to be fresh
ClusterState cs = zkr.getClusterState();
Collection<Slice> slices = cs.getCollection(testCollectionName).getActiveSlices();
assertTrue(slices.size() == shards);
boolean allReplicasUp = false;
long maxWaitMs = maxWaitSecs * 1000L;
Replica leader = null;
ZkShardTerms zkShardTerms = new ZkShardTerms(testCollectionName, shardId, cloudClient.getZkStateReader().getZkClient());
while (waitMs < maxWaitMs && !allReplicasUp) {
    cs = cloudClient.getZkStateReader().getClusterState();
    assertNotNull(cs);
    final DocCollection docCollection = cs.getCollectionOrNull(testCollectionName);
    assertNotNull("No collection found for " + testCollectionName, docCollection);
    Slice shard = docCollection.getSlice(shardId);
    // ... (per-replica state checks elided in this excerpt)
}
Code example source: org.apache.solr/solr-test-framework

protected int getTotalReplicas(String collection) {
    ZkStateReader zkStateReader = cloudClient.getZkStateReader();
    DocCollection coll = zkStateReader.getClusterState().getCollectionOrNull(collection);
    if (coll == null) return 0; // support for when collection hasn't been created yet
    int cnt = 0;
    for (Slice slices : coll.getSlices()) {
        cnt += slices.getReplicas().size();
    }
    return cnt;
}