This article collects Java code examples for the io.druid.indexing.overlord.ZkWorker class, showing how the class is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of the ZkWorker class:

Package: io.druid.indexing.overlord
Class name: ZkWorker

Holds information about a worker and a listener for task status changes associated with the worker.
Code example source: io.druid/druid-indexing-service

public ImmutableWorkerInfo toImmutable()
{
  return new ImmutableWorkerInfo(
      worker.get(),
      getCurrCapacityUsed(),
      getAvailabilityGroups(),
      getRunningTaskIds(),
      lastCompletedTaskTime.get(),
      blacklistedUntil.get()
  );
}
Code example source: com.n3twork.druid/druid-indexing-service

public boolean isRunningTask(String taskId)
{
  return getRunningTasks().containsKey(taskId);
}
Code example source: com.n3twork.druid/druid-indexing-service

@Override
public String apply(ZkWorker input)
{
  return input.getWorker().getIp();
}
Code example source: com.n3twork.druid/druid-indexing-service

@Override
public int compare(ZkWorker zkWorker, ZkWorker zkWorker2)
{
  int retVal = -Ints.compare(zkWorker.getCurrCapacityUsed(), zkWorker2.getCurrCapacityUsed());
  if (retVal == 0) {
    retVal = zkWorker.getWorker().getHost().compareTo(zkWorker2.getWorker().getHost());
  }
  return retVal;
}
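The comparator above orders workers by descending used capacity, breaking ties by host name in ascending order. A simplified stand-in (a plain record replaces ZkWorker, keeping only the two fields the comparator reads; names here are illustrative, not Druid's API) shows the resulting order:

```java
import java.util.Comparator;
import java.util.List;

// Simplified stand-in for the ordering in the snippet above.
public class WorkerOrdering {
    public record W(String host, int capacityUsed) {}

    // Most-loaded worker first; ties broken by host name, ascending.
    public static final Comparator<W> BY_LOAD_DESC_THEN_HOST =
        Comparator.comparingInt(W::capacityUsed).reversed()
                  .thenComparing(W::host);

    public static List<String> sortedHosts(List<W> workers) {
        return workers.stream().sorted(BY_LOAD_DESC_THEN_HOST).map(W::host).toList();
    }
}
```

With three workers at loads 3, 1, and 1, the most-loaded host sorts first and the tied pair falls back to host-name order.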
Code example source: com.n3twork.druid/druid-indexing-service

@Override
public boolean apply(ZkWorker worker)
{
  final boolean itHasBeenAWhile =
      System.currentTimeMillis() - worker.getLastCompletedTaskTime().getMillis()
      >= config.getWorkerIdleTimeout().toStandardDuration().getMillis();
  return worker.getRunningTasks().isEmpty() && (itHasBeenAWhile || !isValidWorker.apply(worker));
}
};
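The predicate above marks a worker as removable only when it has no running tasks and has either been idle past the configured timeout or is no longer valid. The timeout part reduces to simple millisecond arithmetic; a minimal sketch (method and parameter names are illustrative, not Druid's API):

```java
// Standalone sketch of the idle-timeout arithmetic in the predicate above.
public class IdleCheck {
    // True when at least idleTimeoutMillis have passed since the last completed task.
    public static boolean hasBeenAWhile(long nowMillis, long lastCompletedMillis, long idleTimeoutMillis) {
        return nowMillis - lastCompletedMillis >= idleTimeoutMillis;
    }
}
```

Note the comparison is `>=`, so a worker exactly at the timeout boundary already counts as idle.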
Code example source: io.druid/druid-indexing-service

  log.info(
      "Worker[%s] completed task[%s] with status[%s]",
      zkWorker.getWorker().getHost(),
      taskStatus.getId(),
      taskStatus.getStatusCode()
  );
  zkWorker.setLastCompletedTaskTime(DateTimes.nowUtc());
} else {
  log.info("Workerless task[%s] completed with status[%s]", taskStatus.getId(), taskStatus.getStatusCode());
}
// A success resets the failure streak and may lift an existing blacklist entry.
if (taskStatus.isSuccess()) {
  zkWorker.resetContinuouslyFailedTasksCount();
  if (blackListedWorkers.remove(zkWorker)) {
    zkWorker.setBlacklistedUntil(null);
    log.info("[%s] removed from blacklist because a task finished with SUCCESS", zkWorker.getWorker());
  }
} else {
  zkWorker.incrementContinuouslyFailedTasksCount();
}
// Too many consecutive failures blacklists the worker, as long as the
// blacklist stays under the configured fraction of the fleet.
if (zkWorker.getContinuouslyFailedTasksCount() > config.getMaxRetriesBeforeBlacklist() &&
    blackListedWorkers.size() <= zkWorkers.size() * (config.getMaxPercentageBlacklistWorkers() / 100.0) - 1) {
  zkWorker.setBlacklistedUntil(DateTimes.nowUtc().plus(config.getWorkerBlackListBackoffTime()));
  if (blackListedWorkers.add(zkWorker)) {
    log.info(
        "Blacklisting [%s] until [%s] after [%,d] failed tasks in a row.",
        zkWorker.getWorker(),
        zkWorker.getBlacklistedUntil(),
        zkWorker.getContinuouslyFailedTasksCount()
    );
  }
}
Code example source: com.n3twork.druid/druid-indexing-service

final PathChildrenCache statusCache = pathChildrenCacheFactory.make(cf, workerStatusPath);
final SettableFuture<ZkWorker> retVal = SettableFuture.create();
final ZkWorker zkWorker = new ZkWorker(
    worker,
    statusCache,
    // ... (snippet truncated at the source)
);
zkWorker.addListener(
    new PathChildrenCacheListener()
    // ... (listener body truncated at the source)
);
zkWorker.start();
return retVal;
Code example source: io.druid/druid-indexing-service

/**
 * A task will be run only if there is no current knowledge in the RemoteTaskRunner of the task.
 *
 * @param task task to run
 */
@Override
public ListenableFuture<TaskStatus> run(final Task task)
{
  final RemoteTaskRunnerWorkItem completeTask, runningTask, pendingTask;
  if ((pendingTask = pendingTasks.get(task.getId())) != null) {
    log.info("Assigned a task[%s] that is already pending, not doing anything", task.getId());
    return pendingTask.getResult();
  } else if ((runningTask = runningTasks.get(task.getId())) != null) {
    ZkWorker zkWorker = findWorkerRunningTask(task.getId());
    if (zkWorker == null) {
      log.warn("Told to run task[%s], but no worker has started running it yet.", task.getId());
    } else {
      log.info("Task[%s] already running on %s.", task.getId(), zkWorker.getWorker().getHost());
      TaskAnnouncement announcement = zkWorker.getRunningTasks().get(task.getId());
      if (announcement.getTaskStatus().isComplete()) {
        taskComplete(runningTask, zkWorker, announcement.getTaskStatus());
      }
    }
    return runningTask.getResult();
  } else if ((completeTask = completeTasks.get(task.getId())) != null) {
    return completeTask.getResult();
  } else {
    return addPendingTask(task).getResult();
  }
}
Code example source: com.n3twork.druid/druid-indexing-service

public boolean canRunTask(Task task)
{
  return (worker.getCapacity() - getCurrCapacityUsed() >= task.getTaskResource().getRequiredCapacity()
          && !getAvailabilityGroups().contains(task.getTaskResource().getAvailabilityGroup()));
}
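canRunTask admits a task only if the worker's free capacity covers the task's requirement and the task's availability group is not already occupied on that worker. The same two-part check, sketched standalone (parameter names are illustrative; the real method reads these values off the Worker and Task objects):

```java
import java.util.Set;

// Standalone sketch of the two conditions in ZkWorker.canRunTask.
public class CapacityCheck {
    public static boolean canRunTask(
        int workerCapacity,          // worker.getCapacity()
        int currCapacityUsed,        // getCurrCapacityUsed()
        Set<String> groupsInUse,     // getAvailabilityGroups()
        int requiredCapacity,        // task's required capacity
        String availabilityGroup     // task's availability group
    ) {
        // Enough free slots AND the task's availability group not already in use.
        return workerCapacity - currCapacityUsed >= requiredCapacity
            && !groupsInUse.contains(availabilityGroup);
    }
}
```

The availability-group clause is what prevents two tasks from the same group from landing on one worker even when capacity would allow it.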
Code example source: com.n3twork.druid/druid-indexing-service

private void taskComplete(
    RemoteTaskRunnerWorkItem taskRunnerWorkItem,
    ZkWorker zkWorker,
    TaskStatus taskStatus
)
{
  Preconditions.checkNotNull(taskRunnerWorkItem, "taskRunnerWorkItem");
  Preconditions.checkNotNull(zkWorker, "zkWorker");
  Preconditions.checkNotNull(taskStatus, "taskStatus");
  log.info(
      "Worker[%s] completed task[%s] with status[%s]",
      zkWorker.getWorker().getHost(),
      taskStatus.getId(),
      taskStatus.getStatusCode()
  );
  // Worker is done with this task
  zkWorker.setLastCompletedTaskTime(new DateTime());
  // Move from running -> complete
  completeTasks.put(taskStatus.getId(), taskRunnerWorkItem);
  runningTasks.remove(taskStatus.getId());
  // Notify interested parties
  final ListenableFuture<TaskStatus> result = taskRunnerWorkItem.getResult();
  if (result != null) {
    ((SettableFuture<TaskStatus>) result).set(taskStatus);
  }
}
Code example source: io.druid/druid-indexing-service

public ZkWorker findWorkerRunningTask(String taskId)
{
  for (ZkWorker zkWorker : zkWorkers.values()) {
    if (zkWorker.isRunningTask(taskId)) {
      return zkWorker;
    }
  }
  return null;
}
Code example source: io.druid/druid-indexing-service

private boolean shouldRemoveNodeFromBlackList(ZkWorker zkWorker)
{
  if (blackListedWorkers.size() > zkWorkers.size() * (config.getMaxPercentageBlacklistWorkers() / 100.0)) {
    log.info(
        "Removing [%s] from blacklist because percentage of blacklisted workers exceeds [%d]",
        zkWorker.getWorker(),
        config.getMaxPercentageBlacklistWorkers()
    );
    return true;
  }
  long remainingMillis = zkWorker.getBlacklistedUntil().getMillis() - getCurrentTimeMillis();
  if (remainingMillis <= 0) {
    log.info("Removing [%s] from blacklist because backoff time elapsed", zkWorker.getWorker());
    return true;
  }
  log.info("[%s] still blacklisted for [%,ds]", zkWorker.getWorker(), remainingMillis / 1000);
  return false;
}
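A worker leaves the blacklist for one of two reasons: the blacklisted fraction of the fleet has grown past the configured cap, or its backoff window has elapsed. A standalone sketch of that decision (parameter names are illustrative, not Druid's config API):

```java
// Standalone sketch of the decision logic in shouldRemoveNodeFromBlackList.
public class BlacklistCheck {
    public static boolean shouldRemove(
        int blacklistedCount,            // blackListedWorkers.size()
        int totalWorkers,                // zkWorkers.size()
        double maxPercentageBlacklisted, // config.getMaxPercentageBlacklistWorkers()
        long blacklistedUntilMillis,
        long nowMillis
    ) {
        // Too many workers blacklisted at once: release this one regardless of backoff.
        if (blacklistedCount > totalWorkers * (maxPercentageBlacklisted / 100.0)) {
            return true;
        }
        // Otherwise release only once the backoff window has elapsed.
        return blacklistedUntilMillis - nowMillis <= 0;
    }
}
```

The percentage cap acts as a safety valve: even with failing tasks, the cluster never idles more than the configured share of its workers.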
Code example source: io.druid/druid-indexing-service

ZkWorker zkWorker = zkWorkers.get(worker);
try {
  if (getAssignedTasks(zkWorker.getWorker()).isEmpty() && isLazyWorker.apply(zkWorker.toImmutable())) {
    log.info("Adding Worker[%s] to lazySet!", zkWorker.getWorker().getHost());
    lazyWorkers.put(worker, zkWorker);
    if (lazyWorkers.size() == maxWorkers) {
      // ... (snippet truncated at the source)
Code example source: io.druid/druid-indexing-service

/**
 * We allow workers to change their own capacities and versions. They cannot change their own hosts or ips without
 * dropping themselves and re-announcing.
 */
private void updateWorker(final Worker worker)
{
  final ZkWorker zkWorker = zkWorkers.get(worker.getHost());
  if (zkWorker != null) {
    log.info("Worker[%s] updated its announcement from[%s] to[%s].", worker.getHost(), zkWorker.getWorker(), worker);
    zkWorker.setWorker(worker);
  } else {
    log.warn(
        "WTF, worker[%s] updated its announcement but we didn't have a ZkWorker for it. Ignoring.",
        worker.getHost()
    );
  }
}
Code example source: com.n3twork.druid/druid-indexing-service

@LifecycleStop
public void stop()
{
  try {
    if (!started) {
      return;
    }
    started = false;
    for (ZkWorker zkWorker : zkWorkers.values()) {
      zkWorker.close();
    }
    workerPathCache.close();
  }
  catch (Exception e) {
    throw Throwables.propagate(e);
  }
}
Code example source: com.n3twork.druid/druid-indexing-service

public boolean isAtCapacity()
{
  return getCurrCapacityUsed() >= worker.getCapacity();
}
Code example source: io.druid/druid-indexing-service

final PathChildrenCache statusCache = workerStatusPathChildrenCacheFactory.make(cf, workerStatusPath);
final SettableFuture<ZkWorker> retVal = SettableFuture.create();
final ZkWorker zkWorker = new ZkWorker(
    worker,
    statusCache,
    // ... (snippet truncated at the source)
);
zkWorker.addListener(
    new PathChildrenCacheListener()
    // ... (listener body truncated at the source)
);
zkWorker.start();
return retVal;
Code example source: io.druid/druid-indexing-service

zkWorker.close();