This article collects Java code examples of the org.apache.helix.task.WorkflowContext.getJobStates() method, illustrating how WorkflowContext.getJobStates() is used in practice. The examples are extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the method:

Package path: org.apache.helix.task.WorkflowContext
Class name: WorkflowContext
Method name: getJobStates
Method description: none provided
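Before the real-world examples, here is a minimal sketch of the call pattern they all share: getJobStates() returns a Map from job name to TaskState, which callers then iterate or filter. In actual Helix code the map comes from TaskDriver.getWorkflowContext(workflowName).getJobStates(); in this standalone sketch the Helix classes are replaced by a plain HashMap and a stub TaskState enum (the stub lists only a few of the real enum's values), so the iteration pattern can run on its own:

```java
import java.util.HashMap;
import java.util.Map;

public class JobStatesSketch {
  // Stub standing in for org.apache.helix.task.TaskState; only a
  // handful of the real enum's values are reproduced here.
  enum TaskState { NOT_STARTED, IN_PROGRESS, COMPLETED, FAILED, ABORTED }

  // Count the jobs in a given state, mirroring how the examples below
  // walk over the map returned by workflowContext.getJobStates().
  static int countJobsInState(Map<String, TaskState> jobStates, TaskState target) {
    int count = 0;
    for (Map.Entry<String, TaskState> entry : jobStates.entrySet()) {
      if (entry.getValue() == target) {
        count++;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    // In real code: _taskDriver.getWorkflowContext(workflowName).getJobStates()
    Map<String, TaskState> jobStates = new HashMap<>();
    jobStates.put("JOB0", TaskState.COMPLETED);
    jobStates.put("JOB1", TaskState.FAILED);
    jobStates.put("JOB2", TaskState.COMPLETED);
    System.out.println(countJobsInState(jobStates, TaskState.COMPLETED)); // prints 2
  }
}
```

The examples that follow apply exactly this pattern: collect the key set, filter by state, or cross-check the map against the workflow's job DAG.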
Code example source: apache/incubator-pinot

/**
 * Get all tasks for the given task type.
 *
 * @param taskType Task type
 * @return Set of task names
 */
@Nonnull
public synchronized Set<String> getTasks(@Nonnull String taskType) {
  Set<String> helixJobs = _taskDriver.getWorkflowContext(getHelixJobQueueName(taskType)).getJobStates().keySet();
  Set<String> tasks = new HashSet<>(helixJobs.size());
  for (String helixJobName : helixJobs) {
    tasks.add(getPinotTaskName(helixJobName));
  }
  return tasks;
}
Code example source: apache/incubator-pinot

/**
 * Get all task states for the given task type.
 *
 * @param taskType Task type
 * @return Map from task name to task state
 */
@Nonnull
public synchronized Map<String, TaskState> getTaskStates(@Nonnull String taskType) {
  Map<String, TaskState> helixJobStates =
      _taskDriver.getWorkflowContext(getHelixJobQueueName(taskType)).getJobStates();
  Map<String, TaskState> taskStates = new HashMap<>(helixJobStates.size());
  for (Map.Entry<String, TaskState> entry : helixJobStates.entrySet()) {
    taskStates.put(getPinotTaskName(entry.getKey()), entry.getValue());
  }
  return taskStates;
}
Code example source: apache/helix

Map<String, TaskState> allJobStates = workflowContext.getJobStates();
for (Map.Entry<String, TaskState> jobState : allJobStates.entrySet()) {
  if (!jobState.getValue().equals(TaskState.NOT_STARTED)) {
Code example source: org.apache.helix/helix-core

Map<String, TaskState> allJobStates = workflowContext.getJobStates();
for (String job : allJobStates.keySet()) {
  if (!allJobStates.get(job).equals(TaskState.NOT_STARTED)) {
Code example source: apache/helix

Map<String, TaskState> jobStates = workflowContext.getJobStates();
for (String job : workflowConfig.getJobDag().getAllNodes()) {
  JobConfig jobConfig = TaskUtil.getJobConfig(dataAccessor, job);
Code example source: org.apache.helix/helix-core

/**
 * Return all jobs that are COMPLETED and have passed their expiry time.
 * @param dataAccessor
 * @param propertyStore
 * @param workflowConfig
 * @param workflowContext
 * @return set of expired job names
 */
protected static Set<String> getExpiredJobs(HelixDataAccessor dataAccessor,
    HelixPropertyStore propertyStore, WorkflowConfig workflowConfig,
    WorkflowContext workflowContext) {
  Set<String> expiredJobs = new HashSet<String>();
  if (workflowContext != null) {
    Map<String, TaskState> jobStates = workflowContext.getJobStates();
    for (String job : workflowConfig.getJobDag().getAllNodes()) {
      JobConfig jobConfig = TaskUtil.getJobConfig(dataAccessor, job);
      JobContext jobContext = TaskUtil.getJobContext(propertyStore, job);
      long expiry = jobConfig.getExpiry();
      // Fall back to the workflow-level expiry when the job-level value is unset.
      if (expiry == WorkflowConfig.DEFAULT_EXPIRY || expiry < 0) {
        expiry = workflowConfig.getExpiry();
      }
      if (jobContext != null && jobStates.get(job) == TaskState.COMPLETED) {
        if (System.currentTimeMillis() >= jobContext.getFinishTime() + expiry) {
          expiredJobs.add(job);
        }
      }
    }
  }
  return expiredJobs;
}
Code example source: apache/helix

@Test
public void testAbortTaskForWorkflowFail() throws InterruptedException {
  failTask = true;
  String workflowName = TestHelper.getTestMethodName();
  Workflow.Builder workflowBuilder = new Workflow.Builder(workflowName);
  for (int i = 0; i < 5; i++) {
    List<TaskConfig> taskConfigs = Lists.newArrayListWithCapacity(1);
    Map<String, String> taskConfigMap = Maps.newHashMap();
    taskConfigs.add(new TaskConfig("TaskOne", taskConfigMap));
    JobConfig.Builder jobBuilder = new JobConfig.Builder().setCommand("DummyCommand")
        .addTaskConfigs(taskConfigs).setJobCommandConfigMap(_jobCommandMap);
    workflowBuilder.addJob("JOB" + i, jobBuilder);
  }
  _driver.start(workflowBuilder.build());
  _driver.pollForWorkflowState(workflowName, TaskState.FAILED);
  int abortedTask = 0;
  for (TaskState jobState : _driver.getWorkflowContext(workflowName).getJobStates().values()) {
    if (jobState == TaskState.ABORTED) {
      abortedTask++;
    }
  }
  Assert.assertEquals(abortedTask, 4);
}
Code example source: apache/helix

Set<String> jobWithFinalStates = new HashSet<>(workflowCtx.getJobStates().keySet());
jobWithFinalStates.removeAll(workflowCfg.getJobDag().getAllNodes());
if (jobWithFinalStates.size() > 0) {
Code example source: apache/helix

Assert.assertEquals(context.getJobStates().keySet(), remainJobs);
Assert.assertTrue(remainJobs.containsAll(context.getJobStartTimes().keySet()));
Code example source: apache/helix

@Test
public void testJobStateOnCreation() {
  Workflow.Builder builder = new Workflow.Builder(WORKFLOW_NAME);
  JobConfig.Builder jobConfigBuilder = new JobConfig.Builder().setCommand(MockTask.TASK_COMMAND)
      .setTargetResource(WORKFLOW_NAME).setTargetPartitionStates(Sets.newHashSet("SLAVE", "MASTER"))
      .setJobCommandConfigMap(WorkflowGenerator.DEFAULT_COMMAND_CONFIG);
  String jobName = "job";
  builder = builder.addJob(jobName, jobConfigBuilder);
  Workflow workflow = builder.build();
  WorkflowConfig workflowConfig = workflow.getWorkflowConfig();
  JobConfig jobConfig = jobConfigBuilder.build();
  workflowConfig.getRecord().merge(jobConfig.getRecord());
  _cache.getJobConfigMap().put(WORKFLOW_NAME + "_" + jobName, jobConfig);
  _cache.getWorkflowConfigMap().put(WORKFLOW_NAME, workflowConfig);
  WorkflowRebalancer workflowRebalancer = new WorkflowRebalancer();
  workflowRebalancer.init(_manager);
  ResourceAssignment resourceAssignment = workflowRebalancer
      .computeBestPossiblePartitionState(_cache, _idealState, _resource, _currStateOutput);
  WorkflowContext workflowContext = _cache.getWorkflowContext(WORKFLOW_NAME);
  Map<String, TaskState> jobStates = workflowContext.getJobStates();
  for (String job : jobStates.keySet()) {
    Assert.assertEquals(jobStates.get(job), TaskState.NOT_STARTED);
  }
}
}
Code example source: apache/helix

taskDataCache.updateJobContext(_testJobPrefix + "0", jbCtx0);
wfCtx.getJobStates().remove(_testJobPrefix + "1");
taskDataCache.removeContext(_testJobPrefix + "1");