This article collects code examples for the Java class org.apache.helix.task.WorkflowConfig, showing how the class is used in practice. The examples are extracted from selected open-source projects hosted on platforms such as GitHub and Maven, and should serve as useful references. Details of the WorkflowConfig class:

Package: org.apache.helix.task
Class name: WorkflowConfig
Description: Provides a typed interface to workflow level configurations. Validates the configurations.
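As the class description says, WorkflowConfig is a typed, validating view over an underlying key/value configuration record. The following self-contained sketch (plain Java, not Helix code; the class name, keys, and defaults here are all hypothetical) illustrates that general pattern:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the "typed interface over a key/value map" pattern
// that WorkflowConfig follows; this is not the actual Helix implementation.
public class TypedConfigDemo {
    static class SimpleWorkflowConfig {
        private final Map<String, String> raw;

        SimpleWorkflowConfig(Map<String, String> raw) {
            this.raw = raw;
        }

        // Typed getter with validation, analogous to WorkflowConfig.getExpiry().
        long getExpiry() {
            long expiry = Long.parseLong(raw.getOrDefault("Expiry", "86400000"));
            if (expiry <= 0) {
                throw new IllegalArgumentException("Expiry must be positive: " + expiry);
            }
            return expiry;
        }

        boolean isTerminable() {
            return Boolean.parseBoolean(raw.getOrDefault("Terminable", "true"));
        }
    }

    public static void main(String[] args) {
        Map<String, String> raw = new HashMap<>();
        raw.put("Expiry", "3600000");
        SimpleWorkflowConfig cfg = new SimpleWorkflowConfig(raw);
        System.out.println(cfg.getExpiry());    // 3600000
        System.out.println(cfg.isTerminable()); // true (default)
    }
}
```

The typed getters centralize parsing and validation, so callers never touch the raw string map directly.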
Code example source: apache/helix

public WorkflowConfig(WorkflowConfig cfg, String workflowId) {
  this(workflowId, cfg.getJobDag(), cfg.getParallelJobs(), cfg.getTargetState(), cfg.getExpiry(),
      cfg.getFailureThreshold(), cfg.isTerminable(), cfg.getScheduleConfig(), cfg.getCapacity(),
      cfg.getWorkflowType(), cfg.isJobQueue(), cfg.getJobTypes(), cfg.getJobPurgeInterval(),
      cfg.isAllowOverlapJobAssignment(), cfg.getTimeout());
}
Code example source: apache/helix

/**
 * @return Resource configuration key/value map.
 * @throws HelixException
 */
public Map<String, String> getResourceConfigMap() throws HelixException {
  return _workflowConfig.getResourceConfigMap();
}
Code example source: org.apache.helix/helix-core

/**
 * Check if a workflow is ready to be scheduled.
 * @param workflowCfg the workflow to check
 * @return true if the workflow is ready to be scheduled, false otherwise
 */
protected boolean isWorkflowReadyForSchedule(WorkflowConfig workflowCfg) {
  Date startTime = workflowCfg.getStartTime();
  // A workflow with no scheduled start time, or whose start time has passed, is ready to schedule.
  return (startTime == null || startTime.getTime() <= System.currentTimeMillis());
}
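The readiness check above is just a null check plus a comparison against the current epoch time. A minimal self-contained sketch of the same logic (the class and method names here are hypothetical):

```java
import java.util.Date;

public class ReadinessCheckDemo {
    // Mirrors isWorkflowReadyForSchedule: a null start time, or a start time
    // in the past, means the workflow can be scheduled now.
    static boolean isReady(Date startTime) {
        return startTime == null || startTime.getTime() <= System.currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(isReady(null));         // true: no schedule config
        System.out.println(isReady(new Date(0L))); // true: the epoch is in the past
        System.out.println(isReady(new Date(System.currentTimeMillis() + 60_000L))); // false: starts in one minute
    }
}
```

The same comparison works for any scheduled start time stored as a Date, since Date.getTime() and System.currentTimeMillis() share the same epoch-millisecond scale.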
Code example source: org.apache.helix/helix-core

// Note: this excerpt was truncated in the original; missing pieces are
// reconstructed or elided with "...".
int notStartedCount = 0, inCompleteCount = 0;
for (String ancestor : workflowCfg.getJobDag().getAncestors(resourceName)) {
  TaskState jobState = workflowCtx.getJobState(ancestor);
  if (jobState == null || jobState == TaskState.NOT_STARTED) {
    ++notStartedCount;
  }
  // ...
}
if (notStartedCount > 0 || (workflowCfg.isJobQueue() && inCompleteCount >= workflowCfg
    .getParallelJobs())) {
  LOG.debug("Job is not ready to be scheduled due to pending dependent jobs " + resourceName);
  return emptyAssignment(resourceName, currStateOutput);
}

// Elsewhere in the rebalancer: clean up workflows and jobs that are deleted or expired.
TargetState targetState = workflowCfg.getTargetState();
if (targetState == TargetState.DELETE) {
  LOG.info( /* ... message truncated in the original excerpt ... */ );
}
if (workflowCtx.getFinishTime() != WorkflowContext.UNFINISHED // condition reconstructed
    && workflowCtx.getFinishTime() + workflowCfg.getExpiry() <= System.currentTimeMillis()) {
  LOG.info("Workflow " + workflowResource
      + " is completed and passed expiry time, cleaning up the workflow context.");
}
if (!workflowCfg.isTerminable() && jobFinishTime != WorkflowContext.UNFINISHED
    && jobFinishTime + workflowCfg.getExpiry() <= System.currentTimeMillis()) {
  LOG.info("Job " + resourceName
      + " is completed and passed expiry time, cleaning up the job context.");
}
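The expiry checks in the excerpt above reduce to arithmetic on epoch milliseconds: a workflow or job becomes eligible for cleanup once it has finished and its retention window (`finishTime + expiry`) has elapsed. A minimal self-contained sketch of that check (class, constant, and method names here are hypothetical stand-ins):

```java
public class ExpiryCheckDemo {
    static final long UNFINISHED = -1L; // stand-in for WorkflowContext.UNFINISHED

    // A context is expired once it has finished and its retention window has elapsed.
    static boolean isExpired(long finishTimeMs, long expiryMs, long nowMs) {
        return finishTimeMs != UNFINISHED && finishTimeMs + expiryMs <= nowMs;
    }

    public static void main(String[] args) {
        long now = 10_000_000L;
        System.out.println(isExpired(UNFINISHED, 1000L, now)); // false: still running
        System.out.println(isExpired(now - 2000L, 1000L, now)); // true: finished 2s ago, 1s expiry
        System.out.println(isExpired(now - 500L, 1000L, now));  // false: still inside retention window
    }
}
```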
Code example source: org.apache.helix/helix-core

/**
 * Clean up a workflow. This removes the workflow config, IdealState, ExternalView and workflow
 * contexts associated with this workflow, and all job information, including their configs,
 * contexts, IdealStates and ExternalViews.
 */
private void cleanupWorkflow(String workflow, WorkflowConfig workflowcfg) {
  LOG.info("Cleaning up workflow: " + workflow);
  if (workflowcfg.isTerminable() || workflowcfg.getTargetState() == TargetState.DELETE) {
    Set<String> jobs = workflowcfg.getJobDag().getAllNodes();
    // Remove all pending timer tasks for this workflow, if any exist
    _rebalanceScheduler.removeScheduledRebalance(workflow);
    for (String job : jobs) {
      _rebalanceScheduler.removeScheduledRebalance(job);
    }
    if (!TaskUtil.removeWorkflow(_manager.getHelixDataAccessor(),
        _manager.getHelixPropertyStore(), workflow, jobs)) {
      LOG.warn("Failed to clean up workflow " + workflow);
    }
  } else {
    LOG.info("Did not clean up workflow " + workflow
        + " because the workflow is neither terminable nor marked for DELETE.");
  }
}
Code example source: org.apache.helix/helix-core

private void removeWorkflowFromZK(String workflow) {
  Set<String> jobSet = new HashSet<>();
  // Even if WorkflowConfig is null, the workflow still needs to be removed
  // as long as its WorkflowContext exists.
  WorkflowConfig wCfg = TaskUtil.getWorkflowConfig(_accessor, workflow);
  if (wCfg != null) {
    jobSet.addAll(wCfg.getJobDag().getAllNodes());
  }
  boolean success = TaskUtil.removeWorkflow(_accessor, _propertyStore, workflow, jobSet);
  if (!success) {
    LOG.info("Failed to delete the workflow " + workflow);
    throw new HelixException("Failed to delete the workflow " + workflow);
  }
}
Code example source: apache/helix

// Note: this excerpt was truncated in the original; missing pieces are
// reconstructed or elided with "...".
TargetState targetState = workflowCfg.getTargetState();
if (targetState == TargetState.DELETE) {
  LOG.info("Workflow is marked as deleted " + workflow + " cleaning up the workflow context.");
  // ...
}

// Schedule a timeout check for workflows that are still running, and time them out if needed.
if (!workflowCfg.isJobQueue() && !finalStates.contains(workflowCtx.getWorkflowState())) {
  scheduleRebalanceForTimeout(workflow, workflowCtx.getStartTime(), workflowCfg.getTimeout());
  if (!TaskState.TIMED_OUT.equals(workflowCtx.getWorkflowState())
      && isTimeout(workflowCtx.getStartTime(), workflowCfg.getTimeout())) {
    workflowCtx.setWorkflowState(TaskState.TIMED_OUT);
    _taskDataCache.updateWorkflowContext(workflow, workflowCtx);
  }
}

long expiryTime = workflowCfg.getExpiry();

// For job queues (non-terminable workflows), purge jobs that have reached a final state.
if (!workflowCfg.isTerminable() || workflowCfg.isJobQueue()) {
  Set<String> jobWithFinalStates = new HashSet<>(workflowCtx.getJobStates().keySet());
  jobWithFinalStates.removeAll(workflowCfg.getJobDag().getAllNodes());
  if (jobWithFinalStates.size() > 0) {
    workflowCtx.setLastJobPurgeTime(System.currentTimeMillis());
    // ...
  }
}
Code example source: org.apache.helix/helix-core

// Note: this excerpt was truncated in the original; missing pieces are
// reconstructed or elided with "...".
TargetState targetState = workflowCfg.getTargetState();
if (targetState == TargetState.DELETE) {
  LOG.info("Workflow is marked as deleted " + workflow + " cleaning up the workflow context.");
  // ...
}

if (!workflowCfg.isJobQueue() && !finalStates.contains(workflowCtx.getWorkflowState())) {
  scheduleRebalanceForTimeout(workflow, workflowCtx.getStartTime(), workflowCfg.getTimeout());
  if (!TaskState.TIMED_OUT.equals(workflowCtx.getWorkflowState()) // condition reconstructed
      && isTimeout(workflowCtx.getStartTime(), workflowCfg.getTimeout())) {
    workflowCtx.setWorkflowState(TaskState.TIMED_OUT);
    clusterData.updateWorkflowContext(workflow, workflowCtx, _manager.getHelixDataAccessor());
  }
}

long expiryTime = workflowCfg.getExpiry();

// If the workflow is scheduled to start in the future, defer the rebalance until
// workflowCfg.getStartTime().getTime() and return an empty assignment for now.
// ...
return buildEmptyAssignment(workflow, currStateOutput);

if (!workflowCfg.isTerminable() || workflowCfg.isJobQueue()) {
  purgeExpiredJobs(workflow, workflowCfg, workflowCtx);
}
Code example source: apache/helix

// Note: this excerpt was truncated in the original; closing braces are reconstructed.
// Fill in the workflow ID if the new config does not carry one, then validate it.
if (newWorkflowConfig.getWorkflowId() == null || newWorkflowConfig.getWorkflowId().isEmpty()) {
  newWorkflowConfig.getRecord()
      .setSimpleField(WorkflowConfig.WorkflowConfigProperty.WorkflowID.name(), workflow);
}
if (workflow == null || !workflow.equals(newWorkflowConfig.getWorkflowId())) {
  throw new HelixException(String
      .format("Workflow name {%s} does not match the workflow Id from WorkflowConfig {%s}",
          workflow, newWorkflowConfig.getWorkflowId()));
}
// A terminable workflow's configuration must not be modified.
if (currentConfig.isTerminable()) {
  throw new HelixException(
      "Workflow " + workflow + " is terminable, not allow to change its configuration!");
}
// The job DAG cannot be replaced through this API; carry over the existing one.
newWorkflowConfig.setJobDag(currentConfig.getJobDag());
if (!TaskUtil.setWorkflowConfig(_accessor, workflow, newWorkflowConfig)) {
  LOG.error("Failed to update workflow configuration for workflow " + workflow);
}
Code example source: org.apache.helix/helix-core

// Note: the method signature and parts of this excerpt were truncated in the
// original; missing pieces are reconstructed or elided with "...".
    WorkflowContext workflowCtx, Map<String, JobConfig> jobConfigMap,
    ClusterDataCache clusterDataCache) {
  ScheduleConfig scheduleConfig = workflowCfg.getScheduleConfig();
  if (scheduleConfig != null && scheduleConfig.isRecurring()) {
    LOG.debug("Jobs from recurring workflow are not schedule-able");
    // ...
  }
  int scheduledJobs = 0;
  long timeToSchedule = Long.MAX_VALUE;
  for (String job : workflowCfg.getJobDag().getAllNodes()) {
    TaskState jobState = workflowCtx.getJobState(job);
    if (jobState != null && !jobState.equals(TaskState.NOT_STARTED)) {
      // Job has already been scheduled; skip it.
      // ...
    }
    if (workflowCfg.isJobQueue() && scheduledJobs >= workflowCfg.getParallelJobs()) {
      if (LOG.isDebugEnabled()) {
        LOG.debug(String.format("Workflow %s already have enough job in progress, "
            + "...", workflowCfg.getWorkflowId())); // message truncated in the original excerpt
      }
      // ...
    }
  }
Code example source: org.apache.helix/helix-core

// Note: this excerpt was truncated in the original; the counter declaration and
// closing braces are reconstructed, and "..." marks elided code.
int failedJobs = 0;
for (String job : cfg.getJobDag().getAllNodes()) {
  TaskState jobState = ctx.getJobState(job);
  if (jobState == TaskState.FAILED || jobState == TaskState.TIMED_OUT) {
    failedJobs++;
    if (!cfg.isJobQueue() && failedJobs > cfg.getFailureThreshold()) {
      ctx.setWorkflowState(TaskState.FAILED);
      LOG.info("Workflow {} reached the failure threshold, so setting its state to FAILED.",
          cfg.getWorkflowId());
      // Abort any jobs that are still running.
      for (String jobToFail : cfg.getJobDag().getAllNodes()) {
        if (ctx.getJobState(jobToFail) == TaskState.IN_PROGRESS) {
          ctx.setJobState(jobToFail, TaskState.ABORTED);
          // ...
        }
      }
    }
  }
}
if (!incomplete && cfg.isTerminable()) {
  ctx.setWorkflowState(TaskState.COMPLETED);
  return true;
}
Code example source: apache/incubator-gobblin

private void cleanUpJobs(HelixManager helixManager) {
  // Clean up existing jobs
  TaskDriver taskDriver = new TaskDriver(helixManager);
  Map<String, WorkflowConfig> workflows = taskDriver.getWorkflows();
  boolean cleanupDistJobs = ConfigUtils.getBoolean(this.config,
      GobblinClusterConfigurationKeys.CLEAN_ALL_DIST_JOBS,
      GobblinClusterConfigurationKeys.DEFAULT_CLEAN_ALL_DIST_JOBS);
  for (Map.Entry<String, WorkflowConfig> entry : workflows.entrySet()) {
    String workflowName = entry.getKey();
    if (workflowName.contains(GobblinClusterConfigurationKeys.PLANNING_CONF_PREFIX)
        || workflowName.contains(GobblinClusterConfigurationKeys.ACTUAL_JOB_NAME_PREFIX)) {
      if (!cleanupDistJobs) {
        log.info("Distributed job {} won't be deleted.", workflowName);
        continue;
      }
    }
    WorkflowConfig workflowConfig = entry.getValue();
    // Request deletion if it has not already been requested.
    if (workflowConfig.getTargetState() != TargetState.DELETE) {
      taskDriver.delete(workflowName);
      log.info("Requested delete of workflowName {}", workflowName);
    }
  }
}
Code example source: apache/helix

// Note: the method signature was truncated in the original; closing braces are reconstructed.
    String workflowResource, String jobResource, ClusterDataCache cache) {
  if (workflowCfg == null || workflowCfg.getScheduleConfig() == null) {
    return true;
  }
  ScheduleConfig scheduleConfig = workflowCfg.getScheduleConfig();
  Date startTime = scheduleConfig.getStartTime();
  long currentTime = new Date().getTime();
  if (!workflowCfg.getTargetState().equals(TargetState.START)) {
    LOG.debug("Skip scheduling since the workflow has not been started " + workflowResource);
    // ...
  }
Code example source: apache/helix

// Note: this excerpt was truncated in the original; missing pieces are
// reconstructed or elided with "...".
// Find jobs that are currently in progress.
for (String jobName : workflowConfig.getJobDag().getAllNodes()) {
  TaskState jobState = workflowContext.getJobState(jobName);
  if (jobState == TaskState.IN_PROGRESS) {
    // ...
  }
}
for (String jobName : workflowConfig.getJobDag().getAllNodes()) {
  JobContext jobContext = driver.getJobContext(jobName);
  if (jobContext != null) {
    // ...
  }
}
// When overlapping job assignment is disabled, collect the instances already in use.
if (!workflowConfig.isAllowOverlapJobAssignment()) {
  Set<String> instances = new HashSet<String>();
  for (JobContext jobContext : jobContextList) {
    // ...
  }
}
return maxRunningCount > 1 && (workflowConfig.isJobQueue() ? maxRunningCount <= workflowConfig
    .getParallelJobs() : true);
Code example source: org.apache.helix/helix-core

// Note: this excerpt was truncated in the original; the increment, the final format
// argument, and closing braces are reconstructed.
int incompleteParentCount = 0;
for (String parent : workflowCfg.getJobDag().getDirectParents(job)) {
  TaskState jobState = workflowCtx.getJobState(parent);
  if (jobState == null || jobState == TaskState.NOT_STARTED) {
    incompleteParentCount++;
  }
}
if (workflowCfg.isJobQueue()) {
  if (incompleteAllCount >= workflowCfg.getParallelJobs()) {
    if (LOG.isDebugEnabled()) {
      LOG.debug(String.format("Job %s is not ready to schedule, inCompleteJobs(s)=%d.", job,
          incompleteAllCount)); // argument truncated in the original excerpt
    }
  }
}
Code example source: org.apache.helix/helix-core

/**
 * Checks if the workflow has completed.
 * @param ctx Workflow context containing job states
 * @param cfg Workflow config containing the set of jobs
 * @return true if all jobs are {@link TaskState#COMPLETED}, false otherwise
 */
private static boolean isWorkflowComplete(WorkflowContext ctx, WorkflowConfig cfg) {
  if (!cfg.isTerminable()) {
    return false;
  }
  for (String job : cfg.getJobDag().getAllNodes()) {
    if (ctx.getJobState(job) != TaskState.COMPLETED) {
      return false;
    }
  }
  return true;
}
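The completion check above combines two conditions: only terminable workflows can complete (job queues run forever), and every job in the DAG must have reached COMPLETED. A self-contained sketch of the same logic over a plain map of job states (all names here are hypothetical, not Helix types):

```java
import java.util.HashMap;
import java.util.Map;

public class CompletionCheckDemo {
    enum TaskState { NOT_STARTED, IN_PROGRESS, COMPLETED, FAILED }

    // Mirrors isWorkflowComplete: only a terminable workflow can complete,
    // and every job must have reached COMPLETED.
    static boolean isComplete(boolean terminable, Map<String, TaskState> jobStates) {
        if (!terminable) {
            return false; // job queues never "complete"
        }
        for (TaskState state : jobStates.values()) {
            if (state != TaskState.COMPLETED) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, TaskState> states = new HashMap<>();
        states.put("job1", TaskState.COMPLETED);
        states.put("job2", TaskState.IN_PROGRESS);
        System.out.println(isComplete(true, states));  // false: job2 still running
        states.put("job2", TaskState.COMPLETED);
        System.out.println(isComplete(true, states));  // true: all jobs done
        System.out.println(isComplete(false, states)); // false: non-terminable (job queue)
    }
}
```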
Code example source: org.apache.helix/helix-core

// Note: the method signature and parts of this excerpt were truncated in the
// original; missing pieces are elided with "...".
    WorkflowContext workflowCtx, JobContext jobCtx, Set<Integer> partitionsToDropFromIs,
    ClusterDataCache cache) {
  TargetState jobTgtState = workflowConfig.getTargetState();
  // ...
  long finishTime = currentTime;
  workflowCtx.setJobState(jobResource, TaskState.FAILED);
  // A failed job fails the whole workflow when the workflow is terminable.
  if (workflowConfig.isTerminable()) {
    workflowCtx.setWorkflowState(TaskState.FAILED);
    workflowCtx.setFinishTime(finishTime);
  }
Code example source: apache/helix

// Note: this excerpt was truncated in the original; closing braces are reconstructed.
Set<String> ret = new HashSet<>();
if (!workflowCfg.isAllowOverlapJobAssignment()) {
  for (String jobName : workflowCfg.getJobDag().getAllNodes()) {
    if (jobName.equals(currentJobName)) {
      continue;
    }
    // ...
  }
}
Code example source: apache/helix

@Override
public void execute(ClusterEvent event) {
  ClusterDataCache clusterDataCache = event.getAttribute(AttributeName.ClusterDataCache.name());
  HelixManager manager = event.getAttribute(AttributeName.helixmanager.name());
  if (clusterDataCache == null || manager == null) {
    LOG.warn(
        "ClusterDataCache or HelixManager is null for event {}({}) in cluster {}. Skip TaskGarbageCollectionStage.",
        event.getEventId(), event.getEventType(), event.getClusterName());
    return;
  }
  Set<WorkflowConfig> existingWorkflows =
      new HashSet<>(clusterDataCache.getWorkflowConfigMap().values());
  for (WorkflowConfig workflowConfig : existingWorkflows) {
    // Clean up the expired jobs if it is a queue.
    if (workflowConfig != null && (!workflowConfig.isTerminable() || workflowConfig
        .isJobQueue())) {
      try {
        TaskUtil.purgeExpiredJobs(workflowConfig.getWorkflowId(), workflowConfig,
            clusterDataCache.getWorkflowContext(workflowConfig.getWorkflowId()), manager,
            _rebalanceScheduler);
      } catch (Exception e) {
        LOG.warn(String.format("Failed to purge job for workflow %s with reason %s",
            workflowConfig.getWorkflowId(), e.toString()));
      }
    }
  }
}
Code example source: apache/helix

// Note: this excerpt was truncated in the original; the null check and closing
// braces are reconstructed, and "..." marks elided code.
for (String job : workflowConfig.getJobDag().getAllNodes()) {
  JobConfig jobConfig = TaskUtil.getJobConfig(dataAccessor, job);
  JobContext jobContext = TaskUtil.getJobContext(propertyStore, job);
  if (jobConfig == null) {
    LOG.error(String.format(
        "Job %s exists in JobDAG but JobConfig is missing! Job might have been deleted manually from the JobQueue: %s, or left in the DAG due to a failed clean-up attempt from last purge.",
        job, workflowConfig.getWorkflowId()));
  }
  // ...
}
expiry = workflowConfig.getExpiry();