
How to set parameters for a Jenkins pipeline SCM job

Repost. Author: bug小助手. Updated: 2023-10-24 22:39:10



I have a particular Jenkinsfile that I would like to use as a 'Pipeline script from SCM' job for different environments. So I'll have a job for dev, qa, and prod, all pulling the same Jenkins script. The script has some parameters and defaults:


pipeline {
    agent any
    parameters {
        string(name: 'environment', defaultValue: "dev")
        string(name: 'email', defaultValue: "")
        string(name: 'service_url', defaultValue: "http://dev.xyz_service")
    }
    // ... pipeline script code ...
}

These jobs will be scheduled/timer triggered (no human/manual triggers).
While I want to create two jobs, one for the lower environments and one for production (which can be secured), I don't want to duplicate the script, so I thought 'Pipeline script from SCM' would be a good solution.

But my question is: how can I set these parameters automatically after pulling the Jenkinsfile?


Answers

Key:value pairs must be hardcoded in the parameters section.
If you want dynamic values, there is an option to set dynamic environment variables instead:


pipeline {
    agent {
        label 'some_label'
    }
    environment {
        EC2 = """${sh(
            returnStdout: true,
            script: 'aws ec2 describe-instances \
                --filters "Name=tag:Name,Values=JENKINS_EC2_NAME_FROM_TAG" \
                --query Reservations[*].Instances[*].InstanceId \
                --output text'
        ).trim()}"""
    }
    ...
}

Or set it in a stage:


stage('notify about start of daily scan') {
    steps {
        script {
            env.mess_start = sh(returnStdout: true, script: "date +%Y-%m-%d").trim()
        }
    }
}

As a result, these environment variables will be available globally in the pipeline.
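For example, a minimal sketch of consuming such a globally scoped variable in a later stage (the stage name, echo, and AWS call are illustrative, not from the answer):

```groovy
// Illustrative continuation of the pipeline above: any later stage can read
// the EC2 environment variable set in the environment block.
stage('use instance id') {
    steps {
        // env.EC2 in Groovy and $EC2 in shell both resolve to the trimmed CLI output
        echo "Target instance: ${env.EC2}"
        sh 'aws ec2 describe-instance-status --instance-ids "$EC2"'
    }
}
```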



You are looking for the Parameterized Scheduler plugin.


pipeline {
    agent any
    parameters {
        string(name: 'environment', defaultValue: "dev")
        string(name: 'email', defaultValue: "")
        string(name: 'service_url', defaultValue: "http://dev.xyz_service")
    }
    triggers {
        parameterizedCron('''
            H 1 * * * %environment=dev
            H 2 * * * %environment=qa
        ''')
    }
    stages {
        ...
    }
}

Comments

Interesting - will look at this. So I could create an env file for each environment and read that. This will only affect this particular process and the processes it launches, not other processes on the Jenkins server, right?

Correct, those env vars will be used only on a single job run.

So what I ended up doing is putting an env-specific properties file at the root workspace for each of the /uat, /dev, and /prod folders, and I'm reading that. My script also loads a utils.groovy. What I wasn't able to do is read the variables loaded in the job itself from the utils library, even though I declared them as job-level variables.
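A minimal sketch of that per-environment properties approach, assuming the Pipeline Utility Steps plugin for `readProperties` (the file name and property keys are illustrative, not from the comment):

```groovy
// Sketch: load an env-specific properties file and expose its values
// to later stages. Assumes the Pipeline Utility Steps plugin.
pipeline {
    agent any
    parameters {
        string(name: 'environment', defaultValue: "dev")
    }
    stages {
        stage('load env config') {
            steps {
                script {
                    // e.g. dev.properties contains:
                    //   service_url=http://dev.xyz_service
                    def props = readProperties file: "${params.environment}.properties"
                    env.SERVICE_URL = props['service_url']
                }
            }
        }
        stage('use config') {
            steps {
                // env vars set this way are scoped to this run only
                echo "Calling ${env.SERVICE_URL}"
            }
        }
    }
}
```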

Marking your answer though, as it was close enough. I am mostly concerned about making sure that I don't have these variables scoped globally for all runs, as there will be some parallel jobs with different values for the same variables.

In the lower environment, I may not want it to be scheduled. Is there an option to set those variables using a 'git pipeline scm' job?

I'm not sure I know what you mean by a "git pipeline scm" job. A declarative pipeline job that is stored in some SCM? And what do you want the source of the values to be? Your options so far: 1) defaults hardcoded in the pipeline, 2) values set by the parameterized cron trigger, 3) values entered by a human when triggering manually. What will trigger the job in the lower environment?

What I mean is that instead of the pipeline script being embedded, it's pulled from git each time. There really isn't an option to set parameters or variables before or after the pull. It seems the code has to be written/designed to read from some properties file, and we have to deploy the relevant properties file for each environment.

Parameters are set by whatever triggers the job - think of passing parameters to a function call. What would be your trigger?

These are scheduled jobs.
