I have a particular Jenkinsfile that I would like to use as a 'git pipeline scm' job for different environments. So I'll have a job each for dev, qa, and prod, all pulling the same Jenkins script. The script has some parameters and defaults:
pipeline {
    agent any
    parameters {
        string(name: 'environment', defaultValue: "dev")
        string(name: 'email', defaultValue: "")
        string(name: 'service_url', defaultValue: "http://dev.xyz_service")
    }
    ....
    ....pipeline script code....
    ....
}
These jobs will be scheduled/time-triggered (no human/manual triggers).
While I want to create two jobs, one for the lower environments and one for production, which can be secured, I don't want to duplicate the script. So I thought 'git pipeline scm' would be a good solution.
But my question is: how can I set these parameters automatically after pulling the Jenkinsfile?
key:value pairs should be hardcoded in the parameters section.
If you want dynamic ones, there is an option to set dynamic env vars:
pipeline {
    agent {
        label 'some_label'
    }
    environment {
        EC2 = """${sh(
            returnStdout: true,
            script: 'aws ec2 describe-instances \
                --filters "Name=tag:Name,Values=JENKINS_EC2_NAME_FROM_TAG" \
                --query Reservations[*].Instances[*].InstanceId \
                --output text'
        ).trim()}"""
    }
    ...
}
Or set it in a stage:
stage('notify about start of daily scan') {
    steps {
        script {
            env.mess_start = sh(returnStdout: true, script: "date +%Y-%m-%d").trim()
        }
    }
}
As a result, these envs will be available globally in the pipeline.
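For example, a later stage of the same run can read the value directly (the stage name and echo text here are illustrative):

```groovy
stage('use the discovered instance') {
    steps {
        // EC2 was populated in the environment block above and is
        // visible in every stage of this build
        echo "Operating on instance: ${env.EC2}"
        // it is also exported to shell steps of this build
        sh 'echo "instance id is $EC2"'
    }
}
```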
You are looking for the Parameterized Scheduler plugin.
pipeline {
    agent any
    parameters {
        string(name: 'environment', defaultValue: "dev")
        string(name: 'email', defaultValue: "")
        string(name: 'service_url', defaultValue: "http://dev.xyz_service")
    }
    triggers {
        parameterizedCron('''
            H 1 * * * %environment=dev
            H 2 * * * %environment=qa
        ''')
    }
    stages {
        ...
    }
}
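Each cron line can also set several parameters at once, separated by `;`, so the per-environment URL could be scheduled too (the qa URL below is an assumption):

```groovy
triggers {
    // one line per schedule; multiple parameters are ';'-separated
    parameterizedCron('''
        H 1 * * * %environment=dev;service_url=http://dev.xyz_service
        H 2 * * * %environment=qa;service_url=http://qa.xyz_service
    ''')
}
```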
Interesting - will look at this. So I could create an env file for each environment and read that. This will only affect this particular process and the processes it launches, not other processes on the Jenkins server, right?
Correct, those envs will be used only on a single job run.
So what I ended up doing is putting an env-specific properties file at the root of the workspaces for the /uat, /dev, and /prod folders, and I'm reading that. My script also loads a utils.groovy. What I wasn't able to do is read the variables loaded in the job itself from the utils library, even though I declared them as job-level variables.
Marking your answer though, as it was close enough. I am mostly concerned about making sure I don't have these variables scoped globally for all runs, as there will be some parallel jobs with different values for the same variables.
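A sketch of that properties-file approach, using readProperties from the Pipeline Utility Steps plugin; the file name and the service_url key are assumptions:

```groovy
stage('load environment config') {
    steps {
        script {
            // e.g. dev.properties containing: service_url=http://dev.xyz_service
            def props = readProperties file: "${params.environment}.properties"
            // copying into env makes the value visible to later stages and
            // sh steps of this build only - not to other jobs on the server
            env.SERVICE_URL = props['service_url']
        }
    }
}
```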
In the lower environment, I may not want it to be scheduled. Is there an option to set those variables using a 'git pipeline scm' job?
I'm not sure I know what you mean by a "git pipeline scm" job. A declarative pipeline job that is stored in some SCM? And what do you want the source of the values to be? Your options so far: 1) defaults hardcoded in the pipeline, 2) values set by the parameterized cron trigger, 3) values entered by a human when triggering manually. What will trigger the job in the lower environment?
What I mean is that instead of the pipeline script being embedded, it's pulled from git each time. There really isn't an option to set parameters or variables before or after the pull. It seems the code has to be written/designed to read from some properties file, and we have to deploy the relevant properties file for each environment.
Parameters are set by whatever triggers the job - think of passing parameters to a function call. What would be your trigger?
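If one pipeline is the trigger for another, the parameters really are passed like function arguments, via the built-in build step (the downstream job name here is hypothetical):

```groovy
// trigger the shared pipeline job for qa, overriding its defaults
build job: 'xyz-pipeline-qa',
      parameters: [
          string(name: 'environment', value: 'qa'),
          string(name: 'service_url', value: 'http://qa.xyz_service')
      ]
```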
These are scheduled jobs.