This is file-old.txt:
nextduedate: '2023-09-06'
dedicatedip: 1.1.1.1
nextduedate: '2023-09-01'
dedicatedip: 2.2.2.2
nextduedate: '2023-09-08'
dedicatedip: 3.3.3.3
This is file-new.txt:
nextduedate: '2023-09-06'
dedicatedip: 1.1.1.1
nextduedate: '2023-10-01'
dedicatedip: 2.2.2.2
nextduedate: '2023-10-08'
dedicatedip: 3.3.3.3
This is servers.txt (it may be needed):
[
{
"server": {
"server_ip": "3.3.3.3",
"server_ipv6_net": "::1", # fake
"server_number": 12345678974,
"server_name": "some.host.name.com",
"product": "Server Auction",
"dc": "FSN1-DC11",
"traffic": "unlimited",
"status": "ready",
"cancelled": false,
"paid_until": "2023-10-08",
"ip": [
"3.3.3.3"
],
"subnet": [
{
"ip": "::1", # fake
"mask": "64" # fake
}
],
"linked_storagebox": null
}
},
# other server's details
]
I'm trying to do this:
Iterate over both files and check whether each dedicatedip in the old file exists in the new file. If it does, compare the two nextduedate values: when the date in the new file is newer than the one in the old file, run an external command; otherwise (the dates should normally be identical in both files) do nothing.
Example and explanation:
Take IP 1.1.1.1 from the old file, find the matching line in the new file, and compare the two nextduedate values. Since both are the same (the new one is not greater than the old), nothing should happen.
Take IP 2.2.2.2 from the old file, find the matching line in the new file, and compare the two nextduedate values. Since the new nextduedate is greater than the old one, it should be stored in a variable.
Take IP 3.3.3.3 from the old file, find the matching line in the new file, and compare the two nextduedate values. Since the new nextduedate is greater than the old one, it should be stored in a variable.
Then, whenever the new nextduedate is greater than the old one, it should look up the corresponding "server_number" in servers.txt and call an API.
The final result should be something like this:
# the awk code
# the api call, in the following line for example, server_number=12345678974
curl -u user:password https://url.com/$server_number
I know about the for loop in awk, but my issue is that 1.1.1.1 gets compared once to 1.1.1.1, then to 2.2.2.2, then to 3.3.3.3, and then 2.2.2.2 iterates over everything again.
Please update the question with the code you've attempted and the (wrong) results generated by your code.
What's the expected max number of entries you need to test/compare? What are you doing with the curl output? While it's possible to call curl from within awk, this may be overkill if you need to perform other processing; it might be of benefit to have awk do the compares and spit out a list of IPs (and dates?), and then have a bash script process that list (e.g., make the curl call, process the curl result) ...
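The split suggested in that comment could be sketched as follows. This is a minimal, self-contained illustration: the two files are inlined via mktemp (with only the 1.1.1.1 and 2.2.2.2 entries), and the real curl call is stubbed with echo.

```shell
# Sketch of the suggested split: awk emits only the IPs whose date advanced,
# and a plain shell loop consumes them. Swap echo for the real curl pipeline.
old=$(mktemp); new=$(mktemp)
printf "nextduedate: '2023-09-06'\ndedicatedip: 1.1.1.1\nnextduedate: '2023-09-01'\ndedicatedip: 2.2.2.2\n" > "$old"
printf "nextduedate: '2023-09-06'\ndedicatedip: 1.1.1.1\nnextduedate: '2023-10-01'\ndedicatedip: 2.2.2.2\n" > "$new"

# First pass (NR==FNR) remembers each IP's old date; on the second pass an IP
# is printed only when its new date sorts after the old one (ISO dates
# compare correctly as plain strings).
ips=$(awk '/nextduedate/ { d = $2 }
           /dedicatedip/ { if (NR == FNR) old[$2] = d
                           else if (($2 in old) && d > old[$2]) print $2 }' "$old" "$new")

for ip in $ips; do
    echo "would call the API for $ip"   # stand-in for the curl call
done
rm -f "$old" "$new"
```

With the sample data, only 2.2.2.2 survives the comparison, so the loop body runs once.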
The first part is simple if that is all the files contain. For example, you can process them as records if you have an awk that supports a multi-character RS (such as mawk or gawk):
gawk -v RS=nextduedate: '
FNR==1 { next }
NR==FNR {
date = $1
gsub(/[^0-9]+/,"",date)
if (length(date)!=8)
printf "bogus date: %s (%s)\n", $3,$1
else if ($3 in n)
printf "duplicate old: %s (%s)\n", $3,$1
else
n[$3] = $1
next
}
{
if (!($3 in n))   # parentheses needed: "!" binds tighter than "in"
printf "no old: %s\n", $3
else if ($3 in seen)
printf "duplicate new: %s (%s)\n", $3,$1
else if ($1>n[$3])
printf "process: %s (%s > %s)\n", $3,$1,n[$3]
else
printf "skip %s (%s <= %s)\n", $3,$1,n[$3]
seen[$3]
}
' file-old.txt file-new.txt
skip 1.1.1.1 ('2023-09-06' <= '2023-09-06')
process: 2.2.2.2 ('2023-10-01' > '2023-09-01')
process: 3.3.3.3 ('2023-10-08' > '2023-09-08')
If your server file is actually valid JSON (i.e. doesn't contain those # fake comments), you can query it easily with jq. For example:
jq --arg ip 3.3.3.3 '
.[] | .server | select(.server_ip==$ip) |
.server_number // empty
' servers.txt
Unless it's a specific requirement, there are "better" languages to simplify this type of task, e.g. python, ruby, perl, or jq as has already been suggested.
old=old.txt
new=new.txt
regex="nextduedate: '(?<date>[^']+)'\ndedicatedip: (?<server_ip>.+)"
jq --arg regex "$regex" --rawfile old "$old" --rawfile new "$new" '
def parse_ips(name):
capture($regex; "g")
| {(name): (.date | strptime("%Y-%m-%d") | mktime), server_ip};
[
(.[].server | {server_ip, server_number}),
($old | parse_ips("old")),
($new | parse_ips("new"))
]
| group_by(.server_ip)[]
| add
| select(.old < .new and .server_number != null)
| .server_number
' servers.txt
12345678974
Here we create JSON records for old and new using capture():
{
"old": 1693958400,
"server_ip": "1.1.1.1"
}
{
"new": 1693958400,
"server_ip": "1.1.1.1"
}
Extract the server IP and number:
{
"server_ip": "3.3.3.3",
"server_number": 12345678974
}
Combine the records with group_by(), which we then filter.
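As a toy illustration of that merge step (assuming jq is installed): two partial records sharing a server_ip fold into a single object.

```shell
# group_by(.server_ip) buckets records sharing an IP; add then folds each
# bucket's objects together into one merged record.
merged=$(printf '%s' '[{"old":1693958400,"server_ip":"1.1.1.1"},{"new":1693958400,"server_ip":"1.1.1.1"}]' |
  jq -c 'group_by(.server_ip)[] | add')
echo "$merged"   # {"old":1693958400,"server_ip":"1.1.1.1","new":1693958400}
```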
Thanks a lot my friend; at the moment I'm too exhausted to check the answer and adapt it to my real data. But regarding jq, why did you use ip 3.3.3.3? Suppose we don't know the last one. I really don't know the last IP, because that file is generated dynamically.
Replace it with an appropriate variable when writing the loop.
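That glue loop might look like the sketch below. The two-element IP list and the one-entry servers file are hypothetical stand-ins for the dynamically generated data, and the curl call is left commented out.

```shell
# Hypothetical loop: for each IP the comparison stage produced, look up its
# server_number with jq and then (commented out here) call the API.
srv=$(mktemp)
cat > "$srv" <<'EOF'
[ { "server": { "server_ip": "3.3.3.3", "server_number": 12345678974 } } ]
EOF

found=""
for ip in 2.2.2.2 3.3.3.3; do    # in practice: the IPs the awk/jq stage printed
    server_number=$(jq -r --arg ip "$ip" '
        .[].server | select(.server_ip == $ip) | .server_number // empty
    ' "$srv")
    [ -n "$server_number" ] || continue   # IP missing from servers file: skip
    found="$found$server_number"
    # curl -u user:password "https://url.com/$server_number"
done
rm -f "$srv"
echo "$found"   # 12345678974
```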