
bash - How to wait for wget to finish to get more resources


I'm new to bash.

I want to fetch some resources in parallel.

What's wrong with the following code:

for item in $list
do
    if [ $i -le 10 ];then
        wget -b $item
        let "i++"
    else
        wait
        i=1
    fi
done

When I run this shell script, it throws an error:

fork: Resource temporarily unavailable

My question is how to use wget correctly here.

Edit:

My problem is that there are about four thousand URLs to download; if I let all of those jobs run in parallel, fork: Resource temporarily unavailable is thrown. I don't know how to control the number of jobs running in parallel.
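
A minimal sketch of that batching idea, for reference (assuming $list holds the URLs): start the downloads with & instead of wget -b so the shell can actually wait for them, and wait after every 10, which keeps the job count bounded no matter how many URLs are in $list:

i=0
for item in $list; do
    wget -q "$item" &          ## & (not wget -b) so the shell tracks the job
    i=$((i+1))
    if [ "$i" -ge 10 ]; then   ## after every batch of 10 downloads ...
        wait                   ## ... wait for the whole batch to finish
        i=0
    fi
done
wait                           ## wait for the final partial batch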

Best Answer

Check the background jobs with jobs|grep:

#!/bin/bash

urls=('www.cnn.com' 'www.wikipedia.org') ## input data

for ((i=-1;++i<${#urls[@]};)); do
curl -L -s ${urls[$i]} >file-$i.html & ## background jobs
done

until [[ -z `jobs|grep -E -v 'Done|Terminated'` ]]; do
sleep 0.05; echo -n '.' ## do something while waiting
done

echo; ls -l file*\.html ## list downloaded files

Result:

............................
-rw-r--r-- 1 xxx xxx 155421 Jan 20 00:50 file-0.html
-rw-r--r-- 1 xxx xxx 74711 Jan 20 00:50 file-1.html

Another variation, with simple parallel tasks:

#!/bin/bash

urls=('www.yahoo.com' 'www.hotmail.com' 'stackoverflow.com')

_task1(){ ## task 1: download files
for ((i=-1;++i<${#urls[@]};)); do
curl -L -s ${urls[$i]} >file-$i.html &
done; wait
}
_task2(){ echo hello; } ## task 2: a fake task
_task3(){ echo hi; } ## task 3: a fake task

_task1 & _task2 & _task3 & ## run them in parallel
wait ## and wait for them

ls -l file*\.html ## list results of all tasks
echo done ## and do something

Result:

hello
hi
-rw-r--r-- 1 xxx xxx 320013 Jan 20 02:19 file-0.html
-rw-r--r-- 1 xxx xxx 3566 Jan 20 02:19 file-1.html
-rw-r--r-- 1 xxx xxx 253348 Jan 20 02:19 file-2.html
done

An example that limits the number of parallel downloads at a time (max = 3):

#!/bin/bash

m=3 ## max jobs (downloads) at a time
t=4 ## retries for each download

_debug(){ ## list jobs to see (debug)
printf ":: jobs running: %s\n" "$(echo `jobs -p`)"
}

## sample input data
## is redirected to filehandle=3
exec 3<<-EOF
www.google.com google.html
www.hotmail.com hotmail.html
www.wikipedia.org wiki.html
www.cisco.com cisco.html
www.cnn.com cnn.html
www.yahoo.com yahoo.html
EOF

## read data from filehandle=3, line by line
while IFS=' ' read -u 3 -r u f || [[ -n "$f" ]]; do
[[ -z "$f" ]] && continue ## ignore empty input line
while [[ $(jobs -p|wc -l) -ge "$m" ]]; do ## while $m or more jobs are running
_debug ## then list jobs to see (debug)
wait -n ## and wait for some job(s) to finish
done
curl --retry $t -Ls "$u" >"$f" & ## download in background
printf "job %d: %s => %s\n" $! "$u" "$f" ## print job info to see (debug)
done

_debug; wait; ls -l *\.html ## see final results

Output:

job 22992: www.google.com => google.html
job 22996: www.hotmail.com => hotmail.html
job 23000: www.wikipedia.org => wiki.html
:: jobs running: 22992 22996 23000
job 23022: www.cisco.com => cisco.html
:: jobs running: 22996 23000 23022
job 23034: www.cnn.com => cnn.html
:: jobs running: 23000 23022 23034
job 23052: www.yahoo.com => yahoo.html
:: jobs running: 23000 23034 23052
-rw-r--r-- 1 xxx xxx 61473 Jan 21 01:15 cisco.html
-rw-r--r-- 1 xxx xxx 155055 Jan 21 01:15 cnn.html
-rw-r--r-- 1 xxx xxx 12514 Jan 21 01:15 google.html
-rw-r--r-- 1 xxx xxx 3566 Jan 21 01:15 hotmail.html
-rw-r--r-- 1 xxx xxx 74711 Jan 21 01:15 wiki.html
-rw-r--r-- 1 xxx xxx 319967 Jan 21 01:15 yahoo.html
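
Note that wait -n requires bash 4.3 or newer. As an alternative way to cap concurrency (a sketch with a different tool, not part of the answer above), xargs can limit the number of simultaneous curl processes; this assumes the same "URL output-file" pairs stored one per line in a hypothetical urls.txt:

## -n 2: pass two fields (URL and output file) to each command
## -P 3: run at most 3 curl processes at a time
xargs -n 2 -P 3 sh -c 'curl --retry 4 -Ls "$1" >"$2"' _ <urls.txt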

After reading your updated question, I think lftp is much easier to use; it can log and download (automatic follow-link + retry-download + continue-download). You never need to worry about job/fork resources because you only run a few lftp commands. Just split your download list into a few smaller lists (a sketch for generating such lists follows the output below) and lftp will download them for you:

$ cat downthemall.sh 
#!/bin/bash

## run: lftp -c 'help get'
## to know how to use lftp to download files
## with automatically retry+continue

p=() ## pid list

for l in *\.lst; do
lftp -f "$l" >/dev/null & ## run processes in parallel
p+=("--pid=$!") ## record pid
done

until [[ -f d.log ]]; do sleep 0.5; done ## wait for the log file
tail -f d.log ${p[@]} ## print results when downloading

Output:

$ cat 1.lst 
set xfer:log true
set xfer:log-file d.log
get -c http://www.microsoft.com -o micro.html
get -c http://www.cisco.com -o cisco.html
get -c http://www.wikipedia.org -o wiki.html

$ cat 2.lst
set xfer:log true
set xfer:log-file d.log
get -c http://www.google.com -o google.html
get -c http://www.cnn.com -o cnn.html
get -c http://www.yahoo.com -o yahoo.html

$ cat 3.lst
set xfer:log true
set xfer:log-file d.log
get -c http://www.hp.com -o hp.html
get -c http://www.ibm.com -o ibm.html
get -c http://stackoverflow.com -o stack.html

$ rm *log *html;./downthemall.sh
2018-01-22 02:10:13 http://www.google.com.vn/?gfe_rd=cr&dcr=0&ei=leVkWqiOKfLs8AeBvqBA -> /tmp/1/google.html 0-12538 103.1 KiB/s
2018-01-22 02:10:13 http://edition.cnn.com/ -> /tmp/1/cnn.html 0-153601 362.6 KiB/s
2018-01-22 02:10:13 https://www.microsoft.com/vi-vn/ -> /tmp/1/micro.html 0-129791 204.0 KiB/s
2018-01-22 02:10:14 https://www.cisco.com/ -> /tmp/1/cisco.html 0-61473 328.0 KiB/s
2018-01-22 02:10:14 http://www8.hp.com/vn/en/home.html -> /tmp/1/hp.html 0-73136 92.2 KiB/s
2018-01-22 02:10:14 https://www.ibm.com/us-en/ -> /tmp/1/ibm.html 0-32700 131.4 KiB/s
2018-01-22 02:10:15 https://vn.yahoo.com/?p=us -> /tmp/1/yahoo.html 0-318657 208.4 KiB/s
2018-01-22 02:10:15 https://www.wikipedia.org/ -> /tmp/1/wiki.html 0-74711 60.7 KiB/s
2018-01-22 02:10:16 https://stackoverflow.com/ -> /tmp/1/stack.html 0-253033 180.8
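
To go from one large URL list to the *.lst chunks shown above, something like the following could generate them (a sketch assuming a hypothetical urls.txt with one "URL output-file" pair per line; the chunk size of 100 is arbitrary):

#!/bin/bash
## split urls.txt into pieces of 100 lines each (chunk-aa, chunk-ab, ...)
split -l 100 urls.txt chunk-

for c in chunk-*; do
    {
        echo 'set xfer:log true'
        echo 'set xfer:log-file d.log'
        while read -r u f; do       ## one "URL output-file" pair per line
            echo "get -c $u -o $f"
        done <"$c"
    } >"$c.lst"
    rm "$c"                         ## keep only the generated .lst files
done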

Regarding "bash - How to wait for wget to finish to get more resources", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48342517/
