
database - DBI connect failed: FATAL: sorry, too many clients already


I am running a crontab as shown below:

* 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
* 1 * * * /var/fdp/reportingscript/an_summary_report.pl
* 1 * * * /var/fdp/reportingscript/en_summary_report.pl
* 1 * * * /var/fdp/reportingscript/user_report.pl

and all of the scripts fail with the same error:

DBI connect('dbname=scs;host=192.168.18.23;port=5432','postgres',...) failed: FATAL: sorry, too many clients already at /var/fdp/reportingscript/sdp_incoming_traffic_tps_report line 38.

However, if I run the scripts manually, one at a time, no error occurs.

For reference, here is one of the scripts that produces the above error:

#!/usr/bin/perl

use strict;
use FindBin;
use lib $FindBin::Bin;
use Time::Local;
use warnings;
use DBI;
use File::Basename;
use CONFIG;
use Getopt::Long;
use Data::Dumper;

my $channel;
my $circle;
my $daysbefore;
my $dbh;
my $processed;
my $discarded;
my $db_name = "scs";
my $db_vip = "192.168.18.23";
my $db_port = "5432";
my $db_user = "postgres";
my $db_password = "postgres";
#### code to redirect all console output in log file
my ( $seco_, $minu_, $hrr_, $moday_, $mont_, $years_ ) = localtime(time);
$years_ += 1900;
$mont_ += 1;
my $timestamp = sprintf( "%d%02d%02d", $years_, $mont_, $moday_ );
$timestamp .= "_" . $hrr_ . "_" . $minu_ . "_" . $seco_;
print "timestamp is $timestamp \n";
my $logfile = "/var/fdp/log/reportlog/sdp_incoming_report_$timestamp";
print "\n output files is " . $logfile . "\n";
open( STDOUT, ">", $logfile ) or die "$0: dup: $!";
open STDERR, ">&STDOUT" or die "$0: dup: $!";

my ( $sec_, $min_, $hr_, $mday_, $mon_, $year_ ) = localtime(time);

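# A single database connection is opened here and shared by every query below.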
$dbh = DBI->connect( "DBI:Pg:dbname=$db_name;host=$db_vip;port=$db_port",
"$db_user", "$db_password", { 'RaiseError' => 1 } );
print "\n Dumper is " . $dbh . "\n";
my $sthcircle = $dbh->prepare("select id,name from circle");
$sthcircle->execute();

while ( my $refcircle = $sthcircle->fetchrow_hashref() ) {
print "\n dumper for circle is " . Dumper($refcircle);
my $namecircle = uc( $refcircle->{'name'} );
my $idcircle = $refcircle->{'id'};
$circle->{$namecircle} = $idcircle;
print "\n circle name : " . $namecircle . "id is " . $idcircle;
}

sub getDate {
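# Returns the date $daysago days ago as YYYY-MM-DD. Note that it also
# updates the globals $year_, $mon_, $mday_ used later for the CSV file name.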
my $daysago = shift;
$daysago = 0 unless ($daysago);
my @months = qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst ) = localtime( time - ( 86400 * $daysago ) );
# YYYYMMDD, e.g. 20060126
$year_ = $year + 1900;
$mday_ = $mday;
$mon_ = $mon + 1;
return sprintf( "%d-%02d-%02d", $year + 1900, $mon + 1, $mday );
}

GetOptions( "d=i" => \$daysbefore );

my $filedate = getDate($daysbefore);
print "\n filedate is $filedate \n";
my @basedir = CONFIG::getBASEDIR();
print "\n array has basedir" . Dumper(@basedir);
$mon_ = "0" . $mon_ if ( defined $mon_ && $mon_ <= 9 );
$mday_ = "0" . $mday_ if ( defined $mday_ && $mday_ <= 9 );

foreach (@basedir) {
my $both = $_;
print "\n dir is $both \n";
for ( keys %{$circle} ) {
my $path = $both;
my $circleid = $_;
print "\n circle is $circleid \n";
my $circleidvalue = $circle->{$_};
my $file_csv_path = "/opt/offline/reports/$circleid";
my %sdp_hash = ();
print "\n file is $file_csv_path csv file \n";
mkdir( "$file_csv_path", 0755 ) unless ( -d $file_csv_path );

my $csv_new_file
= $file_csv_path
. "/FDP_"
. $circleid
. "_SDPINCOMINGTPSREPORT_"
. $mday_ . "_"
. $mon_ . "_"
. $year_ . ".csv";
print "\n file is $csv_new_file \n";
print "\n date:$year_-$mon_-$mday_ \n";

open( DATA, ">>", $csv_new_file ) or die "cannot open $csv_new_file: $!";
$path = $path . $circleid . "/Reporting/EN/Sdp";
print "\n *****path is $path \n";
my @filess = glob("$path/*");

foreach my $file (@filess) {
print "\n Filedate ---------> $filedate file is $file \n";
if ( $file =~ /.*_sdp.log.$filedate-*/ ) {
print "\n found file for $circleid \n";
my $x;
my $log = $file;
my @a = split( "-", $file );
my $starttime = $a[3];
my $endtime = $starttime;
my $sdpid;
my $sdpid_value;
$starttime = "$filedate $starttime:00:00";
$endtime = "$filedate $endtime:59:59";
open( FH, "<", "$log" ) or die "cannot open < $log: $!";

while (<FH>) {
my $line = $_;
print "\n line is $line \n";
chomp($line);
$line =~ s/\s+$//;
my @a = split( ";", $line );
$sdpid = $a[4];
my $stat = $a[3];
$x->{$sdpid}->{$stat}++;
}
close(FH);
print "\n Dumper is x:" . Dumper($x) . "\n";
foreach my $sdpidvalue ( keys %{$x} ) {
print "\n sdpvalue us: $sdpidvalue \n";
if ( exists( $x->{$sdpidvalue}->{processed} ) ) {
$processed = $x->{$sdpidvalue}->{processed};
} else {
$processed = 0;
}
if ( exists( $x->{$sdpidvalue}->{discarded} ) ) {
$discarded = $x->{$sdpidvalue}->{discarded};
} else {
$discarded = 0;
}
my $sth_new1 = $dbh->prepare("select id from sdp_details where sdp_name=?");
print "\n sth new is " . Dumper($sth_new1);
$sth_new1->execute($sdpid);
while ( my $row1 = $sth_new1->fetchrow_hashref ) {
$sdpid_value = $row1->{'id'};
print "\n in hash rowref from sdp_details table " . Dumper($sdpid_value);
}
my $sth_check
= $dbh->prepare(
"select processed,discarded from sdp_incoming_tps where circle_id=? and sdp_id=? and start_time=? and end_time=?"
);
print "\n Dumper for database statement is " . Dumper($sth_check);
$sth_check->execute( $circleidvalue, $sdpid_value, $starttime, $endtime );
my $duplicate_row = 0;
my ( $success_, $failure_ );
while ( my $row_dup = $sth_check->fetchrow_hashref ) {
print "\n row_dup is " . Dumper($row_dup);
$duplicate_row = 1;
$success_ += $row_dup->{'processed'};
$failure_ += $row_dup->{'discarded'};
}
if ( $duplicate_row == 0 ) {
my $sth
= $dbh->prepare(
"insert into sdp_incoming_tps (id,circle_id,start_time,end_time,processed,discarded,sdp_id) select nextval('sdp_incoming_tps_id'),?,?,?,?,?,?"
);
$sth->execute( $circleidvalue, $starttime, $endtime, $processed, $discarded, $sdpid_value );
} else {
$success_ += $processed;
$failure_ += $discarded;
my $sth
= $dbh->prepare(
"update sdp_incoming_tps set processed=?,discarded=? where circle_id=? and sdp_id=? and start_time=? and end_time=?"
);
$sth->execute( $success_, $failure_, $circleidvalue, $sdpid_value, $starttime, $endtime );
}
# my $file_csv_path = "/opt/offline/reports/$circleid";
# my %sdp_hash = ();
# if ( -d "$file_csv_path" ) {
# } else {
# mkdir( "$file_csv_path", 0755 );
# }
# my $csv_new_file = $file_csv_path . "\/FDP_" . $circleid . "_SDPINCOMINGTPSREPORT_". $mday_ . "_" . $mon_ . "_" . $year_ . "\.csv";
print "\n file is $csv_new_file \n";
print "\n date:$year_-$mon_-$mday_ \n";
close(DATA);
open( DATA, ">>", $csv_new_file ) or die "cannot open $csv_new_file: $!";
print "\n csv new file is $csv_new_file \n";
my $sth_new2 = $dbh->prepare("select * from sdp_details");
$sth_new2->execute();

while ( my $row1 = $sth_new2->fetchrow_hashref ) {
my $sdpid = $row1->{'id'};
$sdp_hash{$sdpid} = $row1->{'sdp_name'};
}
#print "\n resultant sdp hash".Dumper(%sdp_hash);
#$mon_="0".$mon_;
print "\n timestamp being matched is $year_-$mon_-$mday_ \n";
print "\n circle id value is $circleidvalue \n";
my $sth_new
= $dbh->prepare(
"select * from sdp_incoming_tps where date_trunc('day',start_time)=? and circle_id=?"
);
$sth_new->execute( "$year_-$mon_-$mday_", $circleidvalue );
print "\n final db line is " . Dumper($sth_new);
my $str = $sth_new->{NAME};
my @str_arr = @$str;
shift(@str_arr);
shift(@str_arr);
my @upper = map { ucfirst($_) } @str_arr;
$upper[4] = "Sdp-Name";
my $st = join( ",", @upper );
$st = $st . "\n";
$st =~ s/\_/\-/g;
#print $fh "sep=,"; print $fh "\n";

print DATA $st;
while ( my $row = $sth_new->fetchrow_hashref ) {

print "\n found matching row \n";
my $row_line
= $row->{'start_time'} . ","
. $row->{'end_time'} . ","
. $row->{'processed'} . ","
. $row->{'discarded'} . ","
. $sdp_hash{ $row->{'sdp_id'} } . "\n";
print "\n row line matched is " . $row_line . "\n";
print DATA $row_line;
}
close(DATA);
}
} else {
next;
}
}
}
}

$dbh->disconnect;

Please help: how can I avoid this error?

Thanks in advance.

Best Answer

As the error message says, the immediate problem is that running all of these scripts simultaneously requires more database connections than the server allows. Since they work when run individually, running them one at a time will avoid it.
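
If you want to see how close you are to the limit, a minimal diagnostic sketch like the one below (reusing the connection settings from the script above; pg_stat_activity and max_connections are standard PostgreSQL) reports the current client count against the server's maximum:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Same connection settings as the report scripts; adjust as needed.
my $dbh = DBI->connect( "DBI:Pg:dbname=scs;host=192.168.18.23;port=5432",
"postgres", "postgres", { RaiseError => 1 } );

# Number of client connections currently open on the server.
my ($used) = $dbh->selectrow_array("select count(*) from pg_stat_activity");

# The server-wide connection limit.
my ($max) = $dbh->selectrow_array("show max_connections");

print "connections in use: $used of $max\n";
$dbh->disconnect;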

The underlying problem is that your crontab is wrong. * 1 * * * runs every script once a minute, every minute, from 01:00 through 01:59 each day. If a script takes longer than a minute to complete, the next set of runs starts before the previous set has finished, each needing its own batch of database connections, which will quickly burn through the available connection pool.

I assume you only want each daily script to run once a day, not sixty times, so change the schedule to 5 1 * * * to run only at 01:05.

If you still have problems after that, run each script at a different time (which is probably a good idea anyway):

5 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
10 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
15 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
20 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
25 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
30 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
35 1 * * * /var/fdp/reportingscript/an_summary_report.pl
40 1 * * * /var/fdp/reportingscript/en_summary_report.pl
45 1 * * * /var/fdp/reportingscript/user_report.pl
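
Independently of the schedule, each script can also guard against overlapping runs with a non-blocking exclusive lock, so a second copy started by cron exits immediately instead of opening more database connections. A minimal sketch, assuming one lock file per script (the path here is only an example):

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

# One lock file per script; this path is an example.
my $lockfile = "/var/run/an_outgoing_tps_report.lock";
open( my $lock, ">", $lockfile ) or die "cannot open $lockfile: $!";

# If a previous run still holds the lock, exit instead of piling up.
unless ( flock( $lock, LOCK_EX | LOCK_NB ) ) {
print "previous run still in progress, exiting\n";
exit 0;
}

# ... report logic goes here; the lock is released when the script exits.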

A similar question about "database - DBI connect failed: FATAL: sorry, too many clients already" can be found on Stack Overflow: https://stackoverflow.com/questions/26114322/
