PowerShell script too slow at reading files


I am currently writing a PowerShell script that will be used in TeamCity as part of a build step. The script must:

  • recursively check all files with a specific extension (.item) in a folder,
  • read the third line of each file (which contains a GUID) and check whether any of these lines are duplicated,
  • log the paths of the files that contain a duplicate GUID, and log the GUID itself,
  • fail the TeamCity build if one or more duplicates are found

I am completely new to PowerShell scripting, but so far I have put together something that does what I expect it to do:

Write-Host "Start checking for Unicorn serialization errors."

$files = get-childitem "%system.teamcity.build.workingDir%\Sitecore\serialization" -recurse -include *.item | where {! $_.PSIsContainer} | % { $_.FullName }
$arrayOfItemIds = @()
$NrOfFiles = $files.Length
[bool] $FoundDuplicates = $false

Write-Host "There are $NrOfFiles Unicorn item files to check."

foreach ($file in $files)
{
    # Get-Content reads the entire file into memory just to get the third line
    $thirdLineOfFile = (Get-Content $file)[2]

    if ($arrayOfItemIds -contains $thirdLineOfFile)
    {
        $FoundDuplicates = $true
        $itemId = $thirdLineOfFile.Split(":")[1].Trim()

        Write-Host "Duplicate item ID found!"
        Write-Host "Item file path: $file"
        Write-Host "Detected duplicate ID: $itemId"
        Write-Host "-------------"
        Write-Host ""
    }
    else
    {
        $arrayOfItemIds += $thirdLineOfFile
    }
}

if ($FoundDuplicates)
{
    # TeamCity requires apostrophes in service messages to be escaped as |'
    "##teamcity[buildStatus status='FAILURE' text='One or more duplicate ID|'s were detected in Sitecore serialised items. Check the build log to see which files and ID|'s are involved.']"
    exit 1
}

Write-Host "End script checking for Unicorn serialization errors."

The problem: it is slow! The folder the script has to check currently contains over 14,000 .item files, and that number will most likely only keep growing. I understand that opening and reading this many files is a heavy operation, but I did not expect it to take about half an hour. That is far too long, because it would add half an hour to every (snapshot) build, which is not acceptable. I had hoped the script would finish within a few minutes.
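
To see how much of that half hour goes into the file reads themselves, the read loop can be timed in isolation with Measure-Command; a minimal sketch, using a placeholder path instead of the TeamCity working directory:

$elapsed = Measure-Command {
    # Read only the third line of each .item file, skipping the duplicate check.
    Get-ChildItem "C:\temp\serialization" -Recurse -Include *.item |
        ForEach-Object { $null = (Get-Content $_.FullName)[2] }
}
Write-Host "Reading took $($elapsed.TotalSeconds) seconds."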

I can hardly believe there is no faster way to do this, so any help here is greatly appreciated!

Solution

Well, I have to say that all 3 answers I have received so far helped me. I started by using the .NET Framework classes directly, and then also used a Dictionary to solve the growing-array problem. My own script took about 30 minutes to run; using the .NET Framework classes brought that down to 2 minutes, and after switching to the Dictionary solution it takes only 6 or 7 seconds! The final script I used:

Write-Host "Start checking for Unicorn serialization errors."

[String[]] $allFilePaths = [System.IO.Directory]::GetFiles("%system.teamcity.build.workingDir%\Sitecore\serialization", "*.item", "AllDirectories")
$IdsProcessed = New-Object 'system.collections.generic.dictionary[string,string]'
[bool] $FoundDuplicates = $false
$NrOfFiles = $allFilePaths.Length

Write-Host "There are $NrOfFiles Unicorn item files to check."
Write-Host ""

foreach ($filePath in $allFilePaths)
{
    # StreamReader lets us read only the first three lines instead of the whole file
    [System.IO.StreamReader] $sr = [System.IO.File]::OpenText($filePath)
    $unused1 = $sr.ReadLine() # skip the first line
    $unused2 = $sr.ReadLine() # skip the second line
    [string] $thirdLineOfFile = $sr.ReadLine()
    $sr.Close()

    if ($IdsProcessed.ContainsKey($thirdLineOfFile))
    {
        $FoundDuplicates = $true
        $itemId = $thirdLineOfFile.Split(":")[1].Trim()
        $otherFileWithSameId = $IdsProcessed[$thirdLineOfFile]

        Write-Host "---------------"
        Write-Host "Duplicate item ID found!"
        Write-Host "Detected duplicate ID: $itemId"
        Write-Host "Item file path 1: $filePath"
        Write-Host "Item file path 2: $otherFileWithSameId"
        Write-Host "---------------"
        Write-Host ""
    }
    else
    {
        $IdsProcessed.Add($thirdLineOfFile, $filePath)
    }
}

if ($FoundDuplicates)
{
    "##teamcity[buildStatus status='FAILURE' text='One or more duplicate ID|'s were detected in Sitecore serialised items. Check the build log to see which files and ID|'s are involved.']"
    exit 1
}

Write-Host "End script checking for Unicorn serialization errors. No duplicate ID's were found."

Thanks everyone!

Best Answer

Try replacing Get-Content with [System.IO.File]::ReadLines. If that is still too slow, consider using System.IO.StreamReader - it means writing more code, but it lets you read only the first 3 lines.
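
A minimal sketch of that suggestion, with a hypothetical $folder path in place of the real serialization root; ReadLines streams lines lazily, and Select-Object -First stops the pipeline after three lines, so the rest of each file is never read:

$folder = "C:\temp\serialization" # hypothetical path

foreach ($filePath in [System.IO.Directory]::EnumerateFiles($folder, "*.item", "AllDirectories"))
{
    # ReadLines returns a lazy IEnumerable[string]; -First 3 stops after the third line.
    $thirdLine = [System.IO.File]::ReadLines($filePath) |
        Select-Object -First 3 |
        Select-Object -Last 1
    # ...duplicate check as in the scripts above...
}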

For this question about a PowerShell script being too slow at reading files, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/36236389/
