
ios - Swift: downloading data from a url causes a semaphore_wait_trap freeze


In my app, tapping a button downloads data from an Internet site. The site is a list of links that contain binary data. Sometimes the first link may not contain the right data; in that case the app takes the next link in the array and gets the data from there. The links themselves are correct.

The problem is that when I tap the button, the app often (though not always) freezes for several seconds. After 5-30 seconds it unfreezes and the download proceeds normally. I understand that something is blocking the main thread. When I pause the process in Xcode I get this (note the semaphore_wait_trap):

[Xcode screenshot: the paused thread backtrace shows semaphore_wait_trap]

This is how I do it:

// Button Action
@IBAction func downloadWindNoaa(_ sender: UIButton)
{
    // Starts activity indicator
    startActivityIndicator()

    // Starts downloading and processing data

    // Either use this
    DispatchQueue.global(qos: .default).async
    {
        DispatchQueue.main.async
        {
            self.downloadWindsAloftData()
        }
    }

    // Or this - no difference.
    //downloadWindsAloftData()
}

func downloadWindsAloftData()
{
    // Creates a list of website addresses to request data: CHECKED.
    self.listOfLinks = makeGribWebAddress()

    // Extract and save the data
    saveGribFile()
}

// This downloads the data and saves it in a required format. I suspect this is the culprit.

func saveGribFile()
{
    // Check if the links have been created
    if (!self.listOfLinks.isEmpty)
    {
        /// Instance of OperationQueue
        queue = OperationQueue()

        // Convert array of Strings to array of URL links
        let urls = self.listOfLinks.map { URL(string: $0)! }

        guard self.urlIndex != urls.count else
        {
            NSLog("report failure")
            return
        }

        // Current link
        let url = urls[self.urlIndex]

        // Increment the url index
        self.urlIndex += 1

        // Add operation to the queue
        queue.addOperation { () -> Void in

            // Variables for Request, Queue, and Error
            let request = URLRequest(url: url)
            let session = URLSession.shared

            // Array of bytes that will hold the data
            var dataReceived = [UInt8]()

            // Read data
            let task = session.dataTask(with: request) { (data, response, error) -> Void in

                if error != nil
                {
                    print("Request transport error")
                }
                else
                {
                    let response = response as! HTTPURLResponse
                    let data = data!

                    if response.statusCode == 200
                    {
                        // Converting data to an array of bytes
                        dataReceived = [UInt8](data)
                    }
                    else
                    {
                        print("Request server-side error")
                    }
                }

                // Main thread
                OperationQueue.main.addOperation(
                {
                    // If downloaded data is less than 2 KB in size, repeat the operation
                    if dataReceived.count <= 2000
                    {
                        self.saveGribFile()
                    }
                    else
                    {
                        self.setWindsAloftDataFromGrib(gribData: dataReceived)

                        // Reset the URL Index back to 0
                        self.urlIndex = 0
                    }
                })
            }
            task.resume()
        }
    }
}


// Processing data further
func setWindsAloftDataFromGrib(gribData: [UInt8])
{
    // Stops spinning activity indicator
    stopActivityIndicator()

    // Other code to process data...
}

// Makes Web Address

let GRIB_URL = "http://xxxxxxxxxx"

func makeGribWebAddress() -> [String]
{
    var finalResult = [String]()

    // Main address site
    let address1 = "http://xxxxxxxx"

    // Address part with type of data
    let address2 = "file=gfs.t";
    let address4 = "z.pgrb2.1p00.anl&lev_250_mb=on&lev_450_mb=on&lev_700_mb=on&var_TMP=on&var_UGRD=on&var_VGRD=on"

    let leftlon = "0"
    let rightlon = "359"
    let toplat = "90"
    let bottomlat = "-90"

    // Address part with coordinates
    let address5 = "&leftlon="+leftlon+"&rightlon="+rightlon+"&toplat="+toplat+"&bottomlat="+bottomlat

    // Vector that includes all Grib files available for download
    let listOfFiles = readWebToString()

    if (!listOfFiles.isEmpty)
    {
        for i in 0..<listOfFiles.count
        {
            // Part of the link that includes the file
            let address6 = "&dir=%2F"+listOfFiles[i]

            // Extract time: last 2 characters
            let address3 = listOfFiles[i].substring(from: listOfFiles[i].index(listOfFiles[i].endIndex, offsetBy: -2))

            // Make the link
            let addressFull = (address1 + address2 + address3 + address4 + address5 + address6).trimmingCharacters(in: .whitespacesAndNewlines)

            finalResult.append(addressFull)
        }
    }

    return finalResult;
}


func readWebToString() -> [String]
{
    // Final array to return
    var finalResult = [String]()

    guard let dataURL = NSURL(string: self.GRIB_URL)
    else
    {
        print("IGAGribReader error: No URL identified")
        return []
    }

    do
    {
        // Get contents of the page
        let contents = try String(contentsOf: dataURL as URL)

        // Regular expression
        let expression: String = ">gfs\\.\\d+<"
        let range = NSRange(location: 0, length: contents.characters.count)

        do
        {
            // Match the URL content with regex expression
            let regex = try NSRegularExpression(pattern: expression, options: NSRegularExpression.Options.caseInsensitive)
            let contentsNS = contents as NSString
            let matches = regex.matches(in: contents, options: [], range: range)

            for match in matches
            {
                for i in 0..<match.numberOfRanges
                {
                    let resultingNS = contentsNS.substring(with: (match.rangeAt(i))) as String
                    finalResult.append(resultingNS)
                }
            }

            // Remove "<" and ">" from the strings
            if (!finalResult.isEmpty)
            {
                for i in 0..<finalResult.count
                {
                    finalResult[i].remove(at: finalResult[i].startIndex)
                    finalResult[i].remove(at: finalResult[i].index(before: finalResult[i].endIndex))
                }
            }
        }
        catch
        {
            print("IGAGribReader error: No regex match")
        }
    }
    catch
    {
        print("IGAGribReader error: URL content is not read")
    }

    return finalResult;
}

I have been trying to fix this for the past few weeks, to no avail. Any help would be greatly appreciated!

Best Answer


        let contents = try String(contentsOf: dataURL as URL)

You are calling String(contentsOf:) on the main thread (the main queue). This synchronously downloads the contents of the URL into a string. The main thread is there to drive the UI; running synchronous networking code on it freezes the UI. This is a big no-no.
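
If you want to get rid of the blocking call altogether, the page can also be fetched asynchronously with URLSession instead of String(contentsOf:). The following is a minimal sketch of that idea; fetchFileList(from:completion:) is a hypothetical helper, not part of the question's code, and the parsing mirrors the regex already used in readWebToString():

import Foundation

// Sketch only: fetch the page off the main thread and hand the parsed
// file names to a completion handler instead of blocking the caller.
func fetchFileList(from urlString: String, completion: @escaping ([String]) -> Void)
{
    guard let url = URL(string: urlString) else
    {
        completion([])
        return
    }

    // dataTask(with:completionHandler:) does its work on a background thread,
    // so the UI keeps responding while the page downloads.
    URLSession.shared.dataTask(with: url) { data, _, error in

        guard let data = data, error == nil,
              let contents = String(data: data, encoding: .utf8) else
        {
            completion([])
            return
        }

        // Same idea as readWebToString(): pick out the ">gfs.XXXXXXXX<" entries.
        let ns = contents as NSString
        let range = NSRange(location: 0, length: ns.length)
        let regex = try? NSRegularExpression(pattern: ">gfs\\.\\d+<", options: .caseInsensitive)
        let matches = regex?.matches(in: contents, options: [], range: range) ?? []

        let files = matches.map { match -> String in
            // Trim the surrounding "<" and ">".
            ns.substring(with: NSRange(location: match.range.location + 1,
                                       length: match.range.length - 2))
        }

        completion(files)
    }.resume()
}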

Never call readWebToString() on the main queue. Doing DispatchQueue.main.async { self.downloadWindsAloftData() } puts the block on exactly the main queue we should be avoiding. (async only means "execute it later"; it still executes on DispatchQueue.main.)

You should run downloadWindsAloftData only on a global queue, not on the main queue:

DispatchQueue.global(qos: .default).async {
    self.downloadWindsAloftData()
}

Only use DispatchQueue.main.async when you want to update the UI.
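
Putting it together, the button action could look like the sketch below: the slow synchronous work stays on a background queue, and only the UI update hops back to the main queue. This is a simplified illustration reusing the method names from the question; it assumes downloadWindsAloftData() finishes all of its work before returning (in the question's code, stopActivityIndicator() is actually called later from setWindsAloftDataFromGrib):

@IBAction func downloadWindNoaa(_ sender: UIButton)
{
    // UI work: we are already on the main thread here
    startActivityIndicator()

    DispatchQueue.global(qos: .default).async
    {
        // Blocking download and parsing: runs on a background thread
        self.downloadWindsAloftData()

        DispatchQueue.main.async
        {
            // UI update: back on the main queue
            self.stopActivityIndicator()
        }
    }
}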

Regarding "ios - Swift: downloading data from a url causes a semaphore_wait_trap freeze", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43160970/
