
python - How to optimize searching a large tsv file across two tuples in Python?

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 18:56:08


Hello. I'm a Python newbie and have been working on matching tuple elements across two separate tuples. The files I'm working with run up to 3M rows, and my code is very slow. I've been reading posts for weeks but can't seem to piece the code together correctly. Here is what I have so far. (The data has been edited and simplified for clarity.) As an example, say I have:

authList = (('jennifer', 35, 20), ('john', 20, 34), ('fred', 34, 89))
# a tuple of unique tweet authors with their x, y coordinates, exported
# from MS Access in the form of a txt file

rtAuthors = (('larry', 57, 24, 'simon'), ('jeremy', 24, 15, 'john'), ('sandra', 39, 24, 'fred'))
# a tuple of tuples holding the author, their x, y coordinates, and the
# author they are retweeting (taken from the "RT @" portion of their tweet)

I'm trying to create a new tuple (rtAuthList) that pulls the x, y coordinates from authList for any retweeted author appearing in rtAuthors.

So I would end up with a new tuple like this:

 rtAuthList = (('jeremy', 24, 15, 'john', 20, 34), ('sandra', 39, 24, 'fred', 34, 89))

My question really has two parts, so I'm not sure whether I should post two questions or rename this one to cover both. First, the process as I've written it takes about an hour to run. There must be a faster way.

The other part of my question is why it only completes about half of the final tuple. With my current dataset, after the first two steps there are about 250,000 rows in authList and 500,000 in rtAuthors. But when I run the third step and open rtAuthList at the end, it has only covered the first 10 days of my data and ignored the last 20 (I'm working with a month of tweets). I can't tell why it isn't checking the whole rtAuthors list.
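(For what it's worth, one pattern worth checking here: a bare `try/except` wrapped around a whole loop, as in Step 1 below, abandons the loop at the first bad row rather than skipping it, which silently truncates the output. A minimal sketch with made-up author strings:)

```python
rows = ["alice (x)", "bob", "carol (y)", "dave (z)"]  # made-up data; "bob" lacks " ("

# try OUTSIDE the loop: the first bad row ends the whole loop
outside = []
try:
    for r in rows:
        outside.append(r[:r.index(" (")])
except ValueError:
    pass

# try INSIDE the loop: only the bad row is skipped
inside = []
for r in rows:
    try:
        inside.append(r[:r.index(" (")])
    except ValueError:
        pass

print(outside)  # ['alice']
print(inside)   # ['alice', 'carol', 'dave']
```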

I've included my entire code below so you can see what I'm trying to do, but after creating the authList and rtAuthors tuples, it's really Step 3 I need help with. Please understand that I'm completely new to programming, so write your answer as if I know nothing, even though that is probably obvious from my code.

import csv
import sys
import os

authors = ""

class TwitterFields:  # associated with monthly tweets from the Twitter API
    def __init__(self, ID, COORD1, COORD2, TIME, AUTH, TEXT):
        self.ID = ID
        self.COORD1 = COORD1
        self.COORD2 = COORD2
        self.TIME = TIME
        self.AUTH = AUTH
        self.TEXT = TEXT
        self.RTAUTH = ""
        self.RTX = ""
        self.RTY = ""

    description = "Twitter Data Class: holds twitter data fields from API"
    author = ""

class AuthorFields:  # associated with the txt file exported from MS Access
    def __init__(self, AUTH, COORD1, COORD2):
        self.AUTH = AUTH
        self.COORD1 = COORD1
        self.COORD2 = COORD2
        self.RTAUTH = ""
        self.RTX = ""
        self.RTY = ""

    description = "Author Data Class: holds author data fields from MS Access export"
    author = ""


tw = []  # empty list to hold AuthorFields objects
rt = []  # empty list to hold TwitterFields objects


authList = ()    # tuple for holding auth, x, and y from tw list
rtAuthors = ()   # tuple for holding tuples from rt where "RT @" is in tweet text
rtAuthList = ()  # tuple for holding results of set intersection

e = ()  # tuple for authList
b = ()  # tuple for rtAuthors
c = ()  # tuple for rtAuthList
bad_data = []  # a container for bad data

with open(r'C:\Users\Amy\Desktop\Code\Merge2.txt') as g:  # open MS Access export file
    for line in g:
        strLine = line.rstrip('\r\n').split("\t")
        tw.append(AuthorFields(str(strLine[0]),  # author name
                               strLine[1],       # x coordinate
                               strLine[2]))      # y coordinate


## Step 1 ##
# Loop through the unique author dataset (tw) and make a list of all authors, x, y
try:
    for i in range(1, len(tw)):
        e = (tw[i].AUTH[:tw[i].AUTH.index(" (")], tw[i].COORD1, tw[i].COORD2)
        authList = authList + (e,)
except:
    bad_data.append(i)

print "length of authList = ", len(authList)


# Loop through tweet txt file from MS Access

with open(r'C:\Users\Amy\Desktop\Code\Syria_2012_08UTCedits3.txt') as f:
    for line in f:
        strLine = line.rstrip('\r\n').split('\t')  # parse each line on tabs
        rt.append(TwitterFields(str(strLine[0]),  # tweet ID
                                strLine[5],       # x coordinate
                                strLine[6],       # y coordinate
                                strLine[8],       # time stamp
                                strLine[9],       # author
                                strLine[12]))     # tweet text

## Step 2 ##
## Loop through new list (rt) to find all instances of "RT @" and retrieve author name

for i in range(1, len(rt)):  # creates tuple of (auth, x, y, time, rtauth)
    if rt[i].TEXT[:4] == 'RT @':  # finds author in tweet text between "RT @" and ":"
        end = rt[i].TEXT.find(":")
        rt[i].RTAUTH = rt[i].TEXT[4:end]
        b = (rt[i].AUTH, rt[i].COORD1, rt[i].COORD2, rt[i].TIME, rt[i].RTAUTH)
        rtAuthors = rtAuthors + (b,)
    else:
        pass

print "length of rtAuthors = ", len(rtAuthors)


## Step 3 ##

## Loop through new rtAuthors tuple and find where rt[i].RTAUTH matches tw[i].AUTH in
## authList.


set1 = set(k[4] for k in rtAuthors).intersection(x[0] for x in authList)
#e = iter(set1).next()
set2 = list(set1)


print "Length of first set = ", len(set2)

# For each match, grab the x and y from authList and copy to rt[i].RTX and rt[i].RTY

for i in range(1, len(rtAuthors)):
    if rt[i].RTAUTH in set2:
        authListIndex = [x[0] for x in authList].index(rt[i].RTAUTH)  # get record #
        rt[i].RTX = authList[authListIndex][1]  # grab the x
        rt[i].RTY = authList[authListIndex][2]  # grab the y
        c = (rt[i].AUTH, rt[i].COORD1, rt[i].COORD2, rt[i].TIME, rt[i].RTAUTH,
             rt[i].RTX, rt[i].RTY)
        rtAuthList = rtAuthList + (c,)  # create new tuple of tuples with matches
    else:
        pass

print "length of rtAuthList = ", len(rtAuthList)

Best Answer

In Step 3 you are using an O(n²) algorithm to match the tuples. If you build a lookup dictionary for authList instead, the matching can be done in O(n) time...

>>> authList = ('jennifer', 35, 20), ('john', 20, 34), ('fred', 34, 89)
>>> rtAuthors = ('larry', 57, 24, 'simon'), ('jeremy', 24, 15, 'john'), ('sandra', 39, 24, 'fred')
>>> authDict = {t[0]: t[1:] for t in authList}
>>> rtAuthList = [t + authDict[t[-1]] for t in rtAuthors if t[-1] in authDict]
>>> print rtAuthList
[('jeremy', 24, 15, 'john', 20, 34), ('sandra', 39, 24, 'fred', 34, 89)]
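The same dictionary idea extends to the whole pipeline: read both tab-separated files with the `csv` module, build the author lookup once, then join in a single pass. A sketch under assumed, simplified column layouts (author file as author/x/y, tweet file as author/x/y/time/text), with small in-memory strings standing in for the real files:

```python
import csv
import io

# Hypothetical stand-ins for the two TSV files (simplified columns)
author_tsv = "john\t20\t34\nfred\t34\t89\n"
tweet_tsv = ("jeremy\t24\t15\t08:00\tRT @john: hello\n"
             "sandra\t39\t24\t09:00\tRT @fred: hi\n"
             "larry\t57\t24\t10:00\tjust a tweet\n")

# Build the lookup dict once: O(n) over the author file
authDict = {}
for row in csv.reader(io.StringIO(author_tsv), delimiter='\t'):
    authDict[row[0]] = (row[1], row[2])

# Single pass over the tweet file: parse "RT @...:" and join against the dict
rtAuthList = []
for row in csv.reader(io.StringIO(tweet_tsv), delimiter='\t'):
    text = row[4]
    if text.startswith('RT @'):
        rtauth = text[4:text.find(':')]   # author name between "RT @" and ":"
        if rtauth in authDict:            # O(1) dict lookup, not a list scan
            rtx, rty = authDict[rtauth]
            rtAuthList.append((row[0], row[1], row[2], row[3], rtauth, rtx, rty))

print(rtAuthList)
```

With the real files you would replace `io.StringIO(...)` with `open(path)` and pick out the appropriate column indices; the overall shape (one dict build, one joining pass) stays the same.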

Regarding python - How to optimize searching a large tsv file across two tuples in Python?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17219831/
