"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2"
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025"
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792"
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800"
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595"
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957"
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212"
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080"
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731"
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000"
"DF","0001HARISH@GMAIL.COM","NF251352240086","22DEC2010","B2C","4006"
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000"
"DF","0001HARISH@GMAIL.COM","NF252022031180","09DEC2010","B2C","3439"
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136"
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41"
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96"
The above is sample data. The data is sorted by email address, and the file is very large, around 1.5 GB.
I want to write the output to another CSV file that looks like this:
"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2",1,0 days
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025",1,0 days
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792",1,0 days
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800",1,0 days
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595",1,0 days
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957",1,0 days
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212",1,0 days
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080",1,0 days
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731",1,0 days
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000",1,0 days
"DF","0001HARISH@GMAIL.COM","NF251352240086","09DEC2010","B2C","4006",1,0 days
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000",2,3 days
"DF","0001HARISH@GMAIL.COM","NF252022031180","22DEC2010","B2C","3439",3,10 days
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41",1,0 days
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96",2,1 days
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96",3,0 days
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96",4,9 days
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96",5,0 days
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96",6,4 days
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96",7,0 days
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136",8,44 days
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136",9,0 days
That is, if an entry appears for the first time I need to append 1; if it appears a second time I need to append 2, and so on. In other words, I need to count how many times each email address occurs in the file, and when an email occurs two or more times I also want the difference between the dates. Keep in mind that the dates are not sorted, so they also have to be sorted per email address. I am looking for a Python solution using numpy, pandas, or any other library that can handle this kind of huge data without running out of memory. I have a dual-core machine running CentOS 6.3 with 4 GB of RAM.
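For a file small enough to fit in memory, the whole transformation can be sketched directly with pandas. This is only a minimal sketch assuming a recent pandas version; the file names 'bookings.csv' and 'bookings_annotated.csv' and the column names are placeholders, not anything from the question:

import pandas as pd

cols = ['type', 'email', 'ref', 'date', 'channel', 'amount']
df = pd.read_csv('bookings.csv', names=cols, header=None)
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y')

# sort each email's rows by date, number the occurrences, and take the
# gap in days from that email's previous booking (0 for the first one)
df = df.sort_values(['email', 'date'])
df['count'] = df.groupby('email').cumcount() + 1
df['diff_days'] = df.groupby('email')['date'].diff().dt.days.fillna(0).astype(int)

df.to_csv('bookings_annotated.csv', index=False)

The accepted answer below does the same per-email computation out of core, so it still works when the 1.5 GB file does not fit in 4 GB of RAM.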
Best Answer
Make sure you have pandas 0.11, and read these docs: http://pandas.pydata.org/pandas-docs/dev/io.html#hdf5-pytables , and these recipes: http://pandas.pydata.org/pandas-docs/dev/cookbook.html#hdfstore (especially "merging millions of rows").
Here is a solution that seems to work. This is the workflow:
Essentially, we take a chunk from the table and combine it with chunks from every other part of the file. The combiner function does not reduce; instead it computes the function (the difference in days) between all the elements in that chunk, eliminating duplicates along the way and picking up the latest data after each pass. A bit like a recursive reduce.
This should be O(num_of_chunks**2) in memory and computation time. In your case, chunksize could be 1m (or more).
processing [0] [datastore.h5]
processing [1] [datastore_0.h5]
count date diff email
4 1 2011-06-24 00:00:00 0 0000.ANU@GMAIL.COM
1 1 2011-06-24 00:00:00 0 00000.POO@GMAIL.COM
0 1 2010-07-26 00:00:00 0 00000000@11111.COM
2 1 2013-01-01 00:00:00 0 0000650000@YAHOO.COM
3 1 2013-01-26 00:00:00 0 00009.GAURAV@GMAIL.COM
5 1 2011-10-29 00:00:00 0 0000MANNU@GMAIL.COM
6 1 2011-11-21 00:00:00 0 0000PRANNOY0000@GMAIL.COM
7 1 2011-06-26 00:00:00 0 0000PRANNOY0000@YAHOO.CO.IN
8 1 2012-10-25 00:00:00 0 0000RAHUL@GMAIL.COM
9 1 2011-05-10 00:00:00 0 0000SS0@GMAIL.COM
12 1 2010-12-09 00:00:00 0 0001HARISH@GMAIL.COM
11 2 2010-12-12 00:00:00 3 0001HARISH@GMAIL.COM
10 3 2010-12-22 00:00:00 13 0001HARISH@GMAIL.COM
14 1 2012-11-28 00:00:00 0 000AYUSH@GMAIL.COM
15 2 2012-11-29 00:00:00 1 000AYUSH@GMAIL.COM
17 3 2012-12-08 00:00:00 10 000AYUSH@GMAIL.COM
18 4 2012-12-12 00:00:00 14 000AYUSH@GMAIL.COM
13 5 2013-01-25 00:00:00 58 000AYUSH@GMAIL.COM
import pandas as pd
import StringIO
import numpy as np
from time import strptime
from datetime import datetime
# your data
data = """
"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2"
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025"
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792"
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800"
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595"
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957"
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212"
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080"
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731"
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000"
"DF","0001HARISH@GMAIL.COM","NF251352240086","22DEC2010","B2C","4006"
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000"
"DF","0001HARISH@GMAIL.COM","NF252022031180","09DEC2010","B2C","3439"
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136"
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41"
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96"
"""
# read in and create the store
data_store_file = 'datastore.h5'
store = pd.HDFStore(data_store_file,'w')
def dp(x, **kwargs):
    return [ datetime(*strptime(v,'%d%b%Y')[0:3]) for v in x ]

chunksize=5
reader = pd.read_csv(StringIO.StringIO(data),names=['x1','email','x2','date','x3','x4'],
                     header=0,usecols=['email','date'],parse_dates=['date'],
                     date_parser=dp, chunksize=chunksize)

for i, chunk in enumerate(reader):
    chunk['indexer'] = chunk.index + i*chunksize

    # create the global index, and keep it in the frame too
    df = chunk.set_index('indexer')

    # need to set a minimum size for the email column
    store.append('data',df,min_itemsize={'email' : 100})

store.close()

# define the combiner function
def combiner(x):
    # given a group of emails (the same), return a combination
    # with the new data

    # sort by the date
    y = x.sort('date')

    # calc the diff in days (an integer)
    y['diff'] = (y['date']-y['date'].iloc[0]).apply(lambda d: float(d.item().days))

    y['count'] = pd.Series(range(1,len(y)+1),index=y.index,dtype='float64')

    return y

# reduce the store (and create a new one by chunks)
in_store_file = data_store_file
in_store1 = pd.HDFStore(in_store_file)

# iter on the store 1
for chunki, df1 in enumerate(in_store1.select('data',chunksize=2*chunksize)):
    print "processing [%s] [%s]" % (chunki,in_store_file)

    out_store_file = 'datastore_%s.h5' % chunki
    out_store = pd.HDFStore(out_store_file,'w')

    # iter on store 2
    in_store2 = pd.HDFStore(in_store_file)
    for df2 in in_store2.select('data',chunksize=chunksize):

        # concat & drop dups
        df = pd.concat([df1,df2]).drop_duplicates(['email','date'])

        # group and combine
        result = df.groupby('email').apply(combiner)

        # remove the mi (that we created in the groupby)
        result = result.reset_index('email',drop=True)

        # only store those rows which are in df2!
        result = result.reindex(index=df2.index).dropna()

        # store to the out_store
        out_store.append('data',result,min_itemsize={'email' : 100})
    in_store2.close()

    out_store.close()
    in_store_file = out_store_file

in_store1.close()

# show the reduced store
print pd.read_hdf(out_store_file,'data').sort(['email','diff'])
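To run the same pipeline on the actual 1.5 GB file rather than the inline sample, only the reader needs to change. A hedged sketch follows; the path 'bookings.csv', the assumption that the real file has no header row, and the 1m-row chunk size are illustrative, not taken from the question:

# hypothetical: read the real file in 1m-row chunks instead of the StringIO sample
chunksize = 1000000
reader = pd.read_csv('bookings.csv', names=['x1','email','x2','date','x3','x4'],
                     header=None, usecols=['email','date'], parse_dates=['date'],
                     date_parser=dp, chunksize=chunksize)

The rest of the script (the indexing loop, the HDFStore appends, and the reducing pass) stays the same.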
Regarding "python - Need to compare very large files of around 1.5GB in Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16110252/