1 - Problem
I am using spacy with Python to lemmatize text documents.
There are 500,000 documents, each up to 20 MB of plain text.
The problem is the following: spacy's memory consumption grows over time until all available memory is used.
2 - Background
My hardware configuration:
CPU: Intel i7-8700K 3.7 GHz (12 cores)
Memory: 16 GB
SSD: 1 TB
Onboard GPU, not used for this task
I am using multiprocessing to divide the task among several processes (workers).
Each worker receives a list of documents to process.
The main process monitors the child processes.
I initialize spacy once in each child process and use this single spacy instance to process the worker's whole list of documents.
The memory trace shows the following:
[ Memory trace - Top 10 ]
/opt/develop/virtualenv/lib/python3.6/site-packages/thinc/neural/mem.py:68: size=45.1 MiB, count=99, average=467 KiB
/opt/develop/virtualenv/lib/python3.6/posixpath.py:149: size=40.3 MiB, count=694225, average=61 B
:487: size=9550 KiB, count=77746, average=126 B
/opt/develop/virtualenv/lib/python3.6/site-packages/dawg_python/wrapper.py:33: size=7901 KiB, count=6, average=1317 KiB
/opt/develop/virtualenv/lib/python3.6/site-packages/spacy/lang/en/lemmatizer/_nouns.py:7114: size=5273 KiB, count=57494, average=94 B
prepare_docs04.py:372: size=4189 KiB, count=1, average=4189 KiB
/opt/develop/virtualenv/lib/python3.6/site-packages/dawg_python/wrapper.py:93: size=3949 KiB, count=5, average=790 KiB
/usr/lib/python3.6/json/decoder.py:355: size=1837 KiB, count=20456, average=92 B
/opt/develop/virtualenv/lib/python3.6/site-packages/spacy/lang/en/lemmatizer/_adjectives.py:2828: size=1704 KiB, count=20976, average=83 B
prepare_docs04.py:373: size=1633 KiB, count=1, average=1633 KiB
import gc
import os, subprocess, spacy, sys, tracemalloc
from multiprocessing import Pipe, Process, Lock
from time import sleep

# START: memory trace
tracemalloc.start()

# Load spacy
spacyMorph = spacy.load("en_core_web_sm")

#
# Get word's lemma
#
def getLemma(word):
    global spacyMorph
    lemmaOutput = spacyMorph(str(word))
    return lemmaOutput

#
# Worker's logic
#
def workerNormalize(lock, conn, params):
    documentCount = 1
    for filenameRaw in params[1]:
        documentTotal = len(params[1])
        documentID = int(os.path.basename(filenameRaw).split('.')[0])

        # Send the worker's current progress to the main process
        statusMessage = "WORKING:{:d},{:d},".format(documentID, documentCount)
        if lock is not None:
            lock.acquire()
            try:
                conn.send(statusMessage)
            finally:
                lock.release()
        else:
            print(statusMessage)
        documentCount += 1

        # ----------------
        # Some code is excluded for clarity's sake
        # I've got a "wordList" from file "filenameRaw"
        # ----------------

        wordCount = 1
        wordTotalCount = len(wordList)
        for word in wordList:
            lemma = getLemma(word)
            wordCount += 1

        # ----------------
        # Then I collect all lemmas and save them to another text file
        # ----------------

        # Here I'm trying to reduce memory usage
        del wordList
        del word
        gc.collect()
if __name__ == '__main__':
    lock = Lock()
    processList = []

    # ----------------
    # Some code is excluded for clarity's sake
    # Here I'm getting the full list of files "fileTotalList" which I need to lemmatize
    # ----------------
    while cursorEnd < (docTotalCount + stepSize):
        fileList = fileTotalList[cursorStart:cursorEnd]

        # ----------------
        # Create workers and populate them with lists of files to process
        # ----------------
        processData = {}
        processData['total'] = len(fileList)  # worker total progress
        processData['count'] = 0              # count of documents the worker has done
        processData['currentDocID'] = 0       # current document ID the worker is working on
        processData['comment'] = ''           # additional comment (optional)
        processData['con_parent'], processData['con_child'] = Pipe(duplex=False)
        processName = 'worker ' + str(count) + " at " + str(cursorStart)
        processData['handler'] = Process(target=workerNormalize, name=processName,
                                         args=(lock, processData['con_child'], [processName, fileList]))

        processList.append(processData)
        processData['handler'].start()

        cursorStart = cursorEnd
        cursorEnd += stepSize
        count += 1

    # ----------------
    # Run the monitor to look after the workers
    # ----------------
    while True:
        runningCount = 0

        # Worker communication format:
        #   STATUS:COMMENTS
        #
        # STATUS:
        #   WORKING - the worker is working
        #   CLOSED  - the worker has finished its job and closed the pipe connection
        #
        # COMMENTS for the WORKING status:
        #   DOCID,COUNT,COMMENTS
        #   DOCID    - current document ID the worker is working on
        #   COUNT    - count of done documents
        #   COMMENTS - additional comments (optional)

        # ----------------
        # Run through the list of workers ...
        # ----------------
        for i, process in enumerate(processList):
            if process['handler'].is_alive():
                runningCount += 1

                # ----------------
                # ... and check if there is something in the PIPE
                # ----------------
                if process['con_parent'].poll():
                    try:
                        message = process['con_parent'].recv()
                        status = message.split(':')[0]
                        comment = message.split(':')[1]

                        # ----------------
                        # Some code is excluded for clarity's sake
                        # Update the worker's information and progress in "processList"
                        # ----------------
                    except EOFError:
                        print("EOF----")

                # ----------------
                # Some code is excluded for clarity's sake
                # Here I draw some progress lines per worker
                # ----------------
            else:
                # The worker has finished its job. Close the connection.
                process['con_parent'].close()

        # All workers are done: stop monitoring
        if runningCount == 0:
            break

        # Wait for some time and monitor again
        sleep(PARAM['MONITOR_REFRESH_FREQUENCY'])

    print("================")
    print("**** DONE ! ****")
    print("================")

    # ----------------
    # Here I'm measuring memory usage to find the most "gluttonous" part of the code
    # ----------------
    snapshot = tracemalloc.take_snapshot()
    top_stats = snapshot.statistics('lineno')
    print("[ Memory trace - Top 10 ]")
    for stat in top_stats[:10]:
        print(stat)
Best answer
Memory leaks with spacy
Memory problems when processing large amounts of data seem to be a known issue; see the related GitHub issues in the spaCy repository.
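A workaround sometimes used for leaks of this kind (an addition on top of the answer, not part of it) is to recycle worker processes after a fixed number of tasks, which caps how much memory any single process can accumulate. multiprocessing.Pool supports this directly via maxtasksperchild; a minimal sketch, where processFile is a hypothetical per-file handler:

from multiprocessing import Pool

def processFile(filename):
    # Placeholder: read the file, lemmatize it, save the result.
    # Each worker would load its own spacy instance lazily.
    pass

if __name__ == '__main__':
    fileTotalList = ['0001.txt', '0002.txt']  # illustrative file list
    # maxtasksperchild=20 replaces each worker process after 20 files,
    # releasing whatever memory it has accumulated.
    with Pool(processes=10, maxtasksperchild=20) as pool:
        pool.map(processFile, fileTotalList)

Whether this pays off depends on the cost of reloading the model in each fresh worker, so the task count per child is worth tuning.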
Lemmatization
For lemmatization, use nlp.pipe and process the files in batches. spaCy cannot handle very long documents if you are using the parser or the entity recognizer, so you will need to break the text up somehow (or, for lemmatization/tagging only, you can increase nlp.max_length as much as you need).
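As a sketch of that batching idea (the chunking helper and the chunk size below are illustrative assumptions, not part of the answer):

import spacy

nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

def chunkText(text, maxChars=100000):
    # Split a long text into pieces no longer than maxChars so that
    # each piece stays safely under nlp.max_length.
    return [text[i:i + maxChars] for i in range(0, len(text), maxChars)]

def lemmatizeLongText(text):
    lemmas = []
    # nlp.pipe processes the chunks as a stream instead of making
    # one pipeline call per word.
    for doc in nlp.pipe(chunkText(text)):
        lemmas.extend(token.lemma_ for token in doc)
    return lemmas

Splitting on raw character counts can cut a token in half at a chunk boundary; in practice you would split on paragraph or sentence boundaries instead.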
If you only need lemmas, you can use the lookup lemmatizer directly:

from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LOOKUP
lemmatizer = Lemmatizer(lookup=LOOKUP)
print(lemmatizer(u"ducks", ''), lemmatizer(u"ducking", ''))
# Output:
# ['duck'] ['duck']
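The Lemmatizer(lookup=LOOKUP) API above matches the spaCy v2.0/v2.1 era of the question. In spaCy v3 the lookup lemmatizer became a pipeline component instead; a rough v3 equivalent, assuming the spacy-lookups-data package is installed:

import spacy

nlp = spacy.blank("en")
# "lookup" mode pulls its tables from spacy-lookups-data
nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
nlp.initialize()

doc = nlp("ducks ducking")
print([token.lemma_ for token in doc])  # ['duck', 'duck']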
If you need lemmas that take the part-of-speech context into account, process the texts directly with spacy, disabling the pipeline components you do not need:

nlp = spacy.load('en', disable=['parser', 'ner'])
# if needed: nlp.max_length = MAX_DOC_LEN_IN_CHAR
for doc in nlp.pipe(texts):
    for token in doc:
        print(token.lemma_)

If you process whole texts with nlp.pipe() rather than processing individual words, you should be able to tag (and lemmatize) many thousands of words per second in a single thread.
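Applied to the worker above, this means feeding whole file contents through nlp.pipe instead of calling the pipeline once per word. A minimal sketch; fileTexts and batch_size are assumptions, not values from the post:

import spacy

nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

def lemmatizeTexts(fileTexts):
    # fileTexts is a hypothetical list of whole-document strings.
    # batch_size is a tuning knob; 50 is just a starting point.
    allLemmas = []
    for doc in nlp.pipe(fileTexts, batch_size=50):
        allLemmas.append([token.lemma_ for token in doc])
    return allLemmas

This replaces the per-word getLemma() calls in the original worker, each of which ran the full pipeline on a single word.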
For python-3.x - Python: Spacy and memory consumption, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55841087/