
python - Parallel jobs don't finish in scikit-learn's GridSearchCV


In the following script, the jobs launched by GridSearchCV appear to hang.

import json
import pandas as pd
import numpy as np
import unicodedata
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import SGDClassifier
import sklearn.cross_validation as CV
from sklearn.grid_search import GridSearchCV
from nltk.stem import WordNetLemmatizer

# Seed for randomization. Set to some definite integer for debugging and set to None for production
seed = None


### Text processing functions ###

def normalize(string):  # Remove diacritics and whatevs
    return "".join(ch.lower() for ch in unicodedata.normalize('NFD', string) if not unicodedata.combining(ch))

wnl = WordNetLemmatizer()
def tokenize(string):  # Ignores special characters and punct
    return [wnl.lemmatize(token) for token in re.compile(r'\w\w+').findall(string)]

def ngrammer(tokens):  # Gets all grams in each ingredient
    # e.g. with max_n = 2, ['olive', 'oil'] -> ['olive', 'oil', 'olive:oil']
    max_n = 2
    return [":".join(tokens[idx:idx+n]) for n in np.arange(1, 1 + min(max_n, len(tokens))) for idx in range(len(tokens) + 1 - n)]

print("Importing training data...")
with open('/Users/josh/dev/kaggle/whats-cooking/data/train.json','rt') as file:
    recipes_train_json = json.load(file)

# Build the grams for the training data
print('\nBuilding n-grams from input data...')
for recipe in recipes_train_json:
    recipe['grams'] = [term for ingredient in recipe['ingredients'] for term in ngrammer(tokenize(normalize(ingredient)))]

# Build vocabulary from training data grams.
vocabulary = list({gram for recipe in recipes_train_json for gram in recipe['grams']})

# Stuff everything into a dataframe.
ids_index = pd.Index([recipe['id'] for recipe in recipes_train_json],name='id')
recipes_train = pd.DataFrame([{'cuisine': recipe['cuisine'], 'ingredients': " ".join(recipe['grams'])} for recipe in recipes_train_json],columns=['cuisine','ingredients'], index=ids_index)


# Extract data for fitting
fit_data = recipes_train['ingredients'].values
fit_target = recipes_train['cuisine'].values

# extracting numerical features from the ingredient text
feature_ext = Pipeline([('vect', CountVectorizer(vocabulary=vocabulary)),
                        ('tfidf', TfidfTransformer(use_idf=True)),
                        ('svd', TruncatedSVD(n_components=1000))
                        ])
lsa_fit_data = feature_ext.fit_transform(fit_data)

# Build SGD Classifier
clf = SGDClassifier(random_state=seed)
# Hyperparameter grid for GridSearchCV.
parameters = {
    'alpha': np.logspace(-6, -2, 5),
}

# Init GridSearchCV with k-fold CV object
cv = CV.KFold(lsa_fit_data.shape[0], n_folds=3, shuffle=True, random_state=seed)
gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=-1,
    cv=cv,
    scoring='accuracy',
    verbose=2
)
# Fit on training data
print("\nPerforming grid search over hyperparameters...")
gs_clf.fit(lsa_fit_data, fit_target)

The console output is:

Importing training data...

Building n-grams from input data...

Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=0.0001 ....................................................
[CV] alpha=0.0001 ....................................................

Then it just hangs. If I set n_jobs=1 in GridSearchCV, the script completes as expected and outputs:

Importing training data...

Building n-grams from input data...

Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.5s
[Parallel(n_jobs=1)]: Done 1 jobs | elapsed: 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 7.0s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.8s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.6s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 6.7s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.3s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.1s
[Parallel(n_jobs=1)]: Done 15 out of 15 | elapsed: 1.7min finished

The single-threaded execution completes quite quickly, so I'm sure I'm giving the parallel-job case plenty of time to do the computation on its own.

Environment specs: MacBook Pro (15-inch, Mid 2010), 2.4 GHz Intel Core i5, 8 GB 1067 MHz DDR3, OSX 10.10.5, python 3.4.3, ipython 3.2.0, numpy v1.9.3, scipy 0.16.0, scikit-learn v0.16.1 (python and all packages from the anaconda distribution)

Some additional observations:

I've used GridSearchCV with n_jobs=-1 on this machine before without problems, so my platform does support this feature. It usually runs 4 jobs at a time, since this machine has 4 cores (2 physical, but 4 "virtual cores" thanks to Mac hyperthreading). But unless I'm misreading the console output, in this case it dispatches 8 jobs and none of them return. Watching CPU usage in real time in Activity Monitor: 4 jobs launch, work a little, then finish (or die?); then 4 more launch, work a little, and then sit completely idle but stick around.

I never see significant memory pressure. The main process peaks at about 1 GB of real memory and the child processes at about 600 MB. By the time they hang, their real memory use is negligible.

The script runs fine with multiple jobs if the TruncatedSVD step is removed from the feature-extraction pipeline. Note, however, that this pipeline runs before the grid search and is not part of the GridSearchCV jobs.

This script is for the Kaggle competition What's Cooking?, so if you want to try running it on the same data I'm using, you can get it from there. The data comes as a JSON array of objects. Each object represents a recipe and contains a list of text snippets, which are the ingredients. Since each sample is a collection of documents rather than a single document, I ended up writing some of my own n-gramming and tokenization logic, because I couldn't figure out how to get scikit-learn's built-in transformers to do what I wanted. I doubt any of this matters, but just in case.
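For reference, given the fields the script reads (id, cuisine, ingredients), a training record would look roughly like the following sketch (the values here are invented for illustration, not taken from the actual dataset):

[
  {
    "id": 12345,
    "cuisine": "greek",
    "ingredients": ["romaine lettuce", "black olives", "feta cheese crumbles"]
  },
  ...
]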

I usually run scripts in the IPython CLI with %run, but I got the same behavior running it directly with python (3.4.3) from the OSX bash terminal.

Best Answer

If n_jobs > 1, this may be a problem with the multiprocessing used by GridSearchCV. So you could try multithreading instead of multiprocessing and see whether it works fine:

from sklearn.externals.joblib import parallel_backend

clf = GridSearchCV(...)
with parallel_backend('threading'):
    clf.fit(x_train, y_train)
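Applied to the question's script, a minimal sketch would just wrap the existing fit call. Two caveats, as assumptions on my part: the parallel_backend context manager likely did not exist yet in the joblib bundled with the question's scikit-learn 0.16.1, so this presumes an upgraded install; and in recent scikit-learn releases sklearn.externals.joblib has been removed, so the import becomes from joblib import parallel_backend.

# Sketch: keep n_jobs=-1 in GridSearchCV, but run the 15 fits in threads
# instead of forked worker processes (assumes a scikit-learn/joblib version
# that provides parallel_backend).
from sklearn.externals.joblib import parallel_backend  # newer releases: from joblib import parallel_backend

with parallel_backend('threading'):
    gs_clf.fit(lsa_fit_data, fit_target)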

I was facing the same issue using GridSearchCV with an estimator and n_jobs > 1, and using this worked well across n_jobs values.

PS: I'm not sure whether "threading" has the same advantages as "multiprocessing" for all estimators. But theoretically, "threading" would not be a great option if your estimator is limited by the GIL, whereas if the estimator is cython/numpy based it would do better than "multiprocessing". (The question's SGDClassifier has a Cython inner loop, so it plausibly falls in the latter camp.)

System this was tried on:

MAC OS: 10.12.6
Python: 3.6
numpy==1.13.3
pandas==0.21.0
scikit-learn==0.19.1

Regarding python - Parallel jobs don't finish in scikit-learn's GridSearchCV, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33042527/
