python - Increasing the size of /dev/shm in Azure ML Studio

Reposted | Author: 太空宇宙 | Updated: 2023-11-03 16:49:46

I am trying to run the following code in an Azure ML Studio notebook:

import numpy as np  # needed for np.linspace
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import KFold, cross_val_score

for C in np.linspace(0.01, 0.2, 30):
    cv = KFold(n=X_train.shape[0], n_folds=7, shuffle=True, random_state=12345)
    clf = LogisticRegression(C=C, random_state=12345)
    print C, sum(cross_val_score(clf, X_train_scaled, y_train, scoring='roc_auc', cv=cv, n_jobs=2)) / 7.0

I get this error:

Failed to save <type 'numpy.ndarray'> to .npy file:
Traceback (most recent call last):
File "/home/nbcommon/env/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 271, in save
obj, filename = self._write_array(obj, filename)
File "/home/nbcommon/env/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 231, in _write_array
self.np.save(filename, array)
File "/home/nbcommon/env/lib/python2.7/site-packages/numpy/lib/npyio.py", line 491, in save
pickle_kwargs=pickle_kwargs)
File "/home/nbcommon/env/lib/python2.7/site-packages/numpy/lib/format.py", line 585, in write_array
array.tofile(fp)
IOError: 19834920 requested and 8384502 written

---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-29-9740e9942629> in <module>()
6 cv = KFold(n=X_train.shape[0], n_folds=7, shuffle=True, random_state=12345)
7 clf = LogisticRegression(C=C, random_state=12345)
----> 8 print C, sum(cross_val_score(clf, X_train_scaled, y_train, scoring='roc_auc', cv=cv, n_jobs=2)) / 7.0

/home/nbcommon/env/lib/python2.7/site-packages/sklearn/cross_validation.pyc in cross_val_score(estimator, X, y, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch)
1431 train, test, verbose, None,
1432 fit_params)
-> 1433 for train, test in cv)
1434 return np.array(scores)[:, 0]
1435

/home/nbcommon/env/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
808 # consumption.
809 self._iterating = False
--> 810 self.retrieve()
811 # Make sure that we get a last message telling us we are done
812 elapsed_time = time.time() - self._start_time

/home/nbcommon/env/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in retrieve(self)
725 job = self._jobs.pop(0)
726 try:
--> 727 self._output.extend(job.get())
728 except tuple(self.exceptions) as exception:
729 # Stop dispatching any new job in the async callback thread

/home/nbcommon/env/lib/python2.7/multiprocessing/pool.pyc in get(self, timeout)
565 return self._value
566 else:
--> 567 raise self._value
568
569 def _set(self, i, obj):

IOError: [Errno 28] No space left on device

With n_jobs=1 it works fine.

I believe this happens because the joblib library tries to save my data to /dev/shm. The problem is that it is only 64M in size:

Filesystem         Size  Used Avail Use% Mounted on
none               786G  111G  636G  15% /
tmpfs               56G     0   56G   0% /dev
shm                 64M     0   64M   0% /dev/shm
tmpfs               56G     0   56G   0% /sys/fs/cgroup
/dev/mapper/crypt  786G  111G  636G  15% /etc/hosts

I cannot change this folder by setting the JOBLIB_TEMP_FOLDER environment variable (export does not work).
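One workaround that may be worth trying (an assumption on my part, not something verified in Azure ML Studio): an `export` run from a notebook shell cell only affects that short-lived shell, but the variable can be set from Python inside the notebook process itself, before joblib spawns its workers, pointing it at a filesystem with enough room such as /tmp:

```python
import os

# Set joblib's temp folder from within the notebook process itself;
# worker processes forked by joblib inherit this environment, so the
# memmapped arrays land in /tmp instead of the 64M /dev/shm.
os.environ['JOBLIB_TEMP_FOLDER'] = '/tmp'
```

This has to run before the `cross_val_score` call, since joblib reads the variable when it sets up the parallel pool.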

In [35]: X_train_scaled.nbytes

Out[35]: 158679360
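That figure confirms the mismatch: the array is roughly 151 MiB, far more than the 64 MiB available on /dev/shm, so the memmapped copies joblib makes with n_jobs > 1 cannot fit. A quick sketch for checking a mount's capacity from Python (the helper name is my own):

```python
import os

def fs_capacity_bytes(path):
    """Total size in bytes of the filesystem mounted at `path`."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks

print(158679360 / 2.0 ** 20)          # array size: ~151.3 MiB
print(fs_capacity_bytes('/dev/shm'))  # capacity of the shared-memory mount
```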

Thanks for any advice!

Best Answer

/dev/shm is a virtual filesystem used on Linux to pass data between programs via traditional shared memory.

Because of that, you cannot enlarge it by setting any option at the application level.

However, with administrator (root) privileges in a Linux shell, you could remount /dev/shm with a larger size, for example 8G, like this:

mount -o remount,size=8G /dev/shm

However, Azure ML Studio does not appear to support remote access via SSH, so if you are currently on the free tier, the only viable plan is to upgrade to the standard tier.

Regarding python - Increasing the size of /dev/shm in Azure ML Studio, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35970126/
