Error when running the Elephas example without any modification (the error also occurs with the git version installed via pip install --no-cache-dir git+git://github.com/maxpumperla/elephas.git@master).
The example I used: https://github.com/maxpumperla/elephas/blob/master/examples/ml_pipeline_otto.py
(I tried enabling tf.compat.v1.enable_eager_execution(), but the rest of the code does not work with that setting.)
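For reference, a minimal sketch of that attempt (the exact placement is my assumption and is not shown in the original example; the switch only takes effect if it runs before any other TF calls):

import tensorflow as tf

# Must be called before any graph ops are created, otherwise it is ignored.
tf.compat.v1.enable_eager_execution()

print(tf.executing_eagerly())  # True once the switch has taken effect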
pyspark_1 | 19/10/25 10:23:03 INFO SparkContext: Created broadcast 12 from broadcast at NativeMethodAccessorImpl.java:0
pyspark_1 | Traceback (most recent call last):
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/serializers.py", line 590, in dumps
pyspark_1 | return cloudpickle.dumps(obj, 2)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 863, in dumps
pyspark_1 | cp.dump(obj)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 260, in dump
pyspark_1 | return Pickler.dump(self, obj)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 224, in dump
pyspark_1 | self.save(obj)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 568, in save_tuple
pyspark_1 | save(element)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 406, in save_function
pyspark_1 | self.save_function_tuple(obj)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 549, in save_function_tuple
pyspark_1 | save(state)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1 | self._batch_setitems(obj.iteritems())
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1 | save(v)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 606, in save_list
pyspark_1 | self._batch_appends(iter(obj))
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 642, in _batch_appends
pyspark_1 | save(tmp[0])
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 660, in save_instancemethod
pyspark_1 | obj=obj)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
pyspark_1 | save(args)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
pyspark_1 | save(element)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 331, in save
pyspark_1 | self.save_reduce(obj=obj, *rv)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
pyspark_1 | save(state)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1 | self._batch_setitems(obj.iteritems())
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1 | save(v)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 606, in save_list
pyspark_1 | self._batch_appends(iter(obj))
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 642, in _batch_appends
pyspark_1 | save(tmp[0])
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 331, in save
pyspark_1 | self.save_reduce(obj=obj, *rv)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
pyspark_1 | save(state)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1 | f(self, obj) # Call unbound method with explicit self
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1 | self._batch_setitems(obj.iteritems())
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1 | save(v)
pyspark_1 | File "/usr/lib/python2.7/pickle.py", line 306, in save
pyspark_1 | rv = reduce(self.proto)
pyspark_1 | File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1152, in __reduce__
pyspark_1 | initial_value=self.numpy(),
pyspark_1 | File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 906, in numpy
pyspark_1 | "numpy() is only available when eager execution is enabled.")
pyspark_1 | NotImplementedError: numpy() is only available when eager execution is enabled.
pyspark_1 | Traceback (most recent call last):
pyspark_1 | File "/home/ubuntu/./spark.py", line 169, in <module>
pyspark_1 | fitted_pipeline = pipeline.fit(train_df)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/base.py", line 132, in fit
pyspark_1 | return self._fit(dataset)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 109, in _fit
pyspark_1 | model = stage.fit(dataset)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/base.py", line 132, in fit
pyspark_1 | return self._fit(dataset)
pyspark_1 | File "/usr/local/lib/python2.7/dist-packages/elephas/ml_model.py", line 92, in _fit
pyspark_1 | validation_split=self.get_validation_split())
pyspark_1 | File "/usr/local/lib/python2.7/dist-packages/elephas/spark_model.py", line 151, in fit
pyspark_1 | self._fit(rdd, epochs, batch_size, verbose, validation_split)
pyspark_1 | File "/usr/local/lib/python2.7/dist-packages/elephas/spark_model.py", line 188, in _fit
pyspark_1 | gradients = rdd.mapPartitions(worker.train).collect()
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 816, in collect
pyspark_1 | sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2532, in _jrdd
pyspark_1 | self._jrdd_deserializer, profiler)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2434, in _wrap_function
pyspark_1 | pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2420, in _prepare_for_python_RDD
pyspark_1 | pickled_command = ser.dumps(command)
pyspark_1 | File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/serializers.py", line 600, in dumps
pyspark_1 | raise pickle.PicklingError(msg)
pyspark_1 | cPickle.PicklingError: Could not serialize object: NotImplementedError: numpy() is only available when eager execution is enabled.
Best Answer
The problem seems to revolve around the use of the RDD and the SparkWorker-s in _fit of spark_model.py, specifically this line, right before control passes to TF's resource_variable_ops.py:
gradients = rdd.mapPartitions(worker.train).collect()
Whether because of multithreading or some other abstracted data structure, the TF runtime gets intercepted: TF thinks it is running in Eager mode and calls an eager-only method (.numpy()), but it is not, hence the error. I strongly doubt there is an "outside" workaround for this, but the edit to the TF source shown further below does the trick.
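As a standalone illustration outside Spark (my own sketch, not part of the original answer; it assumes a TF build whose ResourceVariable.__reduce__ calls self.numpy(), exactly as in the traceback above), the same failure shows up when pickling a graph-mode Keras weight directly:

import pickle
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Graph mode, mirroring the non-eager context of the Spark workers.
# (On a 1.x build graph mode is already the default; on 2.x this call disables eager.)
if hasattr(tf.compat.v1, "disable_eager_execution"):
    tf.compat.v1.disable_eager_execution()

model = Sequential([Dense(1, input_dim=4)])   # Keras weights are ResourceVariables

try:
    pickle.dumps(model.weights[0])            # __reduce__ -> self.numpy() -> error
except NotImplementedError as e:
    print(e)  # numpy() is only available when eager execution is enabled.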
Basically, the way the patch works is that it forces evaluation of the tensor with almost every possible combination of eager and non-eager operations, in and out of Graph mode.
Let me know if it works.
# "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py"
# line 1152
def __reduce__(self):
# The implementation mirrors that of __deepcopy__.
def K_eval(x, K):
try:
return K.get_value(K.to_dense(x))
except:
try:
eval_fn = K.function([], [x])
return eval_fn([])[0]
except:
return K.eval(x)
try:
import keras.backend as K
initial_value = K_eval(self, K)
except:
import tensorflow.keras.backend as K
initial_value = K_eval(self, K)
return functools.partial(
ResourceVariable,
initial_value=initial_value,
trainable=self.trainable,
name=self._shared_name,
dtype=self.dtype,
constraint=self.constraint,
distribute_strategy=self._distribute_strategy), ()
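As a quick sanity check (hypothetical, not from the original answer; run in the same graph-mode setting as the reproduction sketch above), the same pickling call should now go through, because __reduce__ fetches the value through the Keras backend fallbacks (K.get_value, K.function, K.eval) instead of the eager-only self.numpy():

import pickle
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(1, input_dim=4)])
blob = pickle.dumps(model.weights[0])    # previously raised NotImplementedError
restored = pickle.loads(blob)            # rebuilt via functools.partial(ResourceVariable, ...)
print(type(restored).__name__)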
Regarding python - cPickle.PicklingError: Could not serialize object: NotImplementedError, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58558004/