This question has been asked in other threads, and I have tried variations of their suggestions to no avail:
class_weight for imbalanced data - Keras
how to set class-weights for imbalanced classes in keras
However, since nobody answered that question, it seems to have gone stale. Does anyone know how to use the class_weight argument in Keras when the loss is categorical_crossentropy? I have been trying to use the class_weight argument, but I keep getting this error:
ValueError: Expected `class_weight` to be a dict with keys from 0 to one less than the number of classes, found {'prediction': {0: 1.217169570760731, 1: 5.323420074349443, 2: 0.5023680056130504}}
Each sample is classified as 0, 1, or 2 (softmax). The dataset is heavily imbalanced. My model uses the Keras functional API.
The class_weights are computed with scikit-learn:
class_weights = class_weight.compute_class_weight(
    'balanced',
    np.unique(np.array(y_trn_labels_HB_2_pd['labels'])),
    y_trn_labels_HB_2_pd['labels'])
class_weight_dict = dict(enumerate(class_weights))
class_weight_dict
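For reference, with the weights reported in the error above, this produces a flat dict keyed by integer class index (the values below are simply those from the question):

class_weight_dict = {0: 1.217169570760731, 1: 5.323420074349443, 2: 0.5023680056130504}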
Here is my final layer:
prediction = Dense(3, activation="softmax", name = 'prediction')(x)
Here is my model:
tf.__version__   # 2.3.0

model = Model(inputs=[sequence_input_head, sequence_input_body, semantic_feat,
                      wordOL_feat, avg_subj_feat],
              outputs=[prediction])
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
Here is my class_weight argument:
class_weight = {'prediction': {0: 1.217169570760731, 1: 5.323420074349443, 2: 0.5023680056130504}}
Here is the full error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-271-e3bb78b84171> in <module>()
26 y_val_2_cat),
27 callbacks = [es],
---> 28 class_weight= {'prediction': class_weights})
29
30 modeled = model.save(os.path.join(save_path, path_model))
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1061 use_multiprocessing=use_multiprocessing,
1062 model=self,
-> 1063 steps_per_execution=self._steps_per_execution)
1064
1065 # Container that configures and calls `tf.keras.Callback`s.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
1120 dataset = self._adapter.get_dataset()
1121 if class_weight:
-> 1122 dataset = dataset.map(_make_class_weight_map_fn(class_weight))
1123 self._inferred_steps = self._infer_steps(steps_per_epoch, dataset)
1124
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in _make_class_weight_map_fn(class_weight)
1299 "Expected `class_weight` to be a dict with keys from 0 to one less "
1300 "than the number of classes, found {}").format(class_weight)
-> 1301 raise ValueError(error_msg)
1302
1303 class_weight_tensor = ops.convert_to_tensor_v2(
ValueError: Expected `class_weight` to be a dict with keys from 0 to one less than the number of classes, found {'prediction': {0: 1.217169570760731, 1: 5.323420074349443}}
Edit 1:
I tried your suggestion, @Prateek Bhatt:
history = model.fit({'headline': hl_pd_tr, 'articleBody': bd_pd_train,
                     'semantic': semantic_sim_180_train_x,
                     'wordOverlap': wrd_OvLp_train_x,
                     'avg_subjectivity': avg_subj_hb_train_x},
                    {'prediction': y_train_2_cat},
                    epochs=100,
                    batch_size=BATCH__SIZE,
                    shuffle=True,
                    validation_data=([hl_pd_val, bd_pd_val, semantic_sim_180_val_x,
                                      wrd_OvLp_val_x, avg_subj_hb_val_x], y_val_2_cat),
                    callbacks=[es],
                    class_weight={0: 1.217169570760731, 1: 5.323420074349443, 2: 0.5023680056130504})
However, I get this error:
ValueError: `class_weight` is only supported for Models with a single output.
The full error:
ValueError Traceback (most recent call last)
<ipython-input-272-bfbab936a723> in <module>()
26 y_val_2_cat),
27 callbacks = [es],
---> 28 class_weight= {0:1.217169570760731, 1:5.323420074349443, 2:0.5023680056130504})
29
30 modeled = model.save(os.path.join(save_path, path_model))
16 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1061 use_multiprocessing=use_multiprocessing,
1062 model=self,
-> 1063 steps_per_execution=self._steps_per_execution)
1064
1065 # Container that configures and calls `tf.keras.Callback`s.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
1120 dataset = self._adapter.get_dataset()
1121 if class_weight:
-> 1122 dataset = dataset.map(_make_class_weight_map_fn(class_weight))
1123 self._inferred_steps = self._infer_steps(steps_per_epoch, dataset)
1124
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py in map(self, map_func, num_parallel_calls, deterministic)
1693 """
1694 if num_parallel_calls is None:
-> 1695 return MapDataset(self, map_func, preserve_cardinality=True)
1696 else:
1697 return ParallelMapDataset(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py in __init__(self, input_dataset, map_func, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
4043 self._transformation_name(),
4044 dataset=input_dataset,
-> 4045 use_legacy_function=use_legacy_function)
4046 variant_tensor = gen_dataset_ops.map_dataset(
4047 input_dataset._variant_tensor, # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
3369 with tracking.resource_tracker_scope(resource_tracker):
3370 # TODO(b/141462134): Switch to using garbage collection.
-> 3371 self._function = wrapper_fn.get_concrete_function()
3372 if add_to_graph:
3373 self._function.add_to_graph(ops.get_default_graph())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in get_concrete_function(self, *args, **kwargs)
2937 """
2938 graph_function = self._get_concrete_function_garbage_collected(
-> 2939 *args, **kwargs)
2940 graph_function._garbage_collector.release() # pylint: disable=protected-access
2941 return graph_function
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
2904 args, kwargs = None, None
2905 with self._lock:
-> 2906 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
2907 seen_names = set()
2908 captured = object_identity.ObjectIdentitySet(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
-> 3213 graph_function = self._create_graph_function(args, kwargs)
3214 self._function_cache.primary[cache_key] = graph_function
3215 return graph_function, args, kwargs
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3073 arg_names=arg_names,
3074 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3075 capture_by_value=self._capture_by_value),
3076 self._function_attributes,
3077 function_spec=self.function_spec,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
--> 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py in wrapper_fn(*args)
3362 attributes=defun_kwargs)
3363 def wrapper_fn(*args): # pylint: disable=missing-docstring
-> 3364 ret = _wrapper_helper(*args)
3365 ret = structure.to_tensor_list(self._output_structure, ret)
3366 return [ops.convert_to_tensor(t) for t in ret]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py in _wrapper_helper(*args)
3297 nested_args = (nested_args,)
3298
-> 3299 ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
3300 # If `func` returns a list of tensors, `nest.flatten()` and
3301 # `ops.convert_to_tensor()` would conspire to attempt to stack
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
253 try:
254 with conversion_ctx:
--> 255 return converted_call(f, args, kwargs, options=options)
256 except Exception as e: # pylint:disable=broad-except
257 if hasattr(e, 'ag_error_metadata'):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
530
531 if not options.user_requested and conversion.is_whitelisted(f):
--> 532 return _call_unconverted(f, args, kwargs, options)
533
534 # internal_convert_user_code is for example turned off when issuing a dynamic
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
337
338 if kwargs is not None:
--> 339 return f(*args, **kwargs)
340 return f(*args)
341
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in _class_weights_map_fn(*data)
1310 if nest.is_sequence(y):
1311 raise ValueError(
-> 1312 "`class_weight` is only supported for Models with a single output.")
1313
1314 if y.shape.rank > 2:
ValueError: `class_weight` is only supported for Models with a single output.
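One workaround that is sometimes used for this second error (it is not part of the accepted answer below) is to translate the class weights into per-sample weights and pass them through fit's sample_weight argument instead. A minimal sketch, where y_train_int is a hypothetical placeholder for the integer-encoded training labels:

import numpy as np

class_weights = {0: 1.217169570760731, 1: 5.323420074349443, 2: 0.5023680056130504}

# Hypothetical placeholder for the integer-encoded training labels (0, 1 or 2);
# in practice this would be the real label array for the training set.
y_train_int = np.array([0, 2, 1, 2, 0])

# One weight per training sample, looked up from that sample's class.
sample_weights = np.array([class_weights[int(label)] for label in y_train_int])

# The per-sample weights then replace class_weight in the fit call; when the labels
# are passed as a dict such as {'prediction': y_train_2_cat}, the weights may need to
# mirror that structure, e.g. sample_weight={'prediction': sample_weights}.
# history = model.fit(..., sample_weight=sample_weights, ...)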
Best Answer
If you have a pandas DataFrame, you can first compute the class_weight argument with the compute_class_weight function, passing in the target column, for example:
import numpy as np
from sklearn.utils import class_weight

class_weights = class_weight.compute_class_weight('balanced', np.unique(df['target']), df['target'])
class_weights = dict(enumerate(class_weights))
Then pass class_weights when fitting:
history = model.fit(X, Y, epochs=n, class_weight=class_weights)
Also make sure that you use categorical_crossentropy as the loss with one-hot-encoded labels. If you are using integer (index) labels instead, use sparse_categorical_crossentropy.
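To illustrate that last point, here is a small, self-contained sketch (the tiny Sequential model, the random X, and the example weights are placeholders standing in for the reader's own model and data): one-hot labels pair with categorical_crossentropy, integer labels with sparse_categorical_crossentropy, and class_weight works the same way in both cases.

import numpy as np
import tensorflow as tf

# Placeholder model with a single 3-class softmax output.
model = tf.keras.Sequential([tf.keras.layers.Dense(3, activation='softmax', input_shape=(4,))])

X = np.random.rand(6, 4).astype('float32')
y_int = np.array([0, 2, 1, 2, 0, 1])                             # integer labels
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=3)   # one-hot labels
class_weights = {0: 1.2, 1: 5.3, 2: 0.5}                         # example weights

# One-hot labels -> categorical_crossentropy
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y_onehot, epochs=1, class_weight=class_weights, verbose=0)

# Integer labels -> sparse_categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y_int, epochs=1, class_weight=class_weights, verbose=0)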
Regarding "python - Error when using the class_weights argument with Keras in a multiclass classification problem", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/65151225/