I'm putting together a toy spark.ml example. Spark version 1.6.0, running on Oracle JDK version 1.8.0_65, pyspark, and an IPython notebook.
First, this is almost unrelated to Spark, ML, StringIndexer: handling unseen labels. The exception is thrown while fitting a pipeline to a dataset, not while transforming it, and suppressing the exception may not be a solution here, since I'm afraid the dataset would get messed up pretty badly in that case.
My dataset is about 800 MB uncompressed, so it may be hard to reproduce (smaller subsets seem to dodge the issue).
The dataset looks like this:
+--------------------+-----------+-----+-------+-----+--------------------+
| url| ip| rs| lang|label| txt|
+--------------------+-----------+-----+-------+-----+--------------------+
|http://3d-detmold...|217.160.215|378.0| de| 0.0|homwillkommskip c...|
| http://3davto.ru/| 188.225.16|891.0| id| 1.0|оформить заказ пе...|
| http://404.szm.com/| 85.248.42| 58.0| cs| 0.0|kliknite tu alebo...|
| http://404.xls.hu/| 212.52.166|168.0| hu| 0.0|honlapkészítés404...|
|http://a--m--a--t...| 66.6.43|462.0| en| 0.0|back top archiv r...|
|http://a-wrf.ru/c...| 78.108.80|126.0|unknown| 1.0| |
|http://a-wrf.ru/s...| 78.108.80|214.0| ru| 1.0|установк фаркопна...|
+--------------------+-----------+-----+-------+-----+--------------------+
The goal is to predict label. Here is the whole pipeline fitted to it:
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer, OneHotEncoder, Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression
train, test = munge(src_dataframe).randomSplit([70., 30.], seed=12345)
pipe_stages = [
StringIndexer(inputCol='lang', outputCol='lang_idx'),
OneHotEncoder(inputCol='lang_idx', outputCol='lang_onehot'),
Tokenizer(inputCol='ip', outputCol='ip_tokens'),
HashingTF(numFeatures=2**10, inputCol='ip_tokens', outputCol='ip_vector'),
Tokenizer(inputCol='txt', outputCol='txt_tokens'),
HashingTF(numFeatures=2**18, inputCol='txt_tokens', outputCol='txt_vector'),
VectorAssembler(inputCols=['lang_onehot', 'ip_vector', 'txt_vector'], outputCol='features'),
LogisticRegression(labelCol='label', featuresCol='features')
]
pipe = Pipeline(stages=pipe_stages)
pipemodel = pipe.fit(train)
Fitting the pipeline raises the following exception:
Py4JJavaError: An error occurred while calling o10793.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 627.0 failed 1 times, most recent failure: Lost task 18.0 in stage 627.0 (TID 23259, localhost): org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:271)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:159)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
The most interesting line here is:
org.apache.spark.SparkException: Unseen label: pl-PL.
I have no idea how pl-PL, which is a value from the lang column, could have gotten mixed into the label column, which is a float, not a string. Edit: some sloppy conclusions here, corrected thanks to @zero323.
I looked into it further and found that pl-PL is a value from the testing part of the dataset, not the training part. So now I don't even know where to look for the culprit: it may well be the randomSplit code rather than StringIndexer, and who knows what else.
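A quick way to verify where the offending value lives is to count it in each split (a minimal diagnostic sketch, assuming the train and test dataframes produced by the randomSplit call above):
from pyspark.sql.functions import col
## If pl-PL shows up only in one split, an indexer fitted on the other
## split has never seen it, which is exactly what the error reports.
print(train.where(col('lang') == 'pl-PL').count())
print(test.where(col('lang') == 'pl-PL').count())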
Best Answer
Unseen label is a generic message which doesn't correspond to a specific column. Most likely the problem is with the following stage:
StringIndexer(inputCol='lang', outputCol='lang_idx')
with pl-PL present in train("lang") and not present in test("lang"). You can correct it using setHandleInvalid with skip:
from pyspark.ml.feature import StringIndexer
train = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
test = sc.parallelize([(3, "foo"), (4, "foobar")]).toDF(["k", "v"])
indexer = StringIndexer(inputCol="v", outputCol="vi")
indexer.fit(train).transform(test).show()
## Py4JJavaError: An error occurred while calling o112.showString.
## : org.apache.spark.SparkException: Job aborted due to stage failure:
## ...
## org.apache.spark.SparkException: Unseen label: foobar.
indexer.setHandleInvalid("skip").fit(train).transform(test).show()
## +---+---+---+
## | k| v| vi|
## +---+---+---+
## | 3|foo|1.0|
## +---+---+---+
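Note that the row with foobar is gone entirely: skip drops rows containing unseen labels rather than indexing them, which is also why it may not be an acceptable fix when losing data matters.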
or, in more recent Spark versions (2.2 or later), keep:
indexer.setHandleInvalid("keep").fit(train).transform(test).show()
## +---+------+---+
## | k| v| vi|
## +---+------+---+
## | 3| foo|0.0|
## | 4|foobar|2.0|
## +---+------+---+
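Applied back to the pipeline from the question, the fix is a one-line change (a sketch, assuming the pipe_stages list stays exactly as posted; again, skip silently drops rows whose lang value was unseen at fit time):
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer
## Same first stage as in the question, now skipping rows with unseen
## labels instead of raising an exception.
lang_indexer = StringIndexer(inputCol='lang', outputCol='lang_idx').setHandleInvalid('skip')
pipe = Pipeline(stages=[lang_indexer] + pipe_stages[1:])
pipemodel = pipe.fit(train)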
Regarding apache-spark - spark.ml StringIndexer throws 'Unseen label' on fit(), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35224675/