I am trying to load a MongoDB collection from a file stored in HDFS. I searched for similar questions and found that the HDFS URI must have three slashes. However, even when I write the HDFS URI with three slashes, I still get the same error.
Here is my code:
from pyspark.sql import functions
from pyspark.sql import Row
from pyspark.sql import SparkSession

def parseInput(line):
    # Parse one '|'-delimited line of u.user into a Row (userId|age|gender|occupation|zip)
    fields = line.split('|')
    return Row(userId=int(fields[0]), age=int(fields[1]), gender=fields[2],
               occupation=fields[3], zip=fields[4])

if __name__ == '__main__':
    spark = SparkSession.builder.appName('Monark').getOrCreate()

    # Load the raw u.user file from HDFS and convert it to a DataFrame
    lines = spark.sparkContext.textFile('hdfs:///user/hadoop/u.user')
    users = lines.map(parseInput)
    usersDataset = spark.createDataFrame(users)

    # Write the DataFrame into the movielens.users collection in MongoDB
    usersDataset.write\
        .format("com.mongodb.spark.sql.DefaultSource")\
        .mode("append")\
        .option("uri", "mongodb://127.0.0.1/movielens.users")\
        .save()

    # Read it back and query it with Spark SQL
    readUsers = spark.read\
        .format("com.mongodb.spark.sql.DefaultSource")\
        .option("uri", "mongodb://127.0.0.1/movielens.users")\
        .load()
    readUsers.createOrReplaceTempView("users")

    sqlDf = spark.sql("SELECT * FROM users WHERE age < 20")
    sqlDf.show()

    spark.stop()
Execution log:
Ivy Default Cache set to: /home/hadoop/.ivy2/cache
The jars for the packages stored in: /home/hadoop/.ivy2/jars
:: loading settings :: url = jar:file:/usr/local/lib/python2.7/dist-packages/pyspark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.mongodb.spark#mongo-spark-connector_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found org.mongodb.spark#mongo-spark-connector_2.11;2.0.0 in central
found org.mongodb#mongo-java-driver;3.2.2 in central
:: resolution report :: resolve 653ms :: artifacts dl 21ms
:: modules in use:
org.mongodb#mongo-java-driver;3.2.2 from central in [default]
org.mongodb.spark#mongo-spark-connector_2.11;2.0.0 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 2 | 0 | 0 | 0 || 2 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 2 already retrieved (0kB/42ms)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/03/24 03:45:48 WARN Utils: Your hostname, ubuntu resolves to a loopback address: 127.0.0.1; using 192.168.56.145 instead (on interface ens33)
18/03/24 03:45:48 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/03/24 03:45:49 INFO SparkContext: Running Spark version 2.2.1
18/03/24 03:45:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/24 03:45:50 INFO SparkContext: Submitted application: Monark
18/03/24 03:45:50 INFO SecurityManager: Changing view acls to: hadoop
18/03/24 03:45:50 INFO SecurityManager: Changing modify acls to: hadoop
18/03/24 03:45:50 INFO SecurityManager: Changing view acls groups to:
18/03/24 03:45:50 INFO SecurityManager: Changing modify acls groups to:
18/03/24 03:45:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/03/24 03:45:51 INFO Utils: Successfully started service 'sparkDriver' on port 35259.
18/03/24 03:45:52 INFO SparkEnv: Registering MapOutputTracker
18/03/24 03:45:52 INFO SparkEnv: Registering BlockManagerMaster
18/03/24 03:45:52 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/03/24 03:45:52 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/03/24 03:45:52 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-082ae340-d5bd-4e90-973c-425f52e0ba71
18/03/24 03:45:52 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
18/03/24 03:45:52 INFO SparkEnv: Registering OutputCommitCoordinator
18/03/24 03:45:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/03/24 03:45:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.145:4040
18/03/24 03:45:53 INFO SparkContext: Added JAR file:/home/hadoop/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar at spark://192.168.56.145:35259/jars/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar with timestamp 1521888353694
18/03/24 03:45:53 INFO SparkContext: Added JAR file:/home/hadoop/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar at spark://192.168.56.145:35259/jars/org.mongodb_mongo-java-driver-3.2.2.jar with timestamp 1521888353701
18/03/24 03:45:54 INFO SparkContext: Added file file:/home/hadoop/Monark.py at file:/home/hadoop/Monark.py with timestamp 1521888354652
18/03/24 03:45:54 INFO Utils: Copying /home/hadoop/Monark.py to /tmp/spark-5826c09b-085e-467a-8a60-43814d565820/userFiles-01e51ac3-60ec-4044-935b-b4df7f9e5732/Monark.py
18/03/24 03:45:54 INFO SparkContext: Added file file:/home/hadoop/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar at file:/home/hadoop/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar with timestamp 1521888354792
18/03/24 03:45:54 INFO Utils: Copying /home/hadoop/.ivy2/jars/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar to /tmp/spark-5826c09b-085e-467a-8a60-43814d565820/userFiles-01e51ac3-60ec-4044-935b-b4df7f9e5732/org.mongodb.spark_mongo-spark-connector_2.11-2.0.0.jar
18/03/24 03:45:54 INFO SparkContext: Added file file:/home/hadoop/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar at file:/home/hadoop/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar with timestamp 1521888354848
18/03/24 03:45:54 INFO Utils: Copying /home/hadoop/.ivy2/jars/org.mongodb_mongo-java-driver-3.2.2.jar to /tmp/spark-5826c09b-085e-467a-8a60-43814d565820/userFiles-01e51ac3-60ec-4044-935b-b4df7f9e5732/org.mongodb_mongo-java-driver-3.2.2.jar
18/03/24 03:45:55 INFO Executor: Starting executor ID driver on host localhost
18/03/24 03:45:55 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39795.
18/03/24 03:45:55 INFO NettyBlockTransferService: Server created on 192.168.56.145:39795
18/03/24 03:45:55 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/03/24 03:45:55 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.145, 39795, None)
18/03/24 03:45:55 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.145:39795 with 413.9 MB RAM, BlockManagerId(driver, 192.168.56.145, 39795, None)
18/03/24 03:45:55 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.145, 39795, None)
18/03/24 03:45:55 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.145, 39795, None)
18/03/24 03:45:56 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop/spark-warehouse/').
18/03/24 03:45:56 INFO SharedState: Warehouse path is 'file:/home/hadoop/spark-warehouse/'.
18/03/24 03:45:57 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/03/24 03:45:59 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 238.6 KB, free 413.7 MB)
18/03/24 03:45:59 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.9 KB, free 413.7 MB)
18/03/24 03:45:59 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.145:39795 (size: 22.9 KB, free: 413.9 MB)
18/03/24 03:45:59 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:0
Traceback (most recent call last):
File "/home/hadoop/Monark.py", line 16, in <module>
usersDataset=spark.createDataFrame(users)
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 582, in createDataFrame
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 380, in _createFromRDD
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 351, in _inferSchema
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/rdd.py", line 1361, in first
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/rdd.py", line 1313, in take
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/rdd.py", line 2440, in getNumPartitions
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/usr/local/lib/python2.7/dist-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions.
: java.io.IOException: Incomplete HDFS URI, no host: hdfs:/user/hadoop/u.user
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
18/03/24 03:46:00 INFO SparkContext: Invoking stop() from shutdown hook
18/03/24 03:46:00 INFO SparkUI: Stopped Spark web UI at http://192.168.56.145:4040
18/03/24 03:46:00 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/03/24 03:46:00 INFO MemoryStore: MemoryStore cleared
18/03/24 03:46:00 INFO BlockManager: BlockManager stopped
18/03/24 03:46:00 INFO BlockManagerMaster: BlockManagerMaster stopped
18/03/24 03:46:00 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/03/24 03:46:00 INFO SparkContext: Successfully stopped SparkContext
18/03/24 03:46:00 INFO ShutdownHookManager: Shutdown hook called
18/03/24 03:46:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-5826c09b-085e-467a-8a60-43814d565820
18/03/24 03:46:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-5826c09b-085e-467a-8a60-43814d565820/pyspark-0bece44d-4127-47bd-8391-98d4da0cf695
Looking at the execution log, it seems that two of the three slashes have been stripped, and I don't understand why this happens:
py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions. : java.io.IOException: Incomplete HDFS URI, no host: hdfs:/user/hadoop/u.user
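As far as I can tell, the slashes are not literally being removed. When the authority (host:port) part of an hdfs:// URI is empty, Hadoop borrows it from the default filesystem configured under fs.defaultFS (older key: fs.default.name); if that setting still points at the local filesystem, which the file:/ warehouse path in the log above suggests, DistributedFileSystem is handed a host-less URI and raises exactly this "Incomplete HDFS URI, no host" error. A minimal diagnostic sketch to inspect that value from the running session (spark.sparkContext._jsc is a PySpark-internal handle, used here only for read-only inspection):

# Diagnostic sketch: print the default filesystem that the running session resolves.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get("fs.defaultFS"))      # e.g. "file:///" if core-site.xml was never picked up
print(hadoop_conf.get("fs.default.name"))   # deprecated key name; may mirror fs.defaultFS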
Thanks, everyone.
Best answer
Open the core-site.xml file and add the following to it:
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config $
</property>
Then update the line to lines=spark.sparkContext.textFile('hdfs://localhost:54310/user/hadoop/u.user')
instead of lines=spark.sparkContext.textFile('hdfs:///user/hadoop/u.user')
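As a quick end-to-end check, something like the following (assuming the NameNode really is listening on localhost:54310, as in the core-site.xml snippet above) should return the first record instead of raising the IOException:

# Minimal verification sketch; localhost:54310 is the address assumed in the answer above.
lines = spark.sparkContext.textFile('hdfs://localhost:54310/user/hadoop/u.user')
print(lines.first())   # expect the first '|'-delimited record of u.user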
Regarding this python / pyspark "Incomplete URI" error, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49463847/