I want to run Pig scripts (both non-embedded and embedded) on a CDH4 cluster of 3 Amazon instances. I created a makeshift configuration file for Pig at /home/ubuntu/core-site.xml (the addresses in it are correct), as follows:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server1:8020</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>server1:8021</value>
  </property>
</configuration>
When I try to run it:
ubuntu@server1:~$ export HADOOP_CONF_DIR=/home/ubuntu
ubuntu@server1:~$ export PIG_CLASSPATH=/home/ubuntu
ubuntu@server1:~$ pig -x mapreduce -f test.pig
The script runs and returns the correct result, but the console is full of "LocalJobRunner" messages and no job shows up in the MapReduce JobTracker web UI. Can anyone tell me why it is not running in mapreduce mode, why it does not report any error about that, and how I can make it run in mapreduce mode?
My cluster is CDH 4.1.3 (a fresh install, all configuration left at the defaults) and Pig is 0.10.0-cdh4.1.3.
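One detail worth checking (my own assumption, not something confirmed for this cluster): with MRv1 the JobTracker address is conventionally supplied in a separate mapred-site.xml inside HADOOP_CONF_DIR rather than in core-site.xml, and whenever mapred.job.tracker ends up at its default value "local", Hadoop quietly falls back to the LocalJobRunner. A minimal sketch of such a file, reusing the server1:8021 address from the question:

<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical /home/ubuntu/mapred-site.xml, placed next to core-site.xml -->
<configuration>
  <property>
    <!-- MRv1 JobTracker address; the default value "local" selects the LocalJobRunner -->
    <name>mapred.job.tracker</name>
    <value>server1:8021</value>
  </property>
</configuration>

For reference, the full console output of the run was: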
2013-02-20 18:57:15,843 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0-cdh4.1.3 (rexported) compiled Jan 26 2013, 17:35:45
2013-02-20 18:57:15,843 [main] INFO org.apache.pig.Main - Logging error messages to: /home/ubuntu/epi-tre/pig_1361386635821.log
2013-02-20 18:57:16,519 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://server1:8020
2013-02-20 18:57:16,524 [main] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:17,249 [main] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:17,826 [main] WARN org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.permissions.supergroup is deprecated. Instead, use dfs.permissions.superusergroup
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.max.objects is deprecated. Instead, use dfs.namenode.max.objects
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.replication.interval is deprecated. Instead, use dfs.namenode.replication.interval
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.data.dir is deprecated. Instead, use dfs.datanode.data.dir
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.access.time.precision is deprecated. Instead, use dfs.namenode.accesstime.precision
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min
2013-02-20 18:57:17,827 [main] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.dir is deprecated. Instead, use dfs.namenode.checkpoint.dir
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.http.address is deprecated. Instead, use dfs.namenode.http-address
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.replication.considerLoad is deprecated. Instead, use dfs.namenode.replication.considerLoad
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.write.packet.size is deprecated. Instead, use dfs.client-write-packet-size
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.block.size is deprecated. Instead, use dfs.blocksize
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.https.address is deprecated. Instead, use dfs.namenode.https-address
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.name.dir.restore is deprecated. Instead, use dfs.namenode.name.dir.restore
2013-02-20 18:57:17,828 [main] WARN org.apache.hadoop.conf.Configuration - dfs.https.need.client.auth is deprecated. Instead, use dfs.client.https.need-auth
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - dfs.backup.http.address is deprecated. Instead, use dfs.namenode.backup.http-address
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - dfs.secondary.http.address is deprecated. Instead, use dfs.namenode.secondary.http-address
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - dfs.safemode.extension is deprecated. Instead, use dfs.namenode.safemode.extension
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - dfs.df.interval is deprecated. Instead, use fs.df.interval
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.edits.dir is deprecated. Instead, use dfs.namenode.checkpoint.edits.dir
2013-02-20 18:57:17,829 [main] WARN org.apache.hadoop.conf.Configuration - dfs.https.client.keystore.resource is deprecated. Instead, use dfs.client.https.keystore.resource
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.datanode.max.xcievers is deprecated. Instead, use dfs.datanode.max.transfer.threads
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.backup.address is deprecated. Instead, use dfs.namenode.backup.address
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.balance.bandwidthPerSec is deprecated. Instead, use dfs.datanode.balance.bandwidthPerSec
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.name.edits.dir is deprecated. Instead, use dfs.namenode.edits.dir
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.safemode.threshold.pct is deprecated. Instead, use dfs.namenode.safemode.threshold-pct
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - dfs.name.dir is deprecated. Instead, use dfs.namenode.name.dir
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.period is deprecated. Instead, use dfs.namenode.checkpoint.period
2013-02-20 18:57:17,830 [main] WARN org.apache.hadoop.conf.Configuration - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2013-02-20 18:57:18,102 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY
2013-02-20 18:57:18,460 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-02-20 18:57:18,475 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer - Choosing to move algebraic foreach to combiner
2013-02-20 18:57:18,507 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2013-02-20 18:57:18,508 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2013-02-20 18:57:18,530 [main] WARN org.apache.hadoop.conf.Configuration - session.id is deprecated. Instead, use dfs.metrics.session-id
2013-02-20 18:57:18,531 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2013-02-20 18:57:18,553 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-02-20 18:57:18,563 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-02-20 18:57:18,566 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job2458140745309154844.jar
2013-02-20 18:57:22,423 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job2458140745309154844.jar created
2013-02-20 18:57:22,453 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2013-02-20 18:57:22,544 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2013-02-20 18:57:22,553 [Thread-4] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2013-02-20 18:57:22,581 [Thread-4] WARN org.apache.hadoop.mapred.JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-02-20 18:57:22,735 [Thread-4] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:22,735 [Thread-4] WARN org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2013-02-20 18:57:22,790 [Thread-4] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-02-20 18:57:22,790 [Thread-4] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-02-20 18:57:22,851 [Thread-4] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-02-20 18:57:23,045 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-02-20 18:57:23,055 [Thread-4] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:23,057 [Thread-4] WARN org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2013-02-20 18:57:23,141 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2013-02-20 18:57:23,178 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.df.interval is deprecated. Instead, use fs.df.interval
2013-02-20 18:57:23,178 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.max.objects is deprecated. Instead, use dfs.namenode.max.objects
2013-02-20 18:57:23,187 [Thread-5] WARN org.apache.hadoop.conf.Configuration - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2013-02-20 18:57:23,187 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.data.dir is deprecated. Instead, use dfs.datanode.data.dir
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.name.dir is deprecated. Instead, use dfs.namenode.name.dir
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.dir is deprecated. Instead, use dfs.namenode.checkpoint.dir
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.block.size is deprecated. Instead, use dfs.blocksize
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.access.time.precision is deprecated. Instead, use dfs.namenode.accesstime.precision
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.replication.min is deprecated. Instead, use dfs.namenode.replication.min
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.name.edits.dir is deprecated. Instead, use dfs.namenode.edits.dir
2013-02-20 18:57:23,188 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.replication.considerLoad is deprecated. Instead, use dfs.namenode.replication.considerLoad
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.balance.bandwidthPerSec is deprecated. Instead, use dfs.datanode.balance.bandwidthPerSec
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.safemode.threshold.pct is deprecated. Instead, use dfs.namenode.safemode.threshold-pct
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.http.address is deprecated. Instead, use dfs.namenode.http-address
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.name.dir.restore is deprecated. Instead, use dfs.namenode.name.dir.restore
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.https.client.keystore.resource is deprecated. Instead, use dfs.client.https.keystore.resource
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.backup.address is deprecated. Instead, use dfs.namenode.backup.address
2013-02-20 18:57:23,189 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.backup.http.address is deprecated. Instead, use dfs.namenode.backup.http-address
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.safemode.extension is deprecated. Instead, use dfs.namenode.safemode.extension
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.datanode.max.xcievers is deprecated. Instead, use dfs.datanode.max.transfer.threads
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.https.need.client.auth is deprecated. Instead, use dfs.client.https.need-auth
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.https.address is deprecated. Instead, use dfs.namenode.https-address
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.replication.interval is deprecated. Instead, use dfs.namenode.replication.interval
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.edits.dir is deprecated. Instead, use dfs.namenode.checkpoint.edits.dir
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.write.packet.size is deprecated. Instead, use dfs.client-write-packet-size
2013-02-20 18:57:23,190 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.permissions.supergroup is deprecated. Instead, use dfs.permissions.superusergroup
2013-02-20 18:57:23,191 [Thread-5] WARN org.apache.hadoop.conf.Configuration - topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
2013-02-20 18:57:23,191 [Thread-5] WARN org.apache.hadoop.conf.Configuration - dfs.secondary.http.address is deprecated. Instead, use dfs.namenode.secondary.http-address
2013-02-20 18:57:23,191 [Thread-5] WARN org.apache.hadoop.conf.Configuration - fs.checkpoint.period is deprecated. Instead, use dfs.namenode.checkpoint.period
2013-02-20 18:57:23,191 [Thread-5] WARN org.apache.hadoop.conf.Configuration - topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
2013-02-20 18:57:23,191 [Thread-5] WARN org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2013-02-20 18:57:23,193 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2013-02-20 18:57:23,246 [Thread-5] WARN mapreduce.Counters - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-02-20 18:57:23,333 [Thread-5] INFO org.apache.hadoop.util.ProcessTree - setsid exited with exit code 0
2013-02-20 18:57:23,344 [Thread-5] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3bfc47
2013-02-20 18:57:23,362 [Thread-5] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed hdfs://server1:8020/tmp/cf_rating.10000.csv:0+676168
2013-02-20 18:57:23,374 [Thread-5] INFO org.apache.hadoop.mapred.MapTask - io.sort.mb = 100
2013-02-20 18:57:23,507 [Thread-5] INFO org.apache.hadoop.mapred.MapTask - data buffer = 79691776/99614720
2013-02-20 18:57:23,507 [Thread-5] INFO org.apache.hadoop.mapred.MapTask - record buffer = 262144/327680
2013-02-20 18:57:23,639 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local_0001
2013-02-20 18:57:24,485 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-02-20 18:57:24,488 [Thread-5] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2013-02-20 18:57:25,059 [Thread-5] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2013-02-20 18:57:25,065 [Thread-5] INFO org.apache.hadoop.mapred.Task - Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2013-02-20 18:57:25,087 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-02-20 18:57:25,098 [Thread-5] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local_0001_m_000000_0' done.
2013-02-20 18:57:25,104 [Thread-5] WARN mapreduce.Counters - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-02-20 18:57:25,130 [Thread-5] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@461d318f
2013-02-20 18:57:25,130 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-02-20 18:57:25,137 [Thread-5] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2013-02-20 18:57:25,164 [Thread-5] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 25 bytes
2013-02-20 18:57:25,164 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-02-20 18:57:25,336 [Thread-5] INFO org.apache.hadoop.mapred.Task - Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
2013-02-20 18:57:25,337 [Thread-5] WARN org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-02-20 18:57:25,338 [Thread-5] WARN org.apache.hadoop.conf.Configuration - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2013-02-20 18:57:25,339 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-02-20 18:57:25,341 [Thread-5] INFO org.apache.hadoop.mapred.Task - Task attempt_local_0001_r_000000_0 is allowed to commit now
2013-02-20 18:57:25,371 [Thread-5] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://server1:8020/tmp/temp-1313079537/tmp188373172
2013-02-20 18:57:25,371 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2013-02-20 18:57:25,373 [Thread-5] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local_0001_r_000000_0' done.
2013-02-20 18:57:28,144 [main] WARN org.apache.pig.tools.pigstats.PigStatsUtil - Failed to get RunningJob for job job_local_0001
2013-02-20 18:57:28,147 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-02-20 18:57:28,151 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.0.0-cdh4.1.3 0.10.0-cdh4.1.3 ubuntu 2013-02-20 18:57:18 2013-02-20 18:57:28 GROUP_BY
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs
job_local_0001 1 1 n/a n/a n/a n/a n/a n/a a,b,c GROUP_BY,COMBINER hdfs://server1:8020/tmp/temp-1313079537/tmp188373172,
Input(s):
Successfully read 0 records from: "/tmp/cf_rating.10000.csv"
Output(s):
Successfully stored 0 records in: "hdfs://server1:8020/tmp/temp-1313079537/tmp188373172"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local_0001
2013-02-20 18:57:28,151 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2013-02-20 18:57:28,158 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-02-20 18:57:28,159 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
(10000)
Best Answer
It turned out that I had forgotten to deploy the MapReduce "client configuration", so the whole cluster was running in local mode. Solved.
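For anyone hitting the same symptom, a quick way to check for this situation (a hedged sketch; the /etc/hadoop/conf path assumes a standard CDH4 client-configuration layout, adjust for your own HADOOP_CONF_DIR):

# Does the deployed client configuration actually name the JobTracker?
grep -A 1 mapred.job.tracker /etc/hadoop/conf/mapred-site.xml

# Does the directory Pig was pointed at contain a mapred-site.xml at all?
ls -l $HADOOP_CONF_DIR/*-site.xml

If the property is missing or resolves to "local", jobs go through the LocalJobRunner exactly as in the log above.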
Regarding ubuntu - Cannot get into mapreduce mode when running Pig on a CDH4 cluster (Hadoop 2 + MapReduce v1), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14987996/