I have set up a 3-node Hadoop cluster and am trying to use Hive on it, but Hive always seems to run in local mode only. I understand that Hive picks up its cluster settings from Hadoop, so I also submitted a plain Hadoop job, and it ran in local mode as well (the job IDs of the form job_local* and the line "Job running in-process (local Hadoop)" below show that the LocalJobRunner is being used). Hive is installed on all three nodes. The logs and configuration files are attached below; please ask if any further details are needed.
Hive log:
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_local49819314_0002
INFO : The url to track the job: http://localhost:8080/
INFO : Job running in-process (local Hadoop)
INFO : 2016-01-27 23:56:30,389 Stage-1 map = 100%, reduce = 100%
INFO : Ended Job = job_local49819314_0002
Hadoop job log:
16/01/27 23:46:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/01/27 23:46:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/01/27 23:46:20 INFO input.FileInputFormat: Total input paths to process : 1
16/01/27 23:46:20 INFO mapreduce.JobSubmitter: number of splits:1
16/01/27 23:46:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local494116460_0001
16/01/27 23:46:20 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/01/27 23:46:20 INFO mapreduce.Job: Running job: job_local494116460_0001
16/01/27 23:46:20 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/01/27 23:46:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/01/27 23:46:20 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/01/27 23:46:20 INFO mapred.LocalJobRunner: Waiting for map tasks
16/01/27 23:46:20 INFO mapred.LocalJobRunner: Starting task: attempt_local494116460_0001_m_000000_0
16/01/27 23:46:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/01/27 23:46:20 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/01/27 23:46:20 INFO mapred.MapTask: Processing split: hdfs://master:9000/exercise3:0+18834811
16/01/27 23:46:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/01/27 23:46:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/01/27 23:46:20 INFO mapred.MapTask: soft limit at 83886080
16/01/27 23:46:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/01/27 23:46:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/01/27 23:46:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/01/27 23:46:21 INFO mapreduce.Job: Job job_local494116460_0001 running in uber mode : false
16/01/27 23:46:21 INFO mapreduce.Job: map 0% reduce 0%
16/01/27 23:46:26 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:27 INFO mapreduce.Job: map 13% reduce 0%
16/01/27 23:46:29 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:30 INFO mapreduce.Job: map 19% reduce 0%
16/01/27 23:46:32 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:33 INFO mapreduce.Job: map 29% reduce 0%
16/01/27 23:46:35 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:36 INFO mapreduce.Job: map 36% reduce 0%
16/01/27 23:46:38 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:39 INFO mapreduce.Job: map 45% reduce 0%
16/01/27 23:46:41 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:42 INFO mapreduce.Job: map 54% reduce 0%
16/01/27 23:46:44 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:45 INFO mapreduce.Job: map 62% reduce 0%
16/01/27 23:46:46 INFO mapred.LocalJobRunner: map > map
16/01/27 23:46:46 INFO mapred.MapTask: Starting flush of map output
16/01/27 23:46:46 INFO mapred.MapTask: Spilling map output
16/01/27 23:46:46 INFO mapred.MapTask: bufstart = 0; bufend = 21289849; bufvoid = 104857600
16/01/27 23:46:46 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 23806260(95225040); length = 2408137/6553600
16/01/27 23:46:47 INFO mapred.MapTask: Finished spill 0
16/01/27 23:46:47 INFO mapred.Task: Task:attempt_local494116460_0001_m_000000_0 is done. And is in the process of committing
16/01/27 23:46:47 INFO mapred.LocalJobRunner: map
16/01/27 23:46:47 INFO mapred.Task: Task 'attempt_local494116460_0001_m_000000_0' done.
16/01/27 23:46:47 INFO mapred.LocalJobRunner: Finishing task: attempt_local494116460_0001_m_000000_0
16/01/27 23:46:47 INFO mapred.LocalJobRunner: map task executor complete.
16/01/27 23:46:47 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/01/27 23:46:47 INFO mapred.LocalJobRunner: Starting task: attempt_local494116460_0001_r_000000_0
16/01/27 23:46:47 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/01/27 23:46:47 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/01/27 23:46:47 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@15602819
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/01/27 23:46:47 INFO reduce.EventFetcher: attempt_local494116460_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/01/27 23:46:47 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local494116460_0001_m_000000_0 decomp: 13082052 len: 13082056 to MEMORY
16/01/27 23:46:47 INFO reduce.InMemoryMapOutput: Read 13082052 bytes from map-output for attempt_local494116460_0001_m_000000_0
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 13082052, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->13082052
16/01/27 23:46:47 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/01/27 23:46:47 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
16/01/27 23:46:47 INFO mapred.Merger: Merging 1 sorted segments
16/01/27 23:46:47 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 13082040 bytes
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: Merged 1 segments, 13082052 bytes to disk to satisfy reduce memory limit
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: Merging 1 files, 13082056 bytes from disk
16/01/27 23:46:47 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/01/27 23:46:47 INFO mapred.Merger: Merging 1 sorted segments
16/01/27 23:46:47 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 13082040 bytes
16/01/27 23:46:47 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/01/27 23:46:47 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
16/01/27 23:46:47 INFO mapreduce.Job: map 100% reduce 0%
16/01/27 23:46:53 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:46:53 INFO mapreduce.Job: map 100% reduce 85%
16/01/27 23:46:56 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:46:56 INFO mapreduce.Job: map 100% reduce 89%
16/01/27 23:46:59 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:46:59 INFO mapreduce.Job: map 100% reduce 92%
16/01/27 23:47:02 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:02 INFO mapreduce.Job: map 100% reduce 96%
16/01/27 23:47:05 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:05 INFO mapreduce.Job: map 100% reduce 99%
16/01/27 23:47:08 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:08 INFO mapreduce.Job: map 100% reduce 100%
16/01/27 23:47:11 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:18 INFO mapred.Task: Task:attempt_local494116460_0001_r_000000_0 is done. And is in the process of committing
16/01/27 23:47:18 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:18 INFO mapred.Task: Task attempt_local494116460_0001_r_000000_0 is allowed to commit now
16/01/27 23:47:18 INFO output.FileOutputCommitter: Saved output of task 'attempt_local494116460_0001_r_000000_0' to hdfs://master:9000/output/_temporary/0/task_local494116460_0001_r_000000
16/01/27 23:47:18 INFO mapred.LocalJobRunner: reduce > reduce
16/01/27 23:47:18 INFO mapred.Task: Task 'attempt_local494116460_0001_r_000000_0' done.
16/01/27 23:47:18 INFO mapred.LocalJobRunner: Finishing task: attempt_local494116460_0001_r_000000_0
16/01/27 23:47:18 INFO mapred.LocalJobRunner: reduce task executor complete.
16/01/27 23:47:18 INFO mapreduce.Job: Job job_local494116460_0001 completed successfully
16/01/27 23:47:18 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=26711328
FILE: Number of bytes written=40348644
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=37669622
HDFS: Number of bytes written=12758437
HDFS: Number of read operations=13
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Map-Reduce Framework
Map input records=65535
Map output records=602035
Map output bytes=21289849
Map output materialized bytes=13082056
Input split bytes=93
Combine input records=602035
Combine output records=58349
Reduce input groups=58349
Reduce shuffle bytes=13082056
Reduce input records=58349
Reduce output records=58349
Spilled Records=116698
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=123
Total committed heap usage (bytes)=848297984
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=18834811
File Output Format Counters
Bytes Written=12758437
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.job.tracker</name>
<value>master:5431</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/home/huser/hadoop-2.7.1/hadoop_tmp/history/intermediate</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/home/huser/hadoop-2.7.1/hadoop_tmp/history/done</value>
</property>
<property>
<name>mapred.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>master:54311</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/huser/hadoop-2.7.1/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/huser/hadoop-2.7.1/hadoop_tmp/hdfs/datanode</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
</configuration>
Environment variable exports:
export JAVA_HOME=/opt/jdk/jdk1.8.0_66
export PATH=$PATH:$JAVA_HOME
# -- HADOOP ENVIRONMENT VARIABLES START -- #
export HADOOP_HOME=/home/huser/hadoop-2.7.1/
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF=$HADOOP_HOME/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# -- HADOOP ENVIRONMENT VARIABLES END -- #
# -- Hive Variables Start --#
export HIVE_HOME=/home/huser/apache-hive-1.2.1-bin
export HIVE_CONF=$HIVE_HOME/conf
export PATH=$HIVE_HOME/bin:$PATH
export PATH=$HIVE_HOME/lib:$PATH
export ANT_LIB=/home/huser/apache-ant-1.9.6/lib
# -- Hive Variables End -- #
hive-site.xml:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value></value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://master:9083</value>
</property>
<property>
<name>mapreduce.job.tracker</name>
<value>master:5431</value>
</property>
</configuration>
Output of the relevant set commands in beeline:
set hive.exec.mode.local.auto;
+----------------------------------+--+
| set |
+----------------------------------+--+
| hive.exec.mode.local.auto=false |
+----------------------------------+--+
set mapred.job.tracker;
+----------------------------------+--+
| set |
+----------------------------------+--+
| mapred.job.tracker=master:54311 |
+----------------------------------+--+
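The same beeline set syntax can also be used to check the property that actually selects the execution framework on Hadoop 2.x; a minimal sketch, using nothing beyond standard set commands:

-- Run inside the same beeline session as the queries above.
-- On Hadoop 2.x this property, not mapred.job.tracker, decides where
-- jobs run: 'yarn' submits to the cluster, 'local' uses LocalJobRunner.
set mapreduce.framework.name;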
Best answer
The problem appears to be in mapred-site.xml. Here is the new file:
<configuration>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/home/huser/hadoop-2.7.1/hadoop_tmp/history/intermediate</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/home/huser/hadoop-2.7.1/hadoop_tmp/history/done</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>master:54311</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
</configuration>
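After copying the corrected mapred-site.xml to all three nodes, YARN needs a restart for the change to take effect. A minimal sketch, assuming the stock Hadoop 2.7 sbin scripts:

# Restart YARN so the corrected mapreduce.framework.name is picked up
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
# After re-running the Hive query, it should show up here as a YARN
# application, and the job ID should no longer be of the form job_local*
yarn application -list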
mapreduce.job.tracker does not appear to be a valid property, so it was dropped; the misspelled mapred.framework.name was also corrected to mapreduce.framework.name, which is the property that actually routes MapReduce jobs to YARN instead of the in-process LocalJobRunner.
I also changed
HADOOP_HOME=/home/huser/hadoop-2.7.1/
to
HADOOP_HOME=/home/huser/hadoop-2.7.1
removing the trailing slash (/).
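To confirm the corrected environment is actually in effect, the exports can be re-sourced and checked; a quick sketch, assuming the exports live in ~/.bashrc:

# Reload the environment and verify which Hadoop install is on PATH
source ~/.bashrc
which hadoop      # should resolve to /home/huser/hadoop-2.7.1/bin/hadoop
hadoop version    # prints the version of the build actually being used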
And the updated hive-site.xml, with the mapreduce.job.tracker property removed:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>1</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://master:9083</value>
</property>
</configuration>
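Because hive.metastore.uris points at a standalone metastore, the Hive services also have to be restarted so they reload hive-site.xml. A minimal sketch, assuming the metastore and HiveServer2 are launched by hand from $HIVE_HOME (adjust to however they are managed on this cluster):

# Restart the standalone metastore and HiveServer2 so the updated
# hive-site.xml (without mapreduce.job.tracker) is reloaded
$HIVE_HOME/bin/hive --service metastore &
$HIVE_HOME/bin/hive --service hiveserver2 &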
On the topic of hadoop - Hive and Hadoop only running locally, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/35055181/