I wrote a Hadoop program (Map step only) that uses log4j, but it does not behave as I expect. Here is the code:
package org.myorg;
import java.io.*;
import java.util.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
import org.apache.log4j.Logger;
import org.apache.log4j.LogManager;
import org.apache.log4j.xml.DOMConfigurator;
public class ParallelIndexation {
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, LongWritable> {
        private final static LongWritable zero = new LongWritable(0);
        private Text word = new Text();
        private static final Logger logger =
                LogManager.getLogger(Map.class.getName());

        public void map(LongWritable key, Text value,
                OutputCollector<Text, LongWritable> output, Reporter reporter)
                throws IOException {
            DOMConfigurator.configure("/folder/log4j.xml");
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path localPath = new Path("/export/hadoop-1.0.1/bin/input/paths.txt");
            Path hdfsPath = new Path("hdfs://192.168.1.8:7000/user/hadoop/paths.txt");
            Path localPath1 = new Path("/usr/countcomputers.txt");
            Path hdfsPath1 = new Path("hdfs://192.168.1.8:7000/user/hadoop/countcomputers.txt");
            if (!fs.exists(hdfsPath)) {
                fs.copyFromLocalFile(localPath, hdfsPath);
            }
            if (!fs.exists(hdfsPath1)) {
                fs.copyFromLocalFile(localPath1, hdfsPath1);
            }
            FSDataInputStream in = fs.open(hdfsPath);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String line = br.readLine();
            BufferedReader br1 = new BufferedReader(new InputStreamReader(fs.open(hdfsPath1)));
            int CountComputers;
            String result = br1.readLine();
            CountComputers = Integer.parseInt(result);
            ArrayList<String> paths = new ArrayList<String>();
            StringTokenizer tokenizer = new StringTokenizer(line, "|");
            while (tokenizer.hasMoreTokens()) {
                paths.add(tokenizer.nextToken());
            }
            for (int i = 0; i < paths.size(); i++) {
                logger.debug("paths[i]=" + paths.get(i) + "\n");
            }
            logger.debug("CountComputers=" + CountComputers + "\n");
            // ... (remainder of map() omitted in the question)
        }
    }
}
Here is my /folder/log4j.xml file:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration debug="true" xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="ConsoleAppender" class="org.apache.log4j.ConsoleAppender">
        <param name="Encoding" value="UTF-8"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{ISO8601} [%-5p][%-16.16t][%32.32c] - %m%n" />
        </layout>
    </appender>
    <root>
        <priority value="DEBUG"/>
        <appender-ref ref="ConsoleAppender" />
    </root>
</log4j:configuration>
However, despite these logging calls, after running the command
./hadoop jar /export/hadoop-1.0.1/bin/ParallelIndexation.jar org.myorg.ParallelIndexation /export/hadoop-1.0.1/bin/input /export/hadoop-1.0.1/bin/output -D mapred.map.tasks=1 1> resultofexecute.txt 2>&1
the resultofexecute.txt file contains no output for the variables in question. Please help me get these variables printed. @ChrisWhite, I am attaching the hadoop-hadoop-tasktracker-myhost2.log and hadoop-hadoop-tasktracker-myhost3.log files:
hadoop-hadoop-tasktracker-myhost2.log
2013-04-20 12:35:43,465 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-20 12:35:43,583 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-20 12:35:43,584 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-20 12:35:43,584 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2013-04-20 12:35:43,971 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-20 12:35:43,979 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-20 12:35:44,168 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-20 12:35:44,251 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-20 12:35:44,280 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-20 12:35:44,287 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as hadoop
2013-04-20 12:35:44,288 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-hadoop/mapred/local
2013-04-20 12:35:44,302 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-04-20 12:35:44,312 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-20 12:35:44,312 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source TaskTrackerMetrics registered.
2013-04-20 12:35:49,332 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort51172 registered.
2013-04-20 12:35:49,332 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort51172 registered.
2013-04-20 12:35:49,335 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:51172
2013-04-20 12:35:49,335 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_myhost2:localhost/127.0.0.1:51172
2013-04-20 12:35:49,356 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-04-20 12:35:49,357 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-20 12:35:49,357 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 51172: starting
2013-04-20 12:35:49,358 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 51172: starting
2013-04-20 12:35:49,358 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 51172: starting
2013-04-20 12:35:49,358 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 51172: starting
2013-04-20 12:35:49,358 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 51172: starting
2013-04-20 12:35:50,372 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:35:51,372 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 1 time(s).
2013-04-20 12:35:52,375 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 2 time(s).
2013-04-20 12:35:53,376 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 3 time(s).
2013-04-20 12:35:54,377 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 4 time(s).
2013-04-20 12:35:55,377 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 5 time(s).
2013-04-20 12:35:56,379 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 6 time(s).
2013-04-20 12:35:57,380 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 7 time(s).
2013-04-20 12:35:58,381 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 8 time(s).
2013-04-20 12:35:59,381 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 9 time(s).
2013-04-20 12:35:59,385 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7001 not available yet, Zzzzz...
2013-04-20 12:36:01,387 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:36:02,387 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 1 time(s).
2013-04-20 12:36:03,388 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 2 time(s).
2013-04-20 12:36:04,388 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 3 time(s).
2013-04-20 12:36:05,388 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 4 time(s).
2013-04-20 12:36:06,388 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 5 time(s).
2013-04-20 12:36:07,390 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 6 time(s).
2013-04-20 12:36:08,390 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 7 time(s).
2013-04-20 12:36:09,391 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 8 time(s).
2013-04-20 12:36:10,392 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 9 time(s).
2013-04-20 12:36:10,393 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7001 not available yet, Zzzzz...
2013-04-20 12:36:12,393 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:36:43,011 INFO org.apache.hadoop.mapred.TaskTracker: Using ResourceCalculatorPlugin : null
2013-04-20 12:36:43,030 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_myhost2:localhost/127.0.0.1:51172
2013-04-20 12:36:43,031 WARN org.apache.hadoop.util.ProcessTree: setsid is not available on this machine. So not using it.
2013-04-20 12:36:43,031 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-04-20 12:36:43,182 INFO org.apache.hadoop.util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
2013-04-20 12:36:43,182 INFO org.apache.hadoop.mapred.TaskTracker: ProcessTree implementation is missing on this system. TaskMemoryManager is disabled.
2013-04-20 12:36:43,206 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
2013-04-20 12:36:43,214 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ShuffleServerMetrics registered.
2013-04-20 12:36:43,217 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
2013-04-20 12:36:43,218 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
2013-04-20 12:36:43,218 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
2013-04-20 12:36:43,218 INFO org.mortbay.log: jetty-6.1.26
2013-04-20 12:36:43,611 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
2013-04-20 12:36:43,611 INFO org.apache.hadoop.mapred.TaskTracker: FILE_CACHE_SIZE for mapOutputServlet set to : 2000
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304112319_0001 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070459_0003 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070459_0001 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304112319_0007 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070413_0001 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304121018_0001 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:36:43,635 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304192207_0003 for user-log deletion with retainTimeStamp:1366573003186
2013-04-20 12:49:24,662 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201304201135_0001_m_000001_0 task's state:UNASSIGNED
2013-04-20 12:49:24,665 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201304201135_0001_m_000001_0 which needs 1 slots
2013-04-20 12:49:24,665 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201304201135_0001_m_000001_0 which needs 1 slots
2013-04-20 12:49:25,006 INFO org.apache.hadoop.mapred.JobLocalizer: Initializing user hadoop on this TT.
2013-04-20 12:49:25,377 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201304201135_0001_m_-1899194781
2013-04-20 12:49:25,378 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201304201135_0001_m_-1899194781 spawned.
2013-04-20 12:49:25,381 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /tmp/hadoop-hadoop/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201304201135_0001/attempt_201304201135_0001_m_000001_0/taskjvm.sh
2013-04-20 12:49:26,284 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201304201135_0001_m_-1899194781 given task: attempt_201304201135_0001_m_000001_0
2013-04-20 12:49:29,803 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_m_000001_0 0.0% setup
2013-04-20 12:49:29,806 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201304201135_0001_m_000001_0 is done.
2013-04-20 12:49:29,807 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201304201135_0001_m_000001_0 was -1
2013-04-20 12:49:29,809 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
2013-04-20 12:49:29,947 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201304201135_0001_m_-1899194781 exited with exit code 0. Number of tasks it ran: 1
2013-04-20 12:49:30,719 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201304201135_0001_r_000000_0 task's state:UNASSIGNED
2013-04-20 12:49:30,719 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201304201135_0001_r_000000_0 which needs 1 slots
2013-04-20 12:49:30,719 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201304201135_0001_r_000000_0 which needs 1 slots
2013-04-20 12:49:30,719 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201304201135_0001_m_000001_0
2013-04-20 12:49:30,719 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201304201135_0001_m_000001_0
2013-04-20 12:49:30,720 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201304201135_0001_m_000001_0 not found in cache
2013-04-20 12:49:30,742 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201304201135_0001_r_-1899194781
2013-04-20 12:49:30,742 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201304201135_0001_r_-1899194781 spawned.
2013-04-20 12:49:30,745 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /tmp/hadoop-hadoop/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201304201135_0001/attempt_201304201135_0001_r_000000_0/taskjvm.sh
2013-04-20 12:49:31,611 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201304201135_0001_r_-1899194781 given task: attempt_201304201135_0001_r_000000_0
2013-04-20 12:49:33,219 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_r_000000_0 0.0%
2013-04-20 12:49:33,226 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_r_000000_0 0.0%
2013-04-20 12:49:33,296 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201304201135_0001_r_000000_0 is in commit-pending, task state:COMMIT_PENDING
2013-04-20 12:49:33,297 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_r_000000_0 0.0%
2013-04-20 12:49:33,731 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201304201135_0001_r_000000_0
2013-04-20 12:49:35,143 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_r_000000_0 1.0% reduce > reduce
2013-04-20 12:49:35,147 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201304201135_0001_r_000000_0 is done.
2013-04-20 12:49:35,147 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201304201135_0001_r_000000_0 was -1
2013-04-20 12:49:35,155 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
2013-04-20 12:49:35,272 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201304201135_0001_r_-1899194781 exited with exit code 0. Number of tasks it ran: 1
2013-04-20 12:49:36,775 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201304201135_0001_m_000000_0 task's state:UNASSIGNED
2013-04-20 12:49:36,783 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201304201135_0001_m_000000_0 which needs 1 slots
2013-04-20 12:49:36,783 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201304201135_0001_m_000000_0 which needs 1 slots
2013-04-20 12:49:36,783 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201304201135_0001_r_000000_0
2013-04-20 12:49:36,784 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201304201135_0001_r_000000_0
2013-04-20 12:49:36,812 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201304201135_0001_m_-653215379
2013-04-20 12:49:36,812 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201304201135_0001_m_-653215379 spawned.
2013-04-20 12:49:36,814 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /tmp/hadoop-hadoop/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201304201135_0001/attempt_201304201135_0001_m_000000_0/taskjvm.sh
2013-04-20 12:49:37,718 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201304201135_0001_m_-653215379 given task: attempt_201304201135_0001_m_000000_0
2013-04-20 12:49:38,227 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_m_000000_0 0.0%
2013-04-20 12:49:41,232 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_m_000000_0 0.0% cleanup
2013-04-20 12:49:41,238 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201304201135_0001_m_000000_0 0.0% cleanup
2013-04-20 12:49:41,243 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201304201135_0001_m_000000_0 is done.
2013-04-20 12:49:41,243 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201304201135_0001_m_000000_0 was -1
2013-04-20 12:49:41,245 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
2013-04-20 12:49:41,378 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201304201135_0001_m_-653215379 exited with exit code 0. Number of tasks it ran: 1
2013-04-20 12:49:42,821 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201304201135_0001
2013-04-20 12:49:42,822 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201304201135_0001_m_000000_0 not found in cache
2013-04-20 12:49:42,826 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304201135_0001 for user-log deletion with retainTimeStamp:1366573782822
hadoop-hadoop-tasktracker-myhost3.log
2013-04-20 12:35:43,798 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2013-04-20 12:35:44,200 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-20 12:35:44,207 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-20 12:35:44,382 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-20 12:35:44,467 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-20 12:35:44,496 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-20 12:35:44,500 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as hadoop
2013-04-20 12:35:44,501 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-hadoop/mapred/local
2013-04-20 12:35:44,506 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-04-20 12:35:44,520 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-20 12:35:44,521 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source TaskTrackerMetrics registered.
2013-04-20 12:35:49,567 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort39327 registered.
2013-04-20 12:35:49,568 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort39327 registered.
2013-04-20 12:35:49,575 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:39327
2013-04-20 12:35:49,575 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_myhost3:localhost/127.0.0.1:39327
2013-04-20 12:35:49,586 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 39327: starting
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 39327: starting
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 39327: starting
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 39327: starting
2013-04-20 12:35:49,587 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 39327: starting
2013-04-20 12:35:50,617 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:35:51,617 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 1 time(s).
2013-04-20 12:35:52,618 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 2 time(s).
2013-04-20 12:35:53,618 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 3 time(s).
2013-04-20 12:35:54,619 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 4 time(s).
2013-04-20 12:35:55,619 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 5 time(s).
2013-04-20 12:35:56,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 6 time(s).
2013-04-20 12:35:57,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 7 time(s).
2013-04-20 12:35:58,621 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 8 time(s).
2013-04-20 12:35:59,622 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 9 time(s).
2013-04-20 12:35:59,625 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7001 not available yet, Zzzzz...
2013-04-20 12:36:01,624 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:36:02,625 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 1 time(s).
2013-04-20 12:36:03,624 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 2 time(s).
2013-04-20 12:36:04,625 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 3 time(s).
2013-04-20 12:36:05,624 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 4 time(s).
2013-04-20 12:36:06,625 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 5 time(s).
2013-04-20 12:36:07,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 6 time(s).
2013-04-20 12:36:08,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 7 time(s).
2013-04-20 12:36:09,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 8 time(s).
2013-04-20 12:36:10,628 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 9 time(s).
2013-04-20 12:36:10,629 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7001 not available yet, Zzzzz...
2013-04-20 12:36:12,629 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7001. Already tried 0 time(s).
2013-04-20 12:36:42,982 INFO org.apache.hadoop.mapred.TaskTracker: Using ResourceCalculatorPlugin : null
2013-04-20 12:36:43,000 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_myhost3:localhost/127.0.0.1:39327
2013-04-20 12:36:43,002 WARN org.apache.hadoop.util.ProcessTree: setsid is not available on this machine. So not using it.
2013-04-20 12:36:43,002 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-04-20 12:36:43,157 INFO org.apache.hadoop.util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
2013-04-20 12:36:43,157 INFO org.apache.hadoop.mapred.TaskTracker: ProcessTree implementation is missing on this system. TaskMemoryManager is disabled.
2013-04-20 12:36:43,165 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
2013-04-20 12:36:43,173 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ShuffleServerMetrics registered.
2013-04-20 12:36:43,176 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
2013-04-20 12:36:43,176 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
2013-04-20 12:36:43,176 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
2013-04-20 12:36:43,176 INFO org.mortbay.log: jetty-6.1.26
2013-04-20 12:36:43,567 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
2013-04-20 12:36:43,567 INFO org.apache.hadoop.mapred.TaskTracker: FILE_CACHE_SIZE for mapOutputServlet set to : 2000
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304060623_0002 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304060623_0005 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070459_0001 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304030513_0001 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070413_0001 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304012201_0015 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304192207_0001 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304121018_0003 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,576 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304112319_0001 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,577 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304112319_0005 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:36:43,577 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201304070459_0003 for user-log deletion with retainTimeStamp:1366573003161
2013-04-20 12:49:45,627 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201304201135_0001
2013-04-20 12:49:45,628 WARN org.apache.hadoop.mapred.TaskTracker: Unknown job job_201304201135_0001 being deleted.
Best answer
Log4j output is written to the task logs of each map/reduce task attempt; it is not routed back to the job client's stdout/stderr unless the map/reduce task fails in some way.
You need to locate the task attempt through the JobTracker, and then look at the logs of that specific map/reduce task attempt.
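If the goal is just to see a handful of values at the job client, one practical alternative is to publish them as counters: counter totals are aggregated by the JobTracker and printed in the client console when the job finishes, unlike per-task log4j output. A minimal sketch, assuming the old org.apache.hadoop.mapred API used in the question (a fragment to be placed inside map(), not a standalone program; the group and counter names are illustrative):

```java
// Publish the value as a counter instead of (or in addition to) logging it;
// the aggregated total is printed by the job client when the job completes.
reporter.incrCounter("ParallelIndexation", "CountComputers", CountComputers);
```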
Regarding "java - outputting a Hadoop program's variables with log4j", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16111746/