I am running into a serious problem when I run a Hive query on my master node. I have a 3-node cluster setup (1 name node, 2 data nodes).
Versions:
Hadoop: 2.7.3
Hive: 2.1.0
Java: openjdk version "1.8.0_111"
OS: Ubuntu 16.04.1
Environment: Amazon EC2
I installed Hive on the master node and started all the daemons from the master using start-dfs.sh and start-yarn.sh. I checked all the daemons on both the master and the slave nodes and they were running fine (a quick way to check them is sketched below). When I connect to Hive and run a sample query, all the daemons on the master node stop, while the daemons on the data nodes keep running. Please find the log details from hadoop-hduser-datanode-namenode.log after the sketch.
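A daemon check along these lines works (a minimal sketch, assuming a standard Hadoop 2.x installation with the sbin scripts on the PATH and passwordless SSH to the slaves; slave1 and slave2 are placeholder hostnames):

# On the master: start HDFS and YARN, then list the running Java daemons
start-dfs.sh
start-yarn.sh
jps    # expect NameNode, SecondaryNameNode, ResourceManager (plus DataNode/NodeManager if colocated)

# On each slave: confirm the worker daemons are up
ssh slave1 jps    # expect DataNode and NodeManager
ssh slave2 jps

# Optional: overall HDFS health report from the master
hdfs dfsadmin -report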
2016-11-26 10:55:45,667 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 45414
2016-11-26 10:55:45,667 INFO org.mortbay.log: jetty-6.1.26
2016-11-26 10:55:45,794 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45414
2016-11-26 10:55:45,862 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2016-11-26 10:55:45,913 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = user123
2016-11-26 10:55:45,913 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2016-11-26 10:55:45,940 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-11-26 10:55:45,950 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-11-26 10:55:45,970 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-11-26 10:55:45,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-11-26 10:55:46,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2016-11-26 10:55:46,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to namenode/175.45.20.822:9000 starting to offer service
2016-11-26 10:55:46,022 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-11-26 10:55:46,022 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-11-26 10:55:46,308 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2016-11-26 10:55:46,315 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-user123/dfs/data/in_use.lock acquired by nodename 15455@namenode
2016-11-26 10:55:46,357 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1195836218-175.45.20.822-1479972457866
2016-11-26 10:55:46,357 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866
2016-11-26 10:55:46,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=148839353;bpid=BP-1195836218-175.45.20.822-1479972457866;lv=-56;nsInfo=lv=-63;cid=CID-c331d6bd-518b-4b20-a20b-b3bfb3c2896f;nsid=148839353;c=0;bpid=BP-1195836218-175.45.20.822-1479972457866;dnuuid=c98d92d6-14ac-4722-acb4-60727105f60c
2016-11-26 10:55:46,395 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-135f7e8e-5984-47e7-89f2-d41e1bf2cf36
2016-11-26 10:55:46,396 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /tmp/hadoop-user123/dfs/data/current, StorageType: DISK
2016-11-26 10:55:46,401 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2016-11-26 10:55:46,402 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1195836218-175.45.20.822-1479972457866
2016-11-26 10:55:46,403 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1195836218-175.45.20.822-1479972457866 on volume /tmp/hadoop-user123/dfs/data/current...
2016-11-26 10:55:46,413 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1195836218-175.45.20.822-1479972457866 on /tmp/hadoop-user123/dfs/data/current: 11ms
2016-11-26 10:55:46,414 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1195836218-175.45.20.822-1479972457866: 12ms
2016-11-26 10:55:46,414 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1195836218-175.45.20.822-1479972457866 on volume /tmp/hadoop-user123/dfs/data/current...
2016-11-26 10:55:46,429 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1195836218-175.45.20.822-1479972457866 on volume /tmp/hadoop-user123/dfs/data/current: 15ms
2016-11-26 10:55:46,429 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 15ms
2016-11-26 10:55:46,497 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/tmp/hadoop-user123/dfs/data, DS-135f7e8e-5984-47e7-89f2-d41e1bf2cf36): no suitable block pools found to scan. Waiting 1629134398 ms.
2016-11-26 10:55:46,499 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1480165886499 with interval 21600000
2016-11-26 10:55:46,501 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1195836218-175.45.20.822-1479972457866 (Datanode Uuid null) service to namenode/175.45.20.822:9000 beginning handshake with NN
2016-11-26 10:55:46,510 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1195836218-175.45.20.822-1479972457866 (Datanode Uuid null) service to namenode/175.45.20.822:9000 successfully registered with NN
2016-11-26 10:55:46,510 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode namenode/175.45.20.822:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-11-26 10:55:46,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1195836218-175.45.20.822-1479972457866 (Datanode Uuid c98d92d6-14ac-4722-acb4-60727105f60c) service to namenode/175.45.20.822:9000 trying to claim ACTIVE state with txid=6318
2016-11-26 10:55:46,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1195836218-175.45.20.822-1479972457866 (Datanode Uuid c98d92d6-14ac-4722-acb4-60727105f60c) service to namenode/175.45.20.822:9000
2016-11-26 10:55:46,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xe4aa15bb18cd, containing 1 storage report(s), of which we sent 1. The reports had 78 total blocks and used 1 RPC(s). This took 3 msec to generate and 68 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-11-26 10:55:46,636 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1195836218-175.45.20.822-1479972457866
2016-11-26 10:56:46,536 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(175.45.20.822:50010, datanodeUuid=c98d92d6-14ac-4722-acb4-60727105f60c, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-c331d6bd-518b-4b20-a20b-b3bfb3c2896f;nsid=148839353;c=0) Starting thread to transfer BP-1195836218-175.45.20.822-1479972457866:blk_1073742505_1686 to 175.45.20.823:50010
2016-11-26 10:56:46,569 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer: Transmitted BP-1195836218-175.45.20.822-1479972457866:blk_1073742505_1686 (numBytes=5798) to /175.45.20.823:50010
2016-11-26 11:12:57,577 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742514_1695 src: /175.45.20.822:36416 dest: /175.45.20.822:50010
2016-11-26 11:12:57,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36416, dest: /175.45.20.822:50010, bytes: 5780, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742514_1695, duration: 68626648
2016-11-26 11:12:57,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742514_1695, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:12:57,712 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742515_1696 src: /175.45.20.822:36420 dest: /175.45.20.822:50010
2016-11-26 11:12:57,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36420, dest: /175.45.20.822:50010, bytes: 5175, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742515_1696, duration: 38025952
2016-11-26 11:12:57,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742515_1696, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:12:58,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742516_1697 src: /175.45.20.822:36426 dest: /175.45.20.822:50010
2016-11-26 11:12:58,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36426, dest: /175.45.20.822:50010, bytes: 32414403, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742516_1697, duration: 313068763
2016-11-26 11:12:58,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742516_1697, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:12:58,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742517_1698 src: /175.45.20.822:36430 dest: /175.45.20.822:50010
2016-11-26 11:12:58,651 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36430, dest: /175.45.20.822:50010, bytes: 498, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742517_1698, duration: 7376542
2016-11-26 11:12:58,651 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742517_1698, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:12:58,664 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742518_1699 src: /175.45.20.822:36434 dest: /175.45.20.822:50010
2016-11-26 11:12:58,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36434, dest: /175.45.20.822:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742518_1699, duration: 10092926
2016-11-26 11:12:58,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742518_1699, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:12:58,723 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742519_1700 src: /175.45.20.822:36438 dest: /175.45.20.822:50010
2016-11-26 11:12:58,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36438, dest: /175.45.20.822:50010, bytes: 240636, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1542813275_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742519_1700, duration: 40609207
2016-11-26 11:12:58,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742519_1700, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:04,927 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742520_1701 src: /175.45.20.822:36456 dest: /175.45.20.822:50010
2016-11-26 11:13:04,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36456, dest: /175.45.20.822:50010, bytes: 275749, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_230151483_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742520_1701, duration: 21419056
2016-11-26 11:13:04,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742520_1701, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:10,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742521_1702 src: /175.45.20.822:36466 dest: /175.45.20.822:50010
2016-11-26 11:13:16,567 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(175.45.20.822:50010, datanodeUuid=c98d92d6-14ac-4722-acb4-60727105f60c, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-c331d6bd-518b-4b20-a20b-b3bfb3c2896f;nsid=148839353;c=0) Starting thread to transfer BP-1195836218-175.45.20.822-1479972457866:blk_1073742515_1696 to 175.45.20.823:50010
2016-11-26 11:13:16,568 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer: Transmitted BP-1195836218-175.45.20.822-1479972457866:blk_1073742515_1696 (numBytes=5175) to /175.45.20.823:50010
2016-11-26 11:13:18,158 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36466, dest: /175.45.20.822:50010, bytes: 35339, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_230151483_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742521_1702, duration: 7374403211
2016-11-26 11:13:18,159 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742521_1702, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:18,172 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742523_1704 src: /175.45.20.822:36476 dest: /175.45.20.822:50010
2016-11-26 11:13:18,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36476, dest: /175.45.20.822:50010, bytes: 392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_230151483_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742523_1704, duration: 7185360
2016-11-26 11:13:18,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742523_1704, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:18,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742524_1705 src: /175.45.20.822:36482 dest: /175.45.20.822:50010
2016-11-26 11:13:18,215 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36482, dest: /175.45.20.822:50010, bytes: 35339, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_230151483_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742524_1705, duration: 5068037
2016-11-26 11:13:18,215 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742524_1705, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:18,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1195836218-175.45.20.822-1479972457866:blk_1073742525_1706 src: /175.45.20.822:36486 dest: /175.45.20.822:50010
2016-11-26 11:13:18,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /175.45.20.822:36486, dest: /175.45.20.822:50010, bytes: 275749, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_230151483_1, offset: 0, srvID: c98d92d6-14ac-4722-acb4-60727105f60c, blockid: BP-1195836218-175.45.20.822-1479972457866:blk_1073742525_1706, duration: 6277417
2016-11-26 11:13:18,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1195836218-175.45.20.822-1479972457866:blk_1073742525_1706, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-11-26 11:13:22,568 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742514_1695 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742514 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742515_1696 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742515 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742516_1697 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742516 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742517_1698 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742517 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742518_1699 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742518 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742519_1700 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742519 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742520_1701 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742520 for deletion
2016-11-26 11:13:22,569 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742521_1702 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742521 for deletion
2016-11-26 11:13:22,571 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742514_1695 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742514
2016-11-26 11:13:22,571 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742515_1696 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742515
2016-11-26 11:13:22,576 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742516_1697 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742516
2016-11-26 11:13:22,576 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742517_1698 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742517
2016-11-26 11:13:22,577 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742518_1699 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742518
2016-11-26 11:13:22,577 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742519_1700 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742519
2016-11-26 11:13:22,577 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742520_1701 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742520
2016-11-26 11:13:22,577 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1195836218-175.45.20.822-1479972457866 blk_1073742521_1702 file /tmp/hadoop-user123/dfs/data/current/BP-1195836218-175.45.20.822-1479972457866/current/finalized/subdir0/subdir2/blk_1073742521
2016-11-26 11:13:24,722 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2016-11-26 11:13:24,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at namenode/175.45.20.822
************************************************************/
Thanks in advance.
Best Answer
After some research, I started all the daemons on the name node again and ran the following command, and everything worked fine.
hadoop dfsadmin -refreshNodes
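For context, the full sequence this corresponds to is roughly the following (a sketch; the verification steps at the end are an assumption, not part of the original fix):

# Restart the daemons from the master node
start-dfs.sh
start-yarn.sh

# Ask the NameNode to re-read its include/exclude host lists
hadoop dfsadmin -refreshNodes    # on Hadoop 2.x, hdfs dfsadmin -refreshNodes is the non-deprecated form

# Verify the daemons and the HDFS report
jps
hdfs dfsadmin -report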
Thanks.
Regarding "hadoop - ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40818108/