
linux - DFS Used%: 100.00% - slave VMs down in Hadoop

Reposted · Author: 可可西里 · Updated: 2023-11-01 15:02:17

My slave VMs went down, and I suspect it is because DFS Used is at 100%. Can you suggest a systematic way to troubleshoot this? Is it a firewall issue, a capacity issue, or something else, and how do I fix it?

ubuntu@anmol-vm1-new:~$  hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/12/13 22:25:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 845446217728 (787.38 GB)
Present Capacity: 797579996211 (742.80 GB)
DFS Remaining: 794296401920 (739.75 GB)
DFS Used: 3283594291 (3.06 GB)
DFS Used%: 0.41%
Under replicated blocks: 1564
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (4 total, 2 dead)

Live datanodes:
Name: 10.0.1.190:50010 (anmol-vm1-new)
Hostname: anmol-vm1-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1641142625 (1.53 GB)
Non DFS Used: 25955075743 (24.17 GB)
DFS Remaining: 395126890496 (367.99 GB)
DFS Used%: 0.39%
DFS Remaining%: 93.47%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Name: 10.0.1.193:50010 (anmol-vm4-new)
Hostname: anmol-vm4-new
Decommission Status : Normal
Configured Capacity: 422723108864 (393.69 GB)
DFS Used: 1642451666 (1.53 GB)
Non DFS Used: 21911145774 (20.41 GB)
DFS Remaining: 399169511424 (371.76 GB)
DFS Used%: 0.39%
DFS Remaining%: 94.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:25:51 UTC 2015


Dead datanodes:
Name: 10.0.1.191:50010 (anmol-vm2-new)
Hostname: anmol-vm2-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 21:20:12 UTC 2015


Name: 10.0.1.192:50010 (anmol-vm3-new)
Hostname: anmol-vm3-new
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Sun Dec 13 22:09:27 UTC 2015

Best Answer

There is only one filesystem in the VM. Log in as root:

  1. df -h (one of the mount points will show ~100% usage)
  2. du -sh /* (this lists the size of each top-level directory)
  3. If any directory other than your namenode and datanode directories is taking too much space, you can start cleaning up there.
  4. You can also run hadoop fs -du -s -h /user/hadoop (to see the usage of that directory).
  5. Identify all unneeded directories and clean them up by running hadoop fs -rm -R /user/hadoop/raw_data (-rm deletes, -R deletes recursively; be careful with -R).
  6. Run hadoop fs -expunge (to empty the trash immediately; sometimes it needs to be run multiple times).
  7. Run hadoop fs -du -s -h / (it gives you the HDFS usage of the whole filesystem), or run hadoop dfsadmin -report again to confirm that storage has been reclaimed.
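The steps above can be sketched as one shell script. This is a hedged sketch: the HDFS path /user/hadoop/raw_data is just the example directory from the answer, so substitute your own paths before running anything destructive:

```shell
#!/bin/sh
# Sketch of the cleanup steps from the answer above.
# /user/hadoop/raw_data is the answer's example path, not a real target.

# Steps 1-2: find the full mount point and the directories filling it.
df -h
du -sh /* 2>/dev/null

# Steps 4-7 need the Hadoop CLI, so guard against running off-cluster.
if command -v hadoop >/dev/null 2>&1; then
    hadoop fs -du -s -h /user/hadoop        # step 4: per-directory HDFS usage
    hadoop fs -rm -R /user/hadoop/raw_data  # step 5: delete unneeded data (careful with -R)
    hadoop fs -expunge                      # step 6: empty the HDFS trash now
    hadoop fs -du -s -h /                   # step 7: whole-filesystem HDFS usage
    hadoop dfsadmin -report                 # confirm storage was reclaimed
else
    echo "hadoop CLI not found; run these steps on a cluster node"
fi
```

Note that `hadoop fs -rm` moves files to the trash by default, which is why step 6 (`-expunge`) is needed before the space is actually reclaimed.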

Regarding "linux - DFS Used%: 100.00% - slave VMs down in Hadoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34257154/
