I am using Hadoop 2.2.0 with HA. Here is my configuration.
core-site.xml
<property>
<name>ha.zookeeper.quorum</name>
<value>zk01.bi.lietou.inc:2181,zk02.bi.lietou.inc:2181,zk03.bi.lietou.inc:2181</value>
</property>
<property>
<name>ipc.client.connect.timeout</name>
<value>20000</value>
</property>
hdfs-site.xml
<property>
<name>dfs.nameservices</name>
<value>lynxcluster</value>
</property>
<property>
<name>dfs.ha.namenodes.lynxcluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.lynxcluster.nn1</name>
<value>192.168.30.133:2020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.lynxcluster.nn2</name>
<value>192.168.30.129:2020</value>
</property>
<property>
<name>dfs.namenode.http-address.lynxcluster.nn1</name>
<value>192.168.30.133:2070</value>
</property>
<property>
<name>dfs.namenode.http-address.lynxcluster.nn2</name>
<value>192.168.30.129:2070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://192.168.30.134:8485;192.168.30.135:8485;192.168.30.136:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.lynxcluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.qjournal.write-txns.timeout.ms</name>
<value>6000000</value>
</property>
192.168.30.129 is the active NN and 192.168.30.133 is the standby NN.
The active NN hit an error while starting a log segment:
2015-09-26 22:09:06,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.30.133
2015-09-26 22:09:06,044 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2015-09-26 22:09:06,044 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 522618707
2015-09-26 22:09:06,179 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 16 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 10 SyncTimes(ms): 670 2033 142
2015-09-26 22:09:06,185 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data1/hadoop/name/current/edits_inprogress_0000000000522618707 -> /data1/hadoop/name/current/edits_0000000000522618707-0000000000522618722
2015-09-26 22:09:06,294 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizingedits file /data2/hadoop/name/current/edits_inprogress_0000000000522618707 -> /data2/hadoop/name/current/edits_0000000000522618707-0000000000522618722
2015-09-26 22:09:06,307 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 522618723
2015-09-26 22:09:12,308 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:13,310 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7002 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:14,310 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 8003 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:15,312 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9004 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:16,312 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10005 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:17,313 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11006 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:18,314 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 12007 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:19,315 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 13008 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:20,317 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 14009 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:21,317 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 15010 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:22,319 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 16011 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:23,319 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 17012 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:24,321 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 18013 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:25,321 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 19014 ms (timeout=20000 ms) for a response for startLogSegment(522618723). Succeeded so far: [192.168.30.134:8485]
2015-09-26 22:09:26,308 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 522618723 failed for required journal (JournalAndStream(mgr=QJM to [192.168.30.134:8485, 192.168.30.135:8485, 192.168.30.136:8485], stream=null))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.startLogSegment(QuorumJournalManager.java:387)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:91)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:199)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:352)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:196)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1029)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:998)
at org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1082)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:5050)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:832)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:139)
at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:11214)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
2015-09-26 22:09:26,312 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-09-26 22:09:26,319 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lynx001-bi-30-129.liepin.inc/192.168.30.129
************************************************************/
After 129 shut down, 133 remained in standby. Standby NN log:
2015-09-26 22:09:27,651 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "lynx-bi-30-133.liepin.inc/192.168.30.133"; destination host is: "lynx001-bi-30-129.liepin.inc":2020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:268)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:310)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
Just before this error, the log started flooding with:
2015-09-26 22:03:00,941 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:00,941 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 192.168.30.131:35882Call#7495335 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:01,135 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:01,135 INFO org.apache.hadoop.ipc.Server: IPC Server handler 45 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 192.168.30.131:35886 Call#7495346 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:06,050 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:06,050 INFO org.apache.hadoop.ipc.Server: IPC Server handler 19 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.30.131:35891 Call#1 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
JN log:
2015-09-26 22:09:44,395 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from 192.168.30.129:30015 Call#157803 Retry#0: output error
2015-09-26 22:09:45,400 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8485 caught an exception
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2577)
at org.apache.hadoop.ipc.Server.access$2200(Server.java:122)
at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1011)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1076)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2104)
I tried increasing the IPC timeout to 60 seconds, but it did not help.
Best Answer
I believe the timeout in play is dfs.qjournal.start-segment.timeout.ms, whose default is 20000.
There are other settings you may also need to tune, such as dfs.qjournal.write-txns.timeout.ms.
That said, rather than raising these defaults, you are better off fixing the underlying infrastructure problem. There are quite a few properties that define how the NameNodes manage their various kinds of connections and timeouts to the JournalManager.
In my case, I added the following custom properties to hdfs-site.xml:
dfs.qjournal.start-segment.timeout.ms = 90000
dfs.qjournal.select-input-streams.timeout.ms = 90000
dfs.qjournal.write-txns.timeout.ms = 90000
I also added the following property to core-site.xml:
ipc.client.connect.timeout = 90000
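For reference, written out in the same `<property>` XML syntax as the configuration at the top of the question, those additions would look like the sketch below (90000 ms is simply the value that worked in my environment; tune it to your own network and disks):

```xml
<!-- hdfs-site.xml -->
<property>
<name>dfs.qjournal.start-segment.timeout.ms</name>
<value>90000</value>
</property>
<property>
<name>dfs.qjournal.select-input-streams.timeout.ms</name>
<value>90000</value>
</property>
<property>
<name>dfs.qjournal.write-txns.timeout.ms</name>
<value>90000</value>
</property>

<!-- core-site.xml -->
<property>
<name>ipc.client.connect.timeout</name>
<value>90000</value>
</property>
```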
So far this appears to have mitigated the problem.
Regarding "Hadoop HA active NN keeps crashing and automatic failover does not work", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32802902/