I am seeing very different startup/connection times between runs. My cluster has three server nodes. For testing, I want to run some tasks and cache operations from my client node (which actually sits on one of the three servers). However, when I start the client it can take up to five minutes before it actually connects properly, while on another start, with the same client and the same configuration, it only takes a few seconds.
In the runs where the client node takes a long time to start, the log looks like this:
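For context, the client is started in the usual way; the following is only a minimal sketch of that setup (the configuration file name and cache name are placeholders, not taken from the original post):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class TestClient {
    public static void main(String[] args) {
        // Run this JVM as a client node; the server nodes use the same
        // XML configuration shown further down (file name is a placeholder).
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("test-cluster-config.xml")) {
            // Simple cache operation once the client has joined the topology.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("testCache");
            cache.put(1, "hello");
            System.out.println("Joined topology, cache read: " + cache.get(1));
        }
    }
}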
[13:35:31,649][INFO][ignite-update-notifier-timer][GridUpdateNotifier] Your version is up to date.
[13:37:21,794][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Failed to wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], node=eec8ea18-ded1-42cd-aec7-2af754644008]. Dumping pending objects that might be the cause:
[13:37:21,794][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Ready affinity version: AffinityTopologyVersion [topVer=-1, minorTopVer=0]
[13:37:21,802][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Last exchange future: GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=0, lastExchangeTime=1526060111449, loc=true, ver=2.4.0#20180305-sha1:aa342270, isClient=true], topVer=16, nodeId8=eec8ea18, msg=null, type=NODE_JOINED, tstamp=1526060121610], crd=TcpDiscoveryNode [id=c74ff028-1676-4f1a-8c95-563763ea5875, addrs=[127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /127.0.0.1:47500, centos_node_2/192.168.0.162:47500], discPort=47500, order=7, intOrder=5, lastExchangeTime=1526060116544, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=0, lastExchangeTime=1526060111449, loc=true, ver=2.4.0#20180305-sha1:aa342270, isClient=true], topVer=16, nodeId8=eec8ea18, msg=null, type=NODE_JOINED, tstamp=1526060121610], nodeId=eec8ea18, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=true, hash=2024590198], init=true, lastVer=null, partReleaseFut=null, exchActions=ExchangeActions [startCaches=null, stopCaches=null, startGrps=[], stopGrps=[], resetParts=null, stateChangeRequest=null], affChangeMsg=null, initTs=1526060121650, centralizedAff=false, changeGlobalStateE=null, done=false, state=CLIENT, evtLatch=0, remaining=[830bbef7-0344-4955-bdf6-ff90f6d96602, b0105fdc-5298-4f80-94ae-2f1bbd8b42e8, c74ff028-1676-4f1a-8c95-563763ea5875], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=189344266]]
[13:37:21,803][WARNING][exchange-worker-#157%Test Cluster%][GridCachePartitionExchangeManager] First 10 pending exchange futures [total=0]
[13:37:21,806][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Last 10 exchange futures (total: 1):
[13:37:21,807][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] >>> GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=0, lastExchangeTime=1526060111449, loc=true, ver=2.4.0#20180305-sha1:aa342270, isClient=true], done=false]
[13:37:21,807][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending transactions:
[13:37:21,807][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending explicit locks:
[13:37:21,807][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending cache futures:
[13:37:21,807][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending atomic cache futures:
[13:37:21,808][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending data streamer futures:
[13:37:21,808][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Pending transaction deadlock detection futures:
[13:37:21,840][INFO][sys-#158%Test Cluster%][diagnostic] Exchange future waiting for coordinator response [crd=c74ff028-1676-4f1a-8c95-563763ea5875, topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0]]
Remote node information:
General node info [id=c74ff028-1676-4f1a-8c95-563763ea5875, client=false, discoTopVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], time=13:37:21.812]
Partitions exchange info [readyVer=AffinityTopologyVersion [topVer=14, minorTopVer=0]]
Last initialized exchange future: GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=830bbef7-0344-4955-bdf6-ff90f6d96602, addrs=[127.0.0.1, 192.168.0.161], sockAddrs=[/127.0.0.1:47500, /192.168.0.161:47500], discPort=47500, order=15, intOrder=9, lastExchangeTime=1526060055205, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], topVer=15, nodeId8=c74ff028, msg=Node joined: TcpDiscoveryNode [id=830bbef7-0344-4955-bdf6-ff90f6d96602, addrs=[127.0.0.1, 192.168.0.161], sockAddrs=[/127.0.0.1:47500, /192.168.0.161:47500], discPort=47500, order=15, intOrder=9, lastExchangeTime=1526060055205, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], type=NODE_JOINED, tstamp=1526060060363], crd=TcpDiscoveryNode [id=c74ff028-1676-4f1a-8c95-563763ea5875, addrs=[127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /127.0.0.1:47500, centos_node_2/192.168.0.162:47500], discPort=47500, order=7, intOrder=5, lastExchangeTime=1526059855998, loc=true, ver=2.4.0#20180305-sha1:aa342270, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=830bbef7-0344-4955-bdf6-ff90f6d96602, addrs=[127.0.0.1, 192.168.0.161], sockAddrs=[/127.0.0.1:47500, /192.168.0.161:47500], discPort=47500, order=15, intOrder=9, lastExchangeTime=1526060055205, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], topVer=15, nodeId8=c74ff028, msg=Node joined: TcpDiscoveryNode [id=830bbef7-0344-4955-bdf6-ff90f6d96602, addrs=[127.0.0.1, 192.168.0.161], sockAddrs=[/127.0.0.1:47500, /192.168.0.161:47500], discPort=47500, order=15, intOrder=9, lastExchangeTime=1526060055205, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], type=NODE_JOINED, tstamp=1526060060363], nodeId=830bbef7, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=true, hash=1568621067], init=true, lastVer=GridCacheVersion [topVer=0, order=1526059954164, nodeOrder=0], partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], futures=[]], TxReleaseFuture [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], futures=[]], DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], futures=[]]]], exchActions=null, affChangeMsg=null, initTs=1526060227922, centralizedAff=false, changeGlobalStateE=null, done=false, state=CRD, evtLatch=0, remaining=[830bbef7-0344-4955-bdf6-ff90f6d96602], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=410898272]]
Communication SPI statistics [rmtNode=eec8ea18-ded1-42cd-aec7-2af754644008]
Communication SPI recovery descriptors:
[key=ConnectionKey [nodeId=eec8ea18-ded1-42cd-aec7-2af754644008, idx=0, connCnt=0], msgsSent=0, msgsAckedByRmt=0, msgsRcvd=2, lastAcked=0, reserveCnt=1, descIdHash=310748176]
Communication SPI clients:
[node=eec8ea18-ded1-42cd-aec7-2af754644008, client=GridTcpNioCommunicationClient [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=4, bytesRcvd=961, bytesSent=28, bytesRcvd0=853, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-4, igniteInstanceName=Test Cluster, finished=false, hashCode=474105904, interrupted=false, runner=grid-nio-worker-tcp-comm-4-#125%Test Cluster%]]], writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], inRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=2, sentCnt=0, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], connected=true, connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false], outRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=2, sentCnt=0, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], connected=true, connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false], super=GridNioSessionImpl [locAddr=/127.0.0.1:47100, rmtAddr=/127.0.0.1:59666, createTime=1526060121775, closeTime=0, bytesSent=28, bytesRcvd=961, bytesSent0=0, bytesRcvd0=853, sndSchedTime=1526060121775, lastSndTime=1526060121786, lastRcvTime=1526060241812, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=org.apache.ignite.internal.util.nio.GridDirectParser@3f6752aa, directMode=true], GridConnectionBytesVerifyFilter], accepted=true]], super=GridAbstractCommunicationClient [lastUsed=1526060121786, closed=false, connIdx=0]]]
NIO sessions statistics:
>> Selector info [idx=4, keysCnt=1, bytesRcvd=961, bytesRcvd0=853, bytesSent=28, bytesSent0=0]
Connection info [in=true, rmtAddr=/127.0.0.1:59666, locAddr=/127.0.0.1:47100, msgsSent=0, msgsAckedByRmt=0, descIdHash=310748176, msgsRcvd=2, lastAcked=0, descIdHash=310748176, bytesRcvd=961, bytesRcvd0=853, bytesSent=28, bytesSent0=0, opQueueSize=0]
Exchange future: GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], topVer=16, nodeId8=c74ff028, msg=Node joined: TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], type=NODE_JOINED, tstamp=1526060116548], crd=null, exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], topVer=16, nodeId8=c74ff028, msg=Node joined: TcpDiscoveryNode [id=eec8ea18-ded1-42cd-aec7-2af754644008, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /192.168.122.1:0, centos_node_2/192.168.0.162:0], discPort=0, order=16, intOrder=10, lastExchangeTime=1526060116518, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true], type=NODE_JOINED, tstamp=1526060116548], nodeId=eec8ea18, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1818763044], init=false, lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null, initTs=0, centralizedAff=false, changeGlobalStateE=null, done=false, state=null, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1650837648]]
Local communication statistics:
Communication SPI statistics [rmtNode=c74ff028-1676-4f1a-8c95-563763ea5875]
Communication SPI recovery descriptors:
[key=ConnectionKey [nodeId=c74ff028-1676-4f1a-8c95-563763ea5875, idx=0, connCnt=-1], msgsSent=2, msgsAckedByRmt=0, msgsRcvd=1, lastAcked=0, reserveCnt=1, descIdHash=1306648390]
Communication SPI clients:
[node=c74ff028-1676-4f1a-8c95-563763ea5875, client=GridTcpNioCommunicationClient [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=0, bytesRcvd=8421, bytesSent=919, bytesRcvd0=8421, bytesSent0=853, select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-0, igniteInstanceName=Test Cluster, finished=false, hashCode=1972519349, interrupted=false, runner=grid-nio-worker-tcp-comm-0-#121%Test Cluster%]]], writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], inRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=1, sentCnt=2, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode [id=c74ff028-1676-4f1a-8c95-563763ea5875, addrs=[127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /127.0.0.1:47500, centos_node_2/192.168.0.162:47500], discPort=47500, order=7, intOrder=5, lastExchangeTime=1526060116544, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], connected=false, connectCnt=1, queueLimit=4096, reserveCnt=1, pairedConnections=false], outRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=1, sentCnt=2, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode [id=c74ff028-1676-4f1a-8c95-563763ea5875, addrs=[127.0.0.1, 192.168.0.162, 192.168.122.1], sockAddrs=[/192.168.122.1:47500, /127.0.0.1:47500, centos_node_2/192.168.0.162:47500], discPort=47500, order=7, intOrder=5, lastExchangeTime=1526060116544, loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false], connected=false, connectCnt=1, queueLimit=4096, reserveCnt=1, pairedConnections=false], super=GridNioSessionImpl [locAddr=/127.0.0.1:59666, rmtAddr=/127.0.0.1:47100, createTime=1526060121782, closeTime=0, bytesSent=919, bytesRcvd=8421, bytesSent0=853, bytesRcvd0=8421, sndSchedTime=1526060121782, lastSndTime=1526060241815, lastRcvTime=1526060241815, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=org.apache.ignite.internal.util.nio.GridDirectParser@619e0deb, directMode=true], GridConnectionBytesVerifyFilter], accepted=false]], super=GridAbstractCommunicationClient [lastUsed=1526060121792, closed=false, connIdx=0]]]
NIO sessions statistics:
>> Selector info [idx=0, keysCnt=1, bytesRcvd=8421, bytesRcvd0=8421, bytesSent=919, bytesSent0=853]
Connection info [in=false, rmtAddr=/127.0.0.1:47100, locAddr=/127.0.0.1:59666, msgsSent=2, msgsAckedByRmt=0, descIdHash=1306648390, unackedMsgs=[GridDhtPartitionsSingleMessage, IgniteDiagnosticMessage], msgsRcvd=1, lastAcked=0, descIdHash=1306648390, bytesRcvd=8421, bytesRcvd0=8421, bytesSent=919, bytesSent0=853, opQueueSize=0]
[13:39:21,652][WARNING][main][GridCachePartitionExchangeManager] Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
[13:39:21,817][WARNING][exchange-worker-#157%Test Cluster%][diagnostic] Failed to wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], node=eec8ea18-ded1-42cd-aec7-2af754644008]. Dumping pending objects that might be the cause:
[13:40:43,347][INFO][sys-#159%Test Cluster%][GridDhtPartitionsExchangeFuture] Received full message, will finish exchange [node=c74ff028-1676-4f1a-8c95-563763ea5875, resVer=AffinityTopologyVersion [topVer=16, minorTopVer=0]]
[13:40:43,354][INFO][sys-#159%Test Cluster%][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=16, minorTopVer=0], err=null]
[13:40:43,395][INFO][main][IgniteKernal%Test Cluster] Performance suggestions for grid 'Test Cluster' (fix if possible)
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[13:40:43,396][INFO][main][IgniteKernal%Test Cluster] ^-- Speed up flushing of dirty pages by OS (alter vm.dirty_expire_centisecs parameter by setting to 500)
[13:40:43,397][INFO][main][IgniteKernal%Test Cluster] ^-- Reduce pages swapping ratio (set vm.swappiness=10)
[13:40:43,397][INFO][main][IgniteKernal%Test Cluster] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[13:40:43,397][INFO][main][IgniteKernal%Test Cluster]
[13:40:43,397][INFO][main][IgniteKernal%Test Cluster] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[13:40:43,398][INFO][main][IgniteKernal%Test Cluster]
[13:40:43,401][INFO][grid-nio-worker-tcp-comm-1-#122%Test Cluster%][TcpCommunicationSpi] Established outgoing communication connection [locAddr=/192.168.0.162:40742, rmtAddr=/192.168.0.161:47100]
[13:40:43,403][INFO][main][IgniteKernal%Test Cluster]
>>> +----------------------------------------------------------------------+
>>> Ignite ver. 2.4.0#20180305-sha1:aa342270b13cc1f4713382a8eb23b2eb7edaa3a5
>>> +----------------------------------------------------------------------+
>>> OS name: Linux 3.10.0-693.el7.x86_64 amd64
>>> CPU(s): 56
>>> Heap: 6.9GB
>>> VM name: 78579@centos_node_2
>>> Ignite instance name: Test Cluster
>>> Local node [ID=EEC8EA18-DED1-42CD-AEC7-2AF754644008, order=16, clientMode=true]
>>> Local node addresses: [centos_node_2/0:0:0:0:0:0:0:1%lo, centos_node_2/127.0.0.1, /192.168.0.162, /192.168.122.1]
>>> Local ports: TCP:10801 TCP:47101
[13:40:43,406][INFO][main][GridDiscoveryManager] Topology snapshot [ver=16, servers=3, clients=1, CPUs=168, offheap=16.0GB, heap=19.0GB]
[13:40:43,406][INFO][main][GridDiscoveryManager] Data Regions Configured:
[13:40:43,406][INFO][main][GridDiscoveryManager] ^-- default [initSize=4.0 GiB, maxSize=4.0 GiB, persistenceEnabled=false]
[13:40:43,413][INFO][main][GridDeploymentLocalStore] Class locally deployed: class TestCluster$1
[13:40:45,026][INFO][exchange-worker-#157%Test Cluster%][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=1], crd=false, evt=DISCOVERY_CUSTOM_EVT, evtNode=c74ff028-1676-4f1a-8c95-563763ea5875, customEvt=CacheAffinityChangeMessage [id=c6771405361-ef621a9a-86e4-426a-958d-c53f0d9c0e25, topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], exchId=null, partsMsg=null, exchangeNeeded=true], allowMerge=false]
[13:40:45,028][INFO][exchange-worker-#157%Test Cluster%][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=16, minorTopVer=1], crd=false]
[13:40:45,037][INFO][sys-#165%Test Cluster%][GridDhtPartitionsExchangeFuture] Received full message, will finish exchange [node=c74ff028-1676-4f1a-8c95-563763ea5875, resVer=AffinityTopologyVersion [topVer=16, minorTopVer=1]]
[13:40:45,039][INFO][sys-#165%Test Cluster%][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=16, minorTopVer=1], resVer=AffinityTopologyVersion [topVer=16, minorTopVer=1], err=null]
[13:40:48,545][INFO][grid-nio-worker-tcp-comm-2-#123%Test Cluster%][TcpCommunicationSpi] Established outgoing communication connection [locAddr=/192.168.0.162:35242, rmtAddr=/192.168.0.4:47100]
[13:40:48,597][INFO][main][GridDeploymentLocalStore] Class locally deployed: class TestCluster$2
[13:40:48,676][INFO][main][GridCacheProcessor] Stopped cache [cacheName=ignite-sys-cache]
[13:40:48,678][INFO][main][GridDeploymentLocalStore] Removed undeployed class: GridDeployment [ts=1526060443326, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@330bedb4, clsLdrId=85655405361-eec8ea18-ded1-42cd-aec7-2af754644008, userVer=0, loc=true, sampleClsName=org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=true, usage=0]
[13:40:48,684][INFO][main][IgniteKernal%Test Cluster]
>>> +---------------------------------------------------------------------------------+
>>> Ignite ver. 2.4.0#20180305-sha1:aa342270b13cc1f4713382a8eb23b2eb7edaa3a5 stopped OK
>>> +---------------------------------------------------------------------------------+
>>> Ignite instance name: Test Cluster
>>> Grid uptime: 00:00:05.289
The cluster configuration looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!-- This file was generated by Ignite Web Console (05/11/2018, 23:29) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="igniteInstanceName" value="Test Cluster"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>192.168.0.4:47500..47510</value>
                                <value>192.168.0.161:47500..47510</value>
                                <value>192.168.0.162:47500..47510</value>
                            </list>
                        </property>
                    </bean>
                </property>
                <property name="ackTimeout" value="50000"/>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="connectTimeout" value="600000"/>
            </bean>
        </property>
        <property name="networkTimeout" value="60000"/>
        <property name="networkSendRetryCount" value="10"/>
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="initialSize" value="4294967296"/>
                        <property name="maxSize" value="4294967296"/>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="eventStorageSpi">
            <bean class="org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi"/>
        </property>
        <property name="failureDetectionTimeout" value="100000"/>
        <property name="clientFailureDetectionTimeout" value="100000"/>
    </bean>
</beans>
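As an aside, the discovery settings above can also be written programmatically. The following is only an illustrative sketch of the parts most relevant to join time, not code from the original post; the comment about failureDetectionTimeout reflects the startup warnings quoted further down:

import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class TestClusterConfig {
    public static IgniteConfiguration createConfig() {
        // Static IP finder listing the three server hosts.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "192.168.0.4:47500..47510",
            "192.168.0.161:47500..47510",
            "192.168.0.162:47500..47510"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);
        // Setting ackTimeout explicitly makes this SPI ignore failureDetectionTimeout,
        // which is exactly what the startup warnings below report.
        discoverySpi.setAckTimeout(50000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("Test Cluster");
        cfg.setDiscoverySpi(discoverySpi);
        cfg.setNetworkTimeout(60000);
        cfg.setFailureDetectionTimeout(100000);
        return cfg;
    }
}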
Why does it take the client node so long to connect, and why only sometimes?
Thanks for your help.
Edit: warnings logged during startup:
07:19:46.910 [main][1] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi-[warning] Failure detection timeout will be ignored (one of SPI parameters has been set explicitly)
07:20:06.953 [main][1] WARN org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi-[warning] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
07:20:06.977 [main][1] WARN org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi-[warning] Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
07:20:07.012 [main][1] WARN org.apache.ignite.internal.managers.collision.GridCollisionManager-[warning] Collision resolution is disabled (all jobs will be activated upon arrival).
07:20:22.373 [main][1] WARN org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi-[warning] Failure detection timeout will be ignored (one of SPI parameters has been set explicitly)
07:20:47.527 [main][1] WARN org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi-[warning] Node has not been connected to topology and will repeat join process. Check remote nodes logs for possible error messages. Note that large topology may require significant time to start. Increase 'TcpDiscoverySpi.networkTimeout' configuration property if getting this message on the starting nodes [networkTimeout=5000]
Best answer
When a new node joins the cluster, the cluster operations currently in progress must complete before the new cluster topology can be registered. Note the warning below.
[13:39:21,652][WARNING][main][GridCachePartitionExchangeManager] Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
Most likely you have a long-running transaction or an unreleased lock.
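In practice that means checking the client and server code for transactions that stay open for a long time and for explicit cache locks that are not released on every code path. As an illustration only (the cache name and key are placeholders; explicit locks require a TRANSACTIONAL cache), explicit locks should be released in a finally block:

import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LockRelease {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite("Test Cluster");
        IgniteCache<Integer, String> cache = ignite.cache("testCache");

        // An explicit lock held across a partition map exchange blocks joining nodes,
        // so always release it in a finally block.
        Lock lock = cache.lock(42);
        lock.lock();
        try {
            cache.put(42, "value");
        } finally {
            lock.unlock();
        }
    }
}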
Regarding "java - Ignite cluster different startup times", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50300719/