Mesos slaves cannot add themselves to the cluster. I have 3 machines, with 3 slaves and 1 master running.
But on the Mesos web UI I can see only one master and one slave (the slave on the same host as the current master). I can see Marathon running, the apps, and so on.
It is just that the other slaves cannot connect to the master.
From the logs:
I0825 21:30:00.971642 4110 slave.cpp:4193] Received oversubscribable resources from the resource estimator
I0825 21:30:01.000732 4106 group.cpp:313] Group process (group(1)@127.0.1.1:5051) connected to ZooKeeper
I0825 21:30:01.000821 4106 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0825 21:30:01.000874 4106 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0825 21:30:01.007753 4106 detector.cpp:138] Detected a new leader: (id='9')
I0825 21:30:01.008038 4106 group.cpp:656] Trying to get '/mesos/info_0000000009' in ZooKeeper
W0825 21:30:01.020577 4106 detector.cpp:444] Leading master master@127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0825 21:30:01.021152 4106 detector.cpp:481] A new leading master (UPID=master@127.0.1.1:5050) is detected
I0825 21:30:01.021353 4106 status_update_manager.cpp:176] Pausing sending status updates
I0825 21:30:01.021385 4105 slave.cpp:684] New master detected at master@127.0.1.1:5050
I0825 21:30:01.022073 4105 slave.cpp:709] No credentials provided. Attempting to register without authentication
E0825 21:30:01.022299 4113 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
ls /mesos
[info_0000000009, info_0000000010, log_replicas]
ls /mesos/info_0000000009
[]
Trying to get '/mesos/info_0000000009' in ZooKeeper
Leading master master@127.0.1.1:5050
2015-08-25 21:30:01,882 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxn@349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14f657dafeb000d, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2015-08-25 21:30:01,884 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxn@1001] - Closed socket connection for client /192.168.0.3:53125 which had sessionid 0x14f657dafeb000d
2015-08-25 21:30:01,952 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.3:53166
Log file created at: 2015/08/27 07:12:56
Running on machine: vvwslave1
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0827 07:12:56.406455 1303 logging.cpp:172] INFO level logging started!
I0827 07:12:56.438398 1303 main.cpp:162] Build: 2015-07-24 10:05:39 by root
I0827 07:12:56.438534 1303 main.cpp:164] Version: 0.23.0
I0827 07:12:56.438634 1303 main.cpp:167] Git tag: 0.23.0
I0827 07:12:56.438733 1303 main.cpp:171] Git SHA: 4ce5475346a0abb7ef4b7ffc9836c5836d7c7a66
I0827 07:12:56.510270 1303 containerizer.cpp:111] Using isolation: posix/cpu,posix/mem
I0827 07:12:56.566021 1329 group.cpp:313] Group process (group(1)@127.0.1.1:5051) connected to ZooKeeper
I0827 07:12:56.566082 1329 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:56.566108 1329 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:56.571959 1303 main.cpp:249] Starting Mesos slave
I0827 07:12:56.587656 1303 slave.cpp:190] Slave started on 1)@127.0.1.1:5051
I0827 07:12:56.587723 1303 slave.cpp:191] Flags at startup: --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_remove_delay="6hrs" --docker_sandbox_directory="/mnt/mesos/sandbox" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://192.168.0.2:2281/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --resource_monitoring_interval="1secs" --revocable_cpu_low_priority="true" --strict="true" --switch_user="true" --version="false" --work_dir="/tmp/mesos"
I0827 07:12:56.592327 1303 slave.cpp:354] Slave resources: cpus(*):2; mem(*):979; disk(*):67653; ports(*):[31000-32000]
I0827 07:12:56.592576 1303 slave.cpp:384] Slave hostname: vvwslave1
I0827 07:12:56.592608 1303 slave.cpp:389] Slave checkpoint: true
I0827 07:12:56.633998 1330 state.cpp:36] Recovering state from '/tmp/mesos/meta'
I0827 07:12:56.644068 1330 status_update_manager.cpp:202] Recovering status update manager
I0827 07:12:56.644907 1330 containerizer.cpp:316] Recovering containerizer
I0827 07:12:56.650073 1330 slave.cpp:4026] Finished recovery
I0827 07:12:56.650527 1330 slave.cpp:4179] Querying resource estimator for oversubscribable resources
I0827 07:12:56.650653 1330 slave.cpp:4193] Received oversubscribable resources from the resource estimator
I0827 07:12:56.657416 1329 detector.cpp:138] Detected a new leader: (id='14')
I0827 07:12:56.657564 1329 group.cpp:656] Trying to get '/mesos/info_0000000014' in ZooKeeper
W0827 07:12:56.659080 1329 detector.cpp:444] Leading master master@127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0827 07:12:56.677889 1329 detector.cpp:481] A new leading master (UPID=master@127.0.1.1:5050) is detected
I0827 07:12:56.677989 1329 slave.cpp:684] New master detected at master@127.0.1.1:5050
I0827 07:12:56.678146 1326 status_update_manager.cpp:176] Pausing sending status updates
I0827 07:12:56.678195 1329 slave.cpp:709] No credentials provided. Attempting to register without authentication
I0827 07:12:56.678239 1329 slave.cpp:720] Detecting new master
I0827 07:12:56.678591 1329 slave.cpp:3087] master@127.0.1.1:5050 exited
W0827 07:12:56.678702 1329 slave.cpp:3090] Master disconnected! Waiting for a new master to be elected
E0827 07:12:56.678460 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
E0827 07:12:57.068922 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
E0827 07:12:58.829129 1332 socket.hpp:107] Shutdown failed on fd=11: Transport endpoint is not connected [107]
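The repeated "Transport endpoint is not connected" errors occur right after the slave detects the leader at master@127.0.1.1:5050, a loopback address that is only reachable from the master's own host. A quick way to confirm which address a slave resolved for the master is to grep its log for the detector lines. This is a minimal sketch that uses a here-doc copy of the lines above in place of the real log file; in practice you would point grep at /var/log/mesos/mesos-slave.INFO (the path implied by --log_dir="/var/log/mesos"):

```shell
# Extract every master address this slave has detected (deduplicated).
# Replace the here-doc with the actual log file on a real slave.
grep -Eo 'master@[0-9.]+:[0-9]+' <<'EOF' | sort -u
I0827 07:12:56.677889 1329 detector.cpp:481] A new leading master (UPID=master@127.0.1.1:5050) is detected
I0827 07:12:56.677989 1329 slave.cpp:684] New master detected at master@127.0.1.1:5050
EOF
```

If this prints a 127.x address, slaves on other machines are dialing their own loopback interface and will never reach the master.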
2015-08-27 07:12:42,672 - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2015-08-27 07:12:42,718 - ERROR [main:QuorumPeerConfig@283] - Invalid configuration, only one server specified (ignoring)
2015-08-27 07:12:42,720 - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 10
2015-08-27 07:12:42,720 - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2015-08-27 07:12:42,721 - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2015-08-27 07:12:42,721 - WARN [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running in standalone mode
2015-08-27 07:12:42,741 - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2015-08-27 07:12:42,765 - ERROR [main:QuorumPeerConfig@283] - Invalid configuration, only one server specified (ignoring)
2015-08-27 07:12:42,765 - INFO [main:ZooKeeperServerMain@95] - Starting server
2015-08-27 07:12:42,776 - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.5--1, built on 06/10/2013 17:26 GMT
2015-08-27 07:12:42,776 - INFO [main:Environment@100] - Server environment:host.name=vvw
2015-08-27 07:12:42,776 - INFO [main:Environment@100] - Server environment:java.version=1.7.0_79
2015-08-27 07:12:42,776 - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2015-08-27 07:12:42,777 - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
2015-08-27 07:12:42,777 - INFO [main:Environment@100] - Server environment:java.class.path=/etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/netty.jar:/usr/share/java/slf4j-api.jar:/usr/share/java/slf4j-log4j12.jar:/usr/share/java/zookeeper.jar
2015-08-27 07:12:42,777 - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2015-08-27 07:12:42,779 - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2015-08-27 07:12:42,779 - INFO [main:Environment@100] - Server environment:java.compiler=<NA>
2015-08-27 07:12:42,779 - INFO [main:Environment@100] - Server environment:os.name=Linux
2015-08-27 07:12:42,779 - INFO [main:Environment@100] - Server environment:os.arch=amd64
2015-08-27 07:12:42,780 - INFO [main:Environment@100] - Server environment:os.version=3.19.0-25-generic
2015-08-27 07:12:42,780 - INFO [main:Environment@100] - Server environment:user.name=zookeeper
2015-08-27 07:12:42,780 - INFO [main:Environment@100] - Server environment:user.home=/var/lib/zookeeper
2015-08-27 07:12:42,780 - INFO [main:Environment@100] - Server environment:user.dir=/
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer@726] - tickTime set to 2000
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer@735] - minSessionTimeout set to -1
2015-08-27 07:12:42,789 - INFO [main:ZooKeeperServer@744] - maxSessionTimeout set to -1
2015-08-27 07:12:42,806 - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2281
2015-08-27 07:12:42,826 - INFO [main:FileSnap@83] - Reading snapshot /var/lib/zookeeper/version-2/snapshot.705
2015-08-27 07:12:42,859 - INFO [main:FileTxnSnapLog@240] - Snapshotting: 0x728 to /var/lib/zookeeper/version-2/snapshot.728
2015-08-27 07:12:44,848 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44500
2015-08-27 07:12:44,857 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44500; will be dropped if server is in r-o mode
2015-08-27 07:12:44,859 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44500
2015-08-27 07:12:44,862 - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.729
2015-08-27 07:12:45,299 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10000 with negotiated timeout 10000 for client /192.168.0.2:44500
2015-08-27 07:12:45,505 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44501
2015-08-27 07:12:45,506 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44501; will be dropped if server is in r-o mode
2015-08-27 07:12:45,506 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44501
2015-08-27 07:12:45,509 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44502
2015-08-27 07:12:45,510 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44502; will be dropped if server is in r-o mode
2015-08-27 07:12:45,510 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44502
2015-08-27 07:12:45,538 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44503
2015-08-27 07:12:45,538 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44504
2015-08-27 07:12:45,538 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44503; will be dropped if server is in r-o mode
2015-08-27 07:12:45,539 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44503
2015-08-27 07:12:45,539 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44504; will be dropped if server is in r-o mode
2015-08-27 07:12:45,539 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44504
2015-08-27 07:12:45,564 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10001 with negotiated timeout 10000 for client /192.168.0.2:44501
2015-08-27 07:12:45,674 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10002 with negotiated timeout 10000 for client /192.168.0.2:44502
2015-08-27 07:12:45,675 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10003 with negotiated timeout 10000 for client /192.168.0.2:44503
2015-08-27 07:12:45,676 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10004 with negotiated timeout 10000 for client /192.168.0.2:44504
2015-08-27 07:12:46,183 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44506
2015-08-27 07:12:46,189 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44506
2015-08-27 07:12:46,232 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10005 with negotiated timeout 10000 for client /192.168.0.2:44506
2015-08-27 07:12:48,195 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44508
2015-08-27 07:12:48,196 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44508
2015-08-27 07:12:48,212 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10006 with negotiated timeout 40000 for client /192.168.0.2:44508
2015-08-27 07:12:49,872 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.2:44509
2015-08-27 07:12:49,873 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.2:44509; will be dropped if server is in r-o mode
2015-08-27 07:12:49,873 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.2:44509
2015-08-27 07:12:49,878 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10007 with negotiated timeout 10000 for client /192.168.0.2:44509
2015-08-27 07:12:56,161 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.3:60436
2015-08-27 07:12:56,161 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@793] - Connection request from old client /192.168.0.3:60436; will be dropped if server is in r-o mode
2015-08-27 07:12:56,161 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2281:ZooKeeperServer@839] - Client attempting to establish new session at /192.168.0.3:60436
2015-08-27 07:12:56,189 - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x14f6cd241e10008 with negotiated timeout 10000 for client /192.168.0.3:60436
I0827 07:12:45.412888 1604 leveldb.cpp:176] Opened db in 567.381081ms
I0827 07:12:45.469497 1604 leveldb.cpp:183] Compacted db in 56.508537ms
I0827 07:12:45.469674 1604 leveldb.cpp:198] Created db iterator in 21452ns
I0827 07:12:45.502590 1604 leveldb.cpp:204] Seeked to beginning of db in 32.834339ms
I0827 07:12:45.502900 1604 leveldb.cpp:273] Iterated through 3 keys in the db in 101809ns
I0827 07:12:45.503026 1604 replica.cpp:744] Replica recovered with log positions 73 -> 74 with 0 holes and 0 unlearned
I0827 07:12:45.507745 1643 log.cpp:238] Attempting to join replica to ZooKeeper group
I0827 07:12:45.507983 1643 recover.cpp:449] Starting replica recovery
I0827 07:12:45.508095 1643 recover.cpp:475] Replica is in VOTING status
I0827 07:12:45.508167 1643 recover.cpp:464] Recover process terminated
I0827 07:12:45.536058 1604 main.cpp:383] Starting Mesos master
I0827 07:12:45.559154 1604 master.cpp:368] Master 20150827-071245-16842879-5050-1604 (vvwmaster) started on 127.0.1.1:5050
I0827 07:12:45.559239 1604 master.cpp:370] Flags at startup: --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_slaves="false" --authenticators="crammd5" --framework_sorter="drf" --help="false" --hostname="vvwmaster" --initialize_driver_logging="true" --log_auto_initialize="true" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_slave_ping_timeouts="5" --port="5050" --quiet="false" --quorum="1" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="5secs" --registry_strict="false" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos" --zk="zk://192.168.0.2:2281/mesos" --zk_session_timeout="10secs"
I0827 07:12:45.559460 1604 master.cpp:417] Master allowing unauthenticated frameworks to register
I0827 07:12:45.559491 1604 master.cpp:422] Master allowing unauthenticated slaves to register
I0827 07:12:45.559587 1604 master.cpp:459] Using default 'crammd5' authenticator
W0827 07:12:45.559619 1604 authenticator.cpp:504] No credentials provided, authentication requests will be refused.
I0827 07:12:45.559909 1604 authenticator.cpp:511] Initializing server SASL
I0827 07:12:45.564357 1642 group.cpp:313] Group process (group(1)@127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.564539 1642 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.564590 1642 group.cpp:385] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0827 07:12:45.675650 1644 group.cpp:313] Group process (group(2)@127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.675717 1644 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0)
I0827 07:12:45.675750 1644 group.cpp:385] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0827 07:12:45.676774 1639 group.cpp:313] Group process (group(3)@127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.676828 1639 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.676857 1639 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:45.678182 1640 group.cpp:313] Group process (group(4)@127.0.1.1:5050) connected to ZooKeeper
I0827 07:12:45.678235 1640 group.cpp:787] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0827 07:12:45.678380 1640 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0827 07:12:45.809567 1645 network.hpp:415] ZooKeeper group memberships changed
I0827 07:12:45.816505 1644 group.cpp:656] Trying to get '/mesos/log_replicas/0000000013' in ZooKeeper
I0827 07:12:45.820705 1645 network.hpp:463] ZooKeeper group PIDs: { log-replica(1)@127.0.1.1:5050 }
I0827 07:12:46.020447 1644 contender.cpp:131] Joining the ZK group
I0827 07:12:46.020498 1639 master.cpp:1420] Successfully attached file '/var/log/mesos/mesos-master.INFO'
I0827 07:12:46.078451 1643 contender.cpp:247] New candidate (id='14') has entered the contest for leadership
I0827 07:12:46.078984 1645 detector.cpp:138] Detected a new leader: (id='14')
I0827 07:12:46.079110 1645 group.cpp:656] Trying to get '/mesos/info_0000000014' in ZooKeeper
W0827 07:12:46.084359 1645 detector.cpp:444] Leading master master@127.0.1.1:5050 is using a Protobuf binary format when registering with ZooKeeper (info): this will be deprecated as of Mesos 0.24 (see MESOS-2340)
I0827 07:12:46.084485 1645 detector.cpp:481] A new leading master (UPID=master@127.0.1.1:5050) is detected
I0827 07:12:46.084553 1645 master.cpp:1481] The newly elected leader is master@127.0.1.1:5050 with id 20150827-071245-16842879-5050-1604
I0827 07:12:46.084653 1645 master.cpp:1494] Elected as the leading master!
I0827 07:12:46.084682 1645 master.cpp:1264] Recovering from registrar
I0827 07:12:46.084812 1645 registrar.cpp:313] Recovering registrar
I0827 07:12:46.085160 1645 log.cpp:661] Attempting to start the writer
I0827 07:12:46.085683 1639 replica.cpp:477] Replica received implicit promise request with proposal 18
I0827 07:12:46.231271 1639 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 145.505945ms
I0827 07:12:46.231402 1639 replica.cpp:345] Persisted promised to 18
I0827 07:12:46.231667 1640 coordinator.cpp:230] Coordinator attemping to fill missing position
I0827 07:12:46.231801 1640 log.cpp:677] Writer started with ending position 74
I0827 07:12:46.232197 1646 leveldb.cpp:438] Reading position from leveldb took 60443ns
I0827 07:12:46.232319 1646 leveldb.cpp:438] Reading position from leveldb took 21312ns
I0827 07:12:46.232934 1646 registrar.cpp:346] Successfully fetched the registry (247B) in 148.019968ms
I0827 07:12:46.233131 1646 registrar.cpp:445] Applied 1 operations in 17888ns; attempting to update the 'registry'
I0827 07:12:46.234346 1640 log.cpp:685] Attempting to append 286 bytes to the log
I0827 07:12:46.234463 1640 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 75
I0827 07:12:46.234748 1645 replica.cpp:511] Replica received write request for position 75
I0827 07:12:46.274888 1645 leveldb.cpp:343] Persisting action (305 bytes) to leveldb took 40.044935ms
I0827 07:12:46.275140 1645 replica.cpp:679] Persisted action at 75
I0827 07:12:46.275503 1646 replica.cpp:658] Replica received learned notice for position 75
I0827 07:12:46.307917 1646 leveldb.cpp:343] Persisting action (307 bytes) to leveldb took 32.320539ms
I0827 07:12:46.308076 1646 replica.cpp:679] Persisted action at 75
I0827 07:12:46.308112 1646 replica.cpp:664] Replica learned APPEND action at position 75
I0827 07:12:46.308668 1646 registrar.cpp:490] Successfully updated the 'registry' in 75.472128ms
I0827 07:12:46.308749 1646 registrar.cpp:376] Successfully recovered registrar
I0827 07:12:46.308888 1646 log.cpp:704] Attempting to truncate the log to 75
I0827 07:12:46.309002 1646 master.cpp:1291] Recovered 1 slaves from the Registry (247B) ; allowing 10mins for slaves to re-register
I0827 07:12:46.309056 1646 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 76
I0827 07:12:46.309252 1646 replica.cpp:511] Replica received write request for position 76
I0827 07:12:46.352067 1646 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 42.749912ms
I0827 07:12:46.352377 1646 replica.cpp:679] Persisted action at 76
I0827 07:12:46.352900 1646 replica.cpp:658] Replica received learned notice for position 76
I0827 07:12:46.407814 1646 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 54.686166ms
I0827 07:12:46.408033 1646 leveldb.cpp:401] Deleting ~2 keys from leveldb took 50800ns
I0827 07:12:46.408068 1646 replica.cpp:679] Persisted action at 76
I0827 07:12:46.408102 1646 replica.cpp:664] Replica learned TRUNCATE action at position 76
I0827 07:12:46.884490 1644 master.cpp:3332] Registering slave at slave(1)@127.0.1.1:5051 (vvw) with id 20150827-071245-16842879-5050-1604-S0
I0827 07:12:46.900085 1644 registrar.cpp:445] Applied 1 operations in 43323ns; attempting to update the 'registry'
I0827 07:12:46.901564 1639 log.cpp:685] Attempting to append 440 bytes to the log
I0827 07:12:46.901736 1639 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 77
I0827 07:12:46.902035 1639 replica.cpp:511] Replica received write request for position 77
I0827 07:12:46.947882 1639 leveldb.cpp:343] Persisting action (459 bytes) to leveldb took 45.777578ms
I0827 07:12:46.948067 1639 replica.cpp:679] Persisted action at 77
I0827 07:12:46.948422 1639 replica.cpp:658] Replica received learned notice for position 77
I0827 07:12:46.992007 1639 leveldb.cpp:343] Persisting action (461 bytes) to leveldb took 43.518061ms
I0827 07:12:46.992187 1639 replica.cpp:679] Persisted action at 77
I0827 07:12:46.992249 1639 replica.cpp:664] Replica learned APPEND action at position 77
I0827 07:12:46.992826 1640 registrar.cpp:490] Successfully updated the 'registry' in 92.466176ms
I0827 07:12:46.992949 1639 log.cpp:704] Attempting to truncate the log to 77
I0827 07:12:46.993027 1639 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 78
I0827 07:12:46.993371 1639 replica.cpp:511] Replica received write request for position 78
I0827 07:12:46.993588 1640 master.cpp:3395] Registered slave 20150827-071245-16842879-5050-1604-S0 at slave(1)@127.0.1.1:5051 (vvw) with cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000]
I0827 07:12:46.993785 1644 hierarchical.hpp:528] Added slave 20150827-071245-16842879-5050-1604-S0 (vvw) with cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000] (allocated: )
I0827 07:12:47.018685 1641 master.cpp:3687] Received update of slave 20150827-071245-16842879-5050-1604-S0 at slave(1)@127.0.1.1:5051 (vvw) with total oversubscribed resources
I0827 07:12:47.018934 1641 hierarchical.hpp:588] Slave 20150827-071245-16842879-5050-1604-S0 (vvw) updated with oversubscribed resources (total: cpus(*):4; mem(*):1846; disk(*):141854; ports(*):[31000-32000], allocated: )
I0827 07:12:47.036170 1639 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 42.72315ms
I0827 07:12:47.036388 1639 replica.cpp:679] Persisted action at 78
Best answer

"But at the mesos page i can see just one master and one slave (same as the master's host present)."

The logs explain this: the leading master registers itself in ZooKeeper as master@127.0.1.1:5050, and 127.0.1.1 is a loopback address (on Debian/Ubuntu, /etc/hosts typically maps the machine's hostname to 127.0.1.1, which Mesos then picks up). Slaves on the other machines therefore try to connect to their own loopback interface and can never reach the master; only the slave running on the master's host can register. You can change the address each daemon binds to and advertises with the --ip flag.

Regarding a Mesos slave on a different host being unable to add itself, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32209335/
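The loopback check can be sketched in a few lines of shell. The restart commands in the trailing comments are assumptions pieced together from the flags visible in the logs (--zk, --quorum, --work_dir and the 192.168.0.x addresses in the ZooKeeper log), not the asker's exact invocations:

```shell
# Any advertised master address in 127.0.0.0/8 (such as 127.0.1.1 from
# /etc/hosts) is reachable only on the master's own host.
advertised="127.0.1.1"   # taken from the "Leading master master@127.0.1.1:5050" log line

case "$advertised" in
  127.*) echo "loopback: unreachable from other hosts, restart with --ip" ;;
  *)     echo "routable: slaves on other hosts should be able to connect" ;;
esac

# Hypothetical restart commands with explicit LAN addresses (adjust per machine):
#   mesos-master --ip=192.168.0.2 --zk=zk://192.168.0.2:2281/mesos --quorum=1 --work_dir=/var/lib/mesos
#   mesos-slave  --ip=192.168.0.3 --master=zk://192.168.0.2:2281/mesos
```

Alternatively, removing or correcting the 127.0.1.1 hostname entry in /etc/hosts on each machine achieves the same result without extra flags.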