I am trying to run Spark 1.5 on Mesos in cluster mode. I am able to start the dispatcher and run spark-submit. But when I do so, the Spark driver fails with the following:
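For reference, a cluster-mode submission of roughly this shape would produce the log below. This is only a sketch, since the question does not include the actual command: the dispatcher host, port, and main class name are assumptions, while the jar path is taken from the fetcher log.

spark-submit \
  --class Test \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  /home/optimus.prime/Test.jar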
I1111 16:21:33.515130 25325 fetcher.cpp:414] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/2bbe0c3b-433b-45e0-938b-f4d4532df129-S29","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"\/home\/optimus.prime\/Test.jar"}}],"sandbox_directory":"\/tmp\/mesos\/slaves\/2bbe0c3b-433b-45e0-938b-f4d4532df129-S29\/frameworks\/2bbe0c3b-433b-45e0-938b-f4d4532df129-0114\/executors\/driver-20151111162132-0036\/runs\/f0e8f4d7-35cb-4b73-bb5f-1112de2d8156"}
I1111 16:21:33.516376 25325 fetcher.cpp:369] Fetching URI '/home/optimus.prime/Test.jar'
I1111 16:21:33.516388 25325 fetcher.cpp:243] Fetching directly into the sandbox directory
I1111 16:21:33.516407 25325 fetcher.cpp:180] Fetching URI '/home/optimus.prime/Test.jar'
I1111 16:21:33.516417 25325 fetcher.cpp:160] Copying resource with command:cp '/home/optimus.prime/Test.jar' '/tmp/mesos/slaves/2bbe0c3b-433b-45e0-938b-f4d4532df129-S29/frameworks/2bbe0c3b-433b-45e0-938b-f4d4532df129-0114/executors/driver-20151111162132-0036/runs/f0e8f4d7-35cb-4b73-bb5f-1112de2d8156/Test.jar'
W1111 16:21:33.619190 25325 fetcher.cpp:265] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: /home/optimus.prime/Test.jar
I1111 16:21:33.619221 25325 fetcher.cpp:446] Fetched '/home/optimus.prime/Test.jar' to '/tmp/mesos/slaves/2bbe0c3b-433b-45e0-938b-f4d4532df129-S29/frameworks/2bbe0c3b-433b-45e0-938b-f4d4532df129-0114/executors/driver-20151111162132-0036/runs/f0e8f4d7-35cb-4b73-bb5f-1112de2d8156/Test.jar'
I1111 16:21:33.769359 25335 exec.cpp:134] Version: 0.25.0
I1111 16:21:33.774183 25341 exec.cpp:208] Executor registered on slave 2bbe0c3b-433b-45e0-938b-f4d4532df129-S29
WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
15/11/11 16:21:34 INFO SparkContext: Running Spark version 1.5.1
15/11/11 16:21:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/11 16:21:35 INFO SecurityManager: Changing view acls to: root
15/11/11 16:21:35 INFO SecurityManager: Changing modify acls to: root
15/11/11 16:21:35 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/11/11 16:21:36 INFO Slf4jLogger: Slf4jLogger started
15/11/11 16:21:36 INFO Remoting: Starting remoting
15/11/11 16:21:36 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.241.10.12:36818]
15/11/11 16:21:36 INFO Utils: Successfully started service 'sparkDriver' on port 36818.
15/11/11 16:21:36 INFO SparkEnv: Registering MapOutputTracker
15/11/11 16:21:36 INFO SparkEnv: Registering BlockManagerMaster
15/11/11 16:21:37 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-2e733585-81ae-45ad-b81d-f2b977e38153
15/11/11 16:21:37 INFO MemoryStore: MemoryStore started with capacity 1069.1 MB
15/11/11 16:21:37 INFO HttpFileServer: HTTP File server directory is /tmp/spark-bbd7944b-7ffc-4911-a51b-5bed4e174fad/httpd-f94199aa-972d-4724-ad9e-f237401c6bab
15/11/11 16:21:37 INFO HttpServer: Starting HTTP Server
15/11/11 16:21:37 INFO Utils: Successfully started service 'HTTP file server' on port 53947.
15/11/11 16:21:37 INFO SparkEnv: Registering OutputCommitCoordinator
15/11/11 16:21:37 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/11 16:21:37 INFO SparkUI: Started SparkUI at http://10.241.10.12:4040
15/11/11 16:21:37 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/Test.jar at http://10.241.10.12:53947/jars/Test.jar with timestamp 1447258897676
15/11/11 16:21:37 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
I1111 16:21:37.906981 96 sched.cpp:164] Version: 0.25.0
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@716: Client environment:host.name=mesos-slaves-spark-bjrg
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.0-33-generic
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@725: Client environment:os.version=#38~14.04.1-Ubuntu SMP Fri Nov 6 18:17:28 UTC 2015
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2015-11-11 16:21:37,907:9(0x7f67d2d3c700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2015-11-11 16:21:37,908:9(0x7f67d2d3c700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/spark
2015-11-11 16:21:37,908:9(0x7f67d2d3c700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=10.241.10.3:2181,10.241.10.4:2181,110.241.10.5:2181 sessionTimeout=10000 watcher=0x7f67dc7e3600 sessionId=0 sessionPasswd=<null> context=0x7f67ec021650 flags=0
2015-11-11 16:21:37,915:9(0x7f67d1438700):ZOO_INFO@check_events@1703: initiated connection to server [10.241.10.3:2181]
2015-11-11 16:21:37,917:9(0x7f67d1438700):ZOO_INFO@check_events@1750: session establishment complete on server [10.241.10.3:2181], sessionId=0x150a0c4f8a720bd, negotiated timeout=10000
I1111 16:21:37.917933 91 group.cpp:331] Group process (group(1)@10.241.10.12:59519) connected to ZooKeeper
I1111 16:21:37.918011 91 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1111 16:21:37.918088 91 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I1111 16:21:37.919067 91 detector.cpp:156] Detected a new leader: (id='11')
I1111 16:21:37.919288 91 group.cpp:674] Trying to get '/mesos/json.info_0000000011' in ZooKeeper
I1111 16:21:37.919922 91 detector.cpp:481] A new leading master (UPID=master@10.241.10.4:5050) is detected
I1111 16:21:37.920075 91 sched.cpp:262] New master detected at master@10.241.10.4:5050
I1111 16:21:37.920300 91 sched.cpp:272] No credentials provided. Attempting to register without authentication
I1111 16:21:37.926208 88 sched.cpp:641] Framework registered with 2bbe0c3b-433b-45e0-938b-f4d4532df129-0163
15/11/11 16:21:37 INFO MesosSchedulerBackend: Registered as framework ID 2bbe0c3b-433b-45e0-938b-f4d4532df129-0163
15/11/11 16:21:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57551.
15/11/11 16:21:38 INFO NettyBlockTransferService: Server created on 57551
15/11/11 16:21:38 INFO BlockManagerMaster: Trying to register BlockManager
15/11/11 16:21:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.241.10.12:57551 with 1069.1 MB RAM, BlockManagerId(driver, 10.241.10.12, 57551)
15/11/11 16:21:38 INFO BlockManagerMaster: Registered BlockManager
15/11/11 16:21:39 INFO SparkContext: Starting job: sumApprox at Test.scala:21
15/11/11 16:21:39 INFO DAGScheduler: Got job 0 (sumApprox at Test.scala:21) with 8 output partitions
15/11/11 16:21:39 INFO DAGScheduler: Final stage: ResultStage 0(sumApprox at Test.scala:21)
15/11/11 16:21:39 INFO DAGScheduler: Parents of final stage: List()
15/11/11 16:21:39 INFO DAGScheduler: Missing parents: List()
15/11/11 16:21:39 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at numericRDDToDoubleRDDFunctions at Test.scala:21), which has no missing parents
15/11/11 16:21:39 INFO MemoryStore: ensureFreeSpace(1760) called with curMem=0, maxMem=1120995901
15/11/11 16:21:39 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1760.0 B, free 1069.1 MB)
15/11/11 16:21:39 INFO MemoryStore: ensureFreeSpace(1151) called with curMem=1760, maxMem=1120995901
15/11/11 16:21:39 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1151.0 B, free 1069.1 MB)
15/11/11 16:21:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.241.10.12:57551 (size: 1151.0 B, free: 1069.1 MB)
15/11/11 16:21:39 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
15/11/11 16:21:39 INFO DAGScheduler: Submitting 8 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at numericRDDToDoubleRDDFunctions at Test.scala:21)
15/11/11 16:21:39 INFO TaskSchedulerImpl: Adding task set 0.0 with 8 tasks
15/11/11 16:21:39 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.241.10.15, PROCESS_LOCAL, 2053 bytes)
15/11/11 16:21:39 INFO TaskSetManager: Re-queueing tasks for 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from TaskSet 0.0
15/11/11 16:21:39 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 10.241.10.15): ExecutorLostFailure (executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 lost)
15/11/11 16:21:39 INFO DAGScheduler: Executor lost: 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 (epoch 0)
15/11/11 16:21:39 INFO BlockManagerMasterEndpoint: Trying to remove executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from BlockManagerMaster.
15/11/11 16:21:39 INFO BlockManagerMaster: Removed 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 successfully in removeExecutor
15/11/11 16:21:39 INFO DAGScheduler: Host added was in lost list earlier: 10.241.10.15
15/11/11 16:21:39 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, 10.241.10.15, PROCESS_LOCAL, 2053 bytes)
15/11/11 16:21:40 INFO TaskSetManager: Re-queueing tasks for 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from TaskSet 0.0
15/11/11 16:21:40 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1, 10.241.10.15): ExecutorLostFailure (executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 lost)
15/11/11 16:21:40 INFO DAGScheduler: Executor lost: 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 (epoch 1)
15/11/11 16:21:40 INFO BlockManagerMasterEndpoint: Trying to remove executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from BlockManagerMaster.
15/11/11 16:21:40 INFO BlockManagerMaster: Removed 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 successfully in removeExecutor
15/11/11 16:21:40 INFO DAGScheduler: Host added was in lost list earlier: 10.241.10.15
15/11/11 16:21:40 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, 10.241.10.15, PROCESS_LOCAL, 2053 bytes)
15/11/11 16:21:40 INFO TaskSetManager: Re-queueing tasks for 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from TaskSet 0.0
15/11/11 16:21:40 WARN TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2, 10.241.10.15): ExecutorLostFailure (executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 lost)
15/11/11 16:21:40 INFO DAGScheduler: Executor lost: 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 (epoch 2)
15/11/11 16:21:40 INFO BlockManagerMasterEndpoint: Trying to remove executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from BlockManagerMaster.
15/11/11 16:21:40 INFO BlockManagerMaster: Removed 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 successfully in removeExecutor
15/11/11 16:21:40 INFO DAGScheduler: Host added was in lost list earlier: 10.241.10.15
15/11/11 16:21:40 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, 10.241.10.15, PROCESS_LOCAL, 2053 bytes)
15/11/11 16:21:40 INFO TaskSetManager: Re-queueing tasks for 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from TaskSet 0.0
15/11/11 16:21:40 WARN TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3, 10.241.10.15): ExecutorLostFailure (executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 lost)
15/11/11 16:21:40 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
15/11/11 16:21:40 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/11/11 16:21:40 INFO TaskSchedulerImpl: Cancelling stage 0
15/11/11 16:21:40 INFO DAGScheduler: ResultStage 0 (sumApprox at Test.scala:21) failed in 0.713 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.241.10.15): ExecutorLostFailure (executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/11/11 16:21:40 INFO DAGScheduler: Executor lost: 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 (epoch 3)
15/11/11 16:21:40 INFO SparkContext: Invoking stop() from shutdown hook
15/11/11 16:21:40 INFO BlockManagerMasterEndpoint: Trying to remove executor 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 from BlockManagerMaster.
15/11/11 16:21:40 INFO BlockManagerMaster: Removed 2bbe0c3b-433b-45e0-938b-f4d4532df129-S31 successfully in removeExecutor
15/11/11 16:21:40 INFO DAGScheduler: Host added was in lost list earlier: 10.241.10.15
15/11/11 16:21:40 INFO SparkUI: Stopped Spark web UI at http://10.241.10.12:4040
15/11/11 16:21:40 INFO DAGScheduler: Stopping DAGScheduler
I1111 16:21:40.447157 108 sched.cpp:1771] Asked to stop the driver
I1111 16:21:40.447325 87 sched.cpp:1040] Stopping framework '2bbe0c3b-433b-45e0-938b-f4d4532df129-0163'
15/11/11 16:21:40 INFO MesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
15/11/11 16:21:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/11/11 16:21:40 INFO MemoryStore: MemoryStore cleared
15/11/11 16:21:40 INFO BlockManager: BlockManager stopped
15/11/11 16:21:40 INFO BlockManagerMaster: BlockManagerMaster stopped
15/11/11 16:21:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/11/11 16:21:40 INFO SparkContext: Successfully stopped SparkContext
15/11/11 16:21:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/11/11 16:21:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/11/11 16:21:40 INFO ShutdownHookManager: Shutdown hook called
15/11/11 16:21:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-bbd7944b-7ffc-4911-a51b-5bed4e174fad
root@bfa1a77de2af:/opt/spark# exit
exit
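For context, the failing job is the sumApprox call at Test.scala:21 referenced in the log. The actual source is not shown in the question; a minimal Scala job of that shape (a hypothetical reconstruction, consistent with the 8 output partitions and the numericRDDToDoubleRDDFunctions conversion in the log) would be:

import org.apache.spark.{SparkConf, SparkContext}

object Test {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Test"))
    // sumApprox on a numeric RDD goes through the implicit
    // numericRDDToDoubleRDDFunctions conversion seen in the log.
    val approx = sc.parallelize(1L to 1000000L, 8).sumApprox(timeout = 5000)
    println(approx.getFinalValue())
    sc.stop()
  }
}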
Best Answer
I ran into a similar problem and arrived at a cause and a fix through trial and error. I may not be able to give the "real" root cause, but trying the following should help you resolve it.
Try starting spark-shell with explicit memory and core parameters:
spark-shell \
  --driver-memory=2g \
  --executor-memory=7g \
  --num-executors=8 \
  --executor-cores=4 \
  --conf "spark.storage.memoryFraction=1" \
  --conf "spark.akka.frameSize=200" \
  --conf "spark.default.parallelism=100" \
  --conf "spark.core.connection.ack.wait.timeout=600" \
  --conf "spark.yarn.executor.memoryOverhead=2048" \
  --conf "spark.yarn.driver.memoryOverhead=400"

A few notes on these settings:
- spark.storage.memoryFraction=1 is the important one.
- Keep spark.akka.frameSize sufficiently high; anything above 100 is a good idea.
- spark.yarn.executor.memoryOverhead (in MB) is not really meaningful for the shell, but is a good thing to set for spark-submit.
- spark.yarn.driver.memoryOverhead (in MB, minimum 384) likewise applies to spark-submit rather than the shell.
Regarding "apache-spark - Spark executor lost failure", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33655090/