In a SLURM cluster I am submitting a shell script that calls a Python script (both scripts can be found below). When the shell script executes, it gets to the point where the Python script is called, but then nothing happens: there is no output, no error message, and the SLURM job keeps running.

I assume the entire content of the Python script is not relevant (but I included it anyway for completeness). For debugging purposes, I inserted a print("script started") line at the very beginning to see whether it runs, but it doesn't. The last thing I see in the output is moved to directory.

I tried calling a test.py script containing print("test") right before it, and that one executes normally.

What could be the reason that the Python script does not start, and how can I fix it?

Edit: As suggested by the user jakub, changing print("script started") to print("script started", flush=True) makes it print successfully. Adding more of these statements shows that the script actually runs perfectly fine; it just doesn't output anything. Putting the same statement inside a for loop that executes continuously also makes all the previously missing print() statements appear.

The question then becomes: why do the print() statements need flush=True in this script but not in other scripts?
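For reference, the change that made the output appear is just the flush argument (a minimal sketch; the loop and sleep are made up for illustration and are not part of the pipeline script below):

import time

print("script started", flush=True)          # appears in the SLURM output file immediately

for step in range(10):
    time.sleep(1)                            # stand-in for the pipeline's long-running work
    print("step", step, "done", flush=True)  # shows up while the job is still running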
Shell script:
#!/bin/bash
#SBATCH --mail-user=lukas.baehler@pathology.unibe.ch
#SBATCH --mail-type=end,fail
#SBATCH --output=out-ncl
#SBATCH --error=err-ncl
#SBATCH --job-name="Mask RCNN nucleus training and detection"
#SBATCH --time=24:00:00
#SBATCH --partition=gpu
#SBATCH --mem-per-cpu=64G
#SBATCH --gres=gpu:gtx1080ti:1
#SBATCH --constraint=rtx2080
conda init bash
source ~/.bashrc
conda activate nucl
cd MRCNN/samples/nucleus
echo "moved to directory"
python nucleus-pipeline2.py splitTMA
echo "Split TMAs"
Python script:
print("script started")
if __name__ == '__main__':
import argparse
import os
# Copied from later in script because the argparse part was moved up and is
# needed as default in --logs.
ROOT_DIR = os.path.abspath("../../")
DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
# Parse command line arguments
parser = argparse.ArgumentParser(
description='Mask R-CNN for nuclei counting and segmentation')
parser.add_argument("command",
metavar="<command>",
help="'splitTMA', 'splitSpot', 'structure', 'train' or 'detect'")
parser.add_argument('--dataset', required=False,
metavar="/path/to/dataset/",
help='Root directory of the dataset')
parser.add_argument('--weights', required=False,
metavar="/path/to/weights.h5",
help="Path to weights .h5 file or 'coco'")
parser.add_argument('--logs', required=False,
default=DEFAULT_LOGS_DIR,
metavar="/path/to/logs/",
help='Logs and checkpoints directory (default=logs/)')
parser.add_argument('--subset', required=False,
metavar="Dataset sub-directory",
help="Subset of dataset to run prediction on")
# Own arguments
parser.add_argument("--input", required=False,
metavar="path/to/input/folder",
help="Optionally specify the input directory. Should only be used with splitTMA, splitSpot and structure.")
parser.add_argument("--output", required=False,
metavar="path/to/output/folder",
help="Optionally specify the output directory. Should only be used with splitTMA, splitSpot and structure.")
args = parser.parse_args()
assert args.command in ["train", "detect", "splitTMA", "splitSpot", "structure"], "Must set command."
################################################################################
# splitTMA
################################################################################
# The original script for this is tma-spot.py
# Splits a TMA into images of its spots.
if args.command == "splitTMA":
import os
import cv2
import numpy as np
from openslide import open_slide
from matplotlib import pyplot as plt
###################
# CONFIGURATION
# Defines the level of resolution for spot recognition
level = 7 # Default 7
# Defines the level of resolution to use for the new images
newLevel = 0 # Default 0 (higest resolution)
# Defines the spot size in pixels (has to be changed if newLevel is changed)
SpotSize = 3072 # Default 3500
# # Shift values are for alignment of the two slides.
# shiftX = 445 - 10
# shiftY = -64 + 10
print("Using the following parameters:\nlevel = {}\nnewLevel = {}\nSpotSize = {}".format(level, newLevel, SpotSize))
###################
# NUCLEUS_DIR = "MRCNN/samples/nucleus"
NUCLEUS_DIR = os.path.abspath("")
os.chdir(NUCLEUS_DIR)
if args.input:
INPUT_DIR = args.input
else:
INPUT_DIR = "slides"
print("Using '{}' as input folder.".format(INPUT_DIR))
if args.output:
OUTPUT_DIR = args.output
else:
OUTPUT_DIR = "spots"
print("Using '{}' as output folder.".format(OUTPUT_DIR))
# mrxs_filenames = [filename for filename in os.listdir("slides") if filename[-5:] == ".mrxs"]
mrxs_filenames = [filename for filename in os.listdir(INPUT_DIR) if filename[-5:] == ".mrxs"]
print("\nFound {} MIRAX files.".format(len(mrxs_filenames)))
# Loop through all .mrxs files.
for filename in mrxs_filenames:
print("\nReading {}\n".format(filename))
# filename = mrxs_filenames[0]
img = open_slide("{}/{}".format(INPUT_DIR, filename))
# # Use if you want to to see the resolution of all the levels.
# for i in range(img.level_count):
# print("Level", i, "dimension", img.level_dimensions[i],"down factor",img.level_downsamples[i])
# Use the level set previously and read the slide as an RGB image.
x_img = img.read_region((0,0), level, img.level_dimensions[level])
x_img = np.array(x_img)
rgb = np.zeros_like(x_img)
rgb[x_img==0] = 255
rgba_im = cv2.add(rgb,x_img)
imgLevel = cv2.cvtColor(rgba_im,cv2.COLOR_RGBA2RGB)
# plt.imsave("./Output/level" + str(level) + ".png", imgLevel) # <---------- USE FOR DEBUGGING
# Converts the image to gray levels and applies a gussian blur.
gray = cv2.cvtColor(imgLevel, cv2.COLOR_BGR2GRAY)
gray_blur = cv2.GaussianBlur(gray, (3, 3), 0)
# cv2.imwrite( "./Output/gray.png", gray_blur) # <-------------------------- USE FOR DEBUGGING
# Use an Otsu binarization to generate a mask for where tissue is.
ret3, thresh = cv2.threshold(gray_blur, 8, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = ~thresh
cont_img = thresh.copy()
# cv2.imwrite( "spots/cd3/contour.png", cont_img) # <------------------------ USE FOR DEBUGGING
# Finds the contours of the mask generated.
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Loop through all contours
spot_nr = 0
for cnt in contours:
# Decide based on the area of the contour if it is a spot
area = cv2.contourArea(cnt)
spotInfo = []
x, y, w, h = cv2.boundingRect(cnt)
if area < 100 or area > 2000:
spotInfo.append([-1, x, y, w, h])
continue
if len(cnt) < 5:
spotInfo.append([-1, x, y, w, h])
continue
# Calculate the center of the spot
centerX = x + int(w/2)
centerY = y + int(h/2)
# Calculate how much it needs to be scaled
factorOld = img.level_downsamples[level]
factorNew = img.level_downsamples[newLevel]
# Read the spot region
spot = img.read_region((int(centerX * factorOld)-int(SpotSize/2),
int(centerY * factorOld)-int(SpotSize/2)),
newLevel, (SpotSize, SpotSize))
spot = cv2.cvtColor(np.array(spot), cv2.COLOR_RGBA2RGB)
# Create directory and save the image
if not os.path.isdir("{}/{}".format(OUTPUT_DIR, filename[:-5])):
os.makedirs("{}/{}".format(OUTPUT_DIR, filename[:-5]))
spot_name = "{0}/{1}/{1}-{2}.png".format(OUTPUT_DIR, filename[:-5],str(spot_nr).zfill(3))
plt.imsave(spot_name, spot)
spot_nr += 1
print("Spot {} saved - Center X and Y: {}, {}".format(spot_nr, centerX, centerY))
exit()
################################################################################
# splitSpot
################################################################################
# This is copied from spot-annotation.py
# Splits spots into tiles
if args.command == "splitSpot":
import os
import sys
import argparse
import re
import numpy as np
import cv2
from matplotlib import pyplot as plt
# VARIABLES
# Change the resolution of the tiles here. Note the image resolution
# must be an integer multiple of the tile resolutions (both dimensions).
tile_resolution = [768, 768]
# NUCLEUS_DIR = "MRCNN/samples/nucleus"
NUCLEUS_DIR = os.path.abspath("")
os.chdir(NUCLEUS_DIR)
if args.input:
INPUT_DIR = args.input
else:
INPUT_DIR = "spots"
print("\nUsing '{}' as input folder.".format(INPUT_DIR))
if args.output:
OUTPUT_DIR = args.output
else:
OUTPUT_DIR = "tiles"
print("Using '{}' as output folder.".format(OUTPUT_DIR))
# EXECUTION
TMA_folders = os.listdir(INPUT_DIR)
spot_names = []
spot_count = 0
for name in TMA_folders:
spot_names.append(os.listdir("{}/{}".format(INPUT_DIR, name)))
spot_count += len(spot_names[-1])
print("\nFound {} TMA folders with a total of {} spot images.".format(len(TMA_folders), spot_count))
for a, TMA in enumerate(TMA_folders):
for b, spot in enumerate(spot_names[a]):
print("TMA: {}/{} - Spot: {}/{}".format(a+1, len(TMA_folders), b+1, len(spot_names[a])), end="\r")
# Read the image
img = cv2.imread("{}/{}/{}".format(INPUT_DIR,TMA, spot))
# Calculate how many tiles will be produced
tilesX = img.shape[0]/tile_resolution[0]
tilesY = img.shape[1]/tile_resolution[1]
assert (tilesX == int(tilesX) and tilesY == int(tilesY)), "Image resolution is not an integer multiple of the tile resolution."
tilesX, tilesY = int(tilesX), int(tilesY)
# Create the np array that will hold the tiles
tiles = np.zeros([tilesY,tilesX,tile_resolution[0],tile_resolution[1],3])
# Loop through all tiles and store them in tiles
for i in range(tilesX):
for j in range(tilesY):
tiles[j,i] = img[i*tile_resolution[0]:(i+1)*tile_resolution[0],
j*tile_resolution[1]:(j+1)*tile_resolution[1]]
tiles = tiles.astype("uint8")
# print("\nImage was split into {} tiles.".format(tiles.shape[0]*tiles.shape[1]))
# Save all the tiles
for x in range(tiles.shape[0]):
for y in range(tiles.shape[1]):
# Displays progression
# print("Saving {}/{} images...".format(str(x*tiles.shape[0]+y+1),tiles.shape[0]*tiles.shape[1]), end="\r")
# Using the plt.imsave() gives alterations in color which is
# presumably bad. Using cv2.imwrite() is also ca. 10 times faster.
imdir = "{}/{}/{}".format(OUTPUT_DIR, TMA, spot[:-4])
imname = "{}-{}-{}.png".format(spot[:-4], str(x).zfill(2), str(y).zfill(2))
if not os.path.isdir(imdir):
os.makedirs(imdir)
cv2.imwrite("{}/{}".format(imdir, imname), tiles[x,y])
print("\nSaved images in {} as [spot_name]-x-y.png.".format(OUTPUT_DIR))
exit()
################################################################################
# Prepare Data Structure
################################################################################
# Adapted from prepare-data-structure.py
# Creates the data structure required for the network
if args.command == "structure":
import os
from shutil import copyfile
NUCLEUS_DIR = os.path.abspath("")
os.chdir(NUCLEUS_DIR)
# Setting input and output directories
if args.input:
INPUT_DIR = args.input
else:
INPUT_DIR = "tiles"
print("\nUsing '{}' as input folder.".format(INPUT_DIR))
if args.output:
OUTPUT_DIR = args.output
else:
OUTPUT_DIR = "data"
print("Using '{}' as output folder.".format(OUTPUT_DIR))
# Creates a list with the paths of all tiles. Also stores just the
# filename itself with and without file extension
file_names = []
for path,_,files in os.walk(INPUT_DIR):
for f in files:
file_names.append(["{}/{}".format(path, f), f, f[:-4]])
print("\nFound {} images.".format(len(file_names)))
assert file_names != [], "No images found in input folder."
# The dataset needs to be stored inside another folder (default "own_data")
subset = "own_data"
# For each file creates the appropriate sub-folders and copies the file.
skip_count = 0
for i,info in enumerate(file_names):
print("Saving {}/{} images.".format(i+1, len(file_names)), end="\r")
dirname = "{}/{}/{}/images".format(OUTPUT_DIR, subset, info[2])
try:
os.makedirs(dirname)
except:
skip_count += 1
continue
copyfile(info[0], "{}/{}".format(dirname, info[1]))
print("\n\nSaved dataset in {}/{}".format(OUTPUT_DIR, subset))
if skip_count > 0:
print("Skipped {} files because they already existed.".format(skip_count))
print("")
exit()
Best Answer
Python buffers stdin, stdout, and stderr by default. print() writes to stdout by default, so you will see this buffering behavior.

From https://stackoverflow.com/a/14258511/5666087 :

Python opens the stdin, -out and -error streams in a buffered mode; it'll read or write in larger chunks, keeping data in memory until a threshold is reached.

You can force this buffer to be flushed by passing flush=True to print. See the documentation for more information. If you have multiple print statements in succession, you need only use flush=True on the last one.
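As a minimal sketch of that last point (the messages and sleep are made up for illustration):

import time

for i in range(3):
    print("working on step", i)  # block-buffered while stdout is redirected to the SLURM output file
    time.sleep(1)
print("all done", flush=True)    # flushing here also writes out every line buffered above

Alternatively, the interpreter can be run unbuffered so that no individual print needs flush=True: invoking the script as python -u nucleus-pipeline2.py splitTMA in the batch script, or exporting PYTHONUNBUFFERED=1, has the same effect. Both are standard CPython options, though they go beyond what the original answer mentioned.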
Regarding "python - SLURM batch script doesn't execute Python script, returns no error message and doesn't stop running", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63414318/