
javascript - Scripting parallel iterative deconvolution of multiple timepoints with ImageJ on a large cluster


I have an interesting ImageJ scripting problem I'd like to share. An imaging scientist gave me a dataset with 258 timepoints of 13 "Z-stack" images each, 3,354 TIFF images in total. He has a macro, made with the ImageJ macro recorder, that runs on his Windows machine but takes a very long time. I have access to a very large academic computing cluster where, conceivably, I could request as many nodes as there are timepoints. The input files are the 3,354 TIFF images, named like "img_000000000_ZeissEpiGreen_000.tif", where the nine-digit number increments with the timepoint (1-258) and the three-digit number is the position in the Z-stack (1-13); the other input is a point-spread-function image made from sub-resolution beads. Here is the macro, "iterative_parallel_deconvolution.ijm"; I changed the paths to the ones needed on the cluster.

//******* SET THESE VARIABLES FIRST!  ********
path = "//tmp//images//";
seqFilename = "img_000000000_ZeissEpiGreen_000.tif";
PSFpath = "//tmp//runfiles//20xLWDZeissEpiPSFsinglebeadnoDICprismCROPPED64x64.tif";
numTimepoints = 258;
numZslices = 13;
xyScaling = 0.289; //microns/pixel
zScaling = 10; //microns/z-slice
timeInterval = 300; //seconds
//********************************************

getDateAndTime(year1, month1, dayOfWeek1, dayOfMonth1, hour1, minute1, second1, msec); //to print start and end times
print("Started " + month1 + "/" + dayOfMonth1 + "/" + year1 + " " + hour1 + ":" + minute1 + ":" + second1);

//number of images in sequence
fileList = getFileList(path);
numImages = fileList.length;

//filename and path for saving each timepoint z-stack
pathMinusLastSlash = substring(path, 1, lengthOf(path) - 1);
baseFilenameIndex = lastIndexOf(pathMinusLastSlash, "\\");
baseFilename = substring(pathMinusLastSlash, baseFilenameIndex + 1, lengthOf(pathMinusLastSlash));
saveDir = substring(path, 0, baseFilenameIndex + 2);

//loop to save each timepoint z-stack and deconvolve it
for(t = 0; t < numTimepoints; t++){
time = IJ.pad(t, 9);
run("Image Sequence...", "open=[" + path + seqFilename + "] number=" + numImages + " starting=1 increment=1 scale=100 file=[" + time + "] sort");
run("Properties...", "channels=1 slices=" + numZslices + " frames=1 unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[0 sec] origin=0,0");
filename = baseFilename + "-t" + time + ".tif";
saveAs("tiff", saveDir + filename);
close();

// WPL deconvolution -----------------
pathToBlurredImage = saveDir + filename;
pathToPsf = PSFpath;
pathToDeblurredImage = saveDir + "decon-WPL_" + filename;
boundary = "REFLEXIVE"; //available options: REFLEXIVE, PERIODIC, ZERO
resizing = "AUTO"; // available options: AUTO, MINIMAL, NEXT_POWER_OF_TWO
output = "SAME_AS_SOURCE"; // available options: SAME_AS_SOURCE, BYTE, SHORT, FLOAT
precision = "SINGLE"; //available options: SINGLE, DOUBLE
threshold = "-1"; //if -1, then disabled
maxIters = "5";
nOfThreads = "32";
showIter = "false";
gamma = "0";
filterXY = "1.0";
filterZ = "1.0";
normalize = "false";
logMean = "false";
antiRing = "true";
changeThreshPercent = "0.01";
db = "false";
detectDivergence = "true";
call("edu.emory.mathcs.restoretools.iterative.ParallelIterativeDeconvolution3D.deconvolveWPL", pathToBlurredImage, pathToPsf, pathToDeblurredImage, boundary, resizing, output, precision, threshold, maxIters, nOfThreads, showIter, gamma, filterXY, filterZ, normalize, logMean, antiRing, changeThreshPercent, db, detectDivergence);
}

//save deconvolved timepoints in one TIFF
run("Image Sequence...", "open=["+ saveDir + "decon-WPL_" + baseFilename + "-t000000000.tif] number=999 starting=1 increment=1 scale=100 file=decon-WPL_" + baseFilename + "-t sort");
run("Stack to Hyperstack...", "order=xyczt(default) channels=1 slices=" + numZslices + " frames=" + numTimepoints + " display=Grayscale");
run("Properties...", "channels=1 slices=" + numZslices + " frames=" + numTimepoints + " unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[" + timeInterval + " sec] origin=0,0");
saveAs("tiff", saveDir + "decon-WPL_" + baseFilename + ".tif");
close();

getDateAndTime(year2, month2, dayOfWeek2, dayOfMonth2, hour2, minute2, second2, msec);
print("Ended " + month2 + "/" + dayOfMonth2 + "/" + year2 + " " + hour2 + ":" + minute2 + ":" + second2);

The website for the Parallel Iterative Deconvolution ImageJ plugin is here: https://sites.google.com/site/piotrwendykier/software/deconvolution/paralleliterativedeconvolution

Here is the PBS script I use to submit the job to the cluster, with the command 'qsub -l walltime=24:00:00,nodes=1:ppn=32 -q largemem ./PID3.pbs'. I could have requested up to 40 ppn, but the program requires the thread count to be a power of two.

#PBS -S /bin/bash
#PBS -V
#PBS -N PID_Test
#PBS -k n
#PBS -r n
#PBS -m abe

Xvfb :566 &
export DISPLAY=:566.0 &&

cd /tmp &&

mkdir -p /tmp/runfiles /tmp/images &&

cp /home/rcf-proj/met1/pid1/runfiles/* /tmp/runfiles/ &&
cp /home/rcf-proj/met1/pid1/images/*.tif /tmp/images/ &&

java -Xms512G -Xmx512G -Dplugins.dir=/home/rcf-proj/met1/software/fiji/Fiji.app/plugins/ -cp /home/rcf-proj/met1/software/imagej/ij.jar -jar /home/rcf-proj/met1/software/imagej/ij.jar -macro /tmp/runfiles/iterative_parallel_deconvolution.ijm -batch &&

tar czf /tmp/PIDTest.tar.gz /tmp/images &&

cp /tmp/PIDTest.tar.gz /home/rcf-proj/met1/output/ &&

rm -rf /tmp/images &&
rm -rf /tmp/runfiles &&

exit

We have to use Xvfb so that ImageJ does not try to send images to a display that doesn't exist on the compute node (the display number is arbitrary). The program ran for six hours but produced no output images. Is that because I need to open the images first?

I would like to rework this macro so that I can split off each timepoint and send it to its own node for processing. Any thoughts on how to approach this would be greatly appreciated. The only caveat is that we must use the Parallel Iterative Deconvolution plugin with ImageJ.
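For illustration, one possible shape for such a per-timepoint macro (a rough, untested sketch; the file name deconvolve_one_timepoint.ijm is hypothetical) would be to drop the for-loop and read the timepoint index from the macro argument, so that each cluster job processes exactly one timepoint:

// deconvolve_one_timepoint.ijm (hypothetical name) -- deconvolves a single timepoint
// whose zero-based index is passed on the ImageJ command line as the macro argument
path = "//tmp//images//";
seqFilename = "img_000000000_ZeissEpiGreen_000.tif";
PSFpath = "//tmp//runfiles//20xLWDZeissEpiPSFsinglebeadnoDICprismCROPPED64x64.tif";
numZslices = 13;
xyScaling = 0.289; //microns/pixel
zScaling = 10; //microns/z-slice

arg = getArgument();
if (arg == "") exit("No timepoint index supplied");
t = parseInt(arg);
time = IJ.pad(t, 9);

fileList = getFileList(path);
numImages = fileList.length;

// open only the Z-slices whose file names contain this timepoint index
run("Image Sequence...", "open=[" + path + seqFilename + "] number=" + numImages + " starting=1 increment=1 scale=100 file=[" + time + "] sort");
run("Properties...", "channels=1 slices=" + numZslices + " frames=1 unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[0 sec] origin=0,0");
// ...save the stack and call deconvolveWPL exactly as in the body of the for-loop above...

Each node would then run ImageJ on this macro with a different index, for example through a scheduler job array if the cluster supports it (on Torque, qsub -t 0-257 makes the task index available as $PBS_ARRAYID, which the job script could pass to ImageJ as the macro argument). The final step that assembles all deconvolved timepoints into one hyperstack would run as a separate job after all array tasks have finished.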

Thanks!

Best Answer

Regarding the use of Xvfb: if you use Fiji's ImageJ launcher (most likely ImageJ-linux64 in your case), you can use its --headless option, which takes care of all the GUI calls embedded in ImageJ and has been tested by many people running ImageJ in cluster environments.

That way you would also benefit from seeing all the output generated by, e.g., IJ.log() calls in your macro, which I'm not sure is the case with the way you are currently invoking ImageJ.

You could also consider putting setBatchMode(true) at the beginning of your macro, although I'm not sure whether that makes any difference when running in --headless mode. See, e.g., the BatchModeTest.txt example macro for details.
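For reference, that change amounts to a single call at the top of the macro (minimal sketch):

setBatchMode(true);   // run without updating or displaying image windows
// ... rest of the macro unchanged ...
setBatchMode(false);  // optional: exit batch mode again at the very end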

Since you intend to run this on a cluster, it might also be worth looking at the Fiji Archipelago wiki page, which gives plenty of details and hints on how to achieve this.

Cheers, Niko

This question was originally asked on Stack Overflow: https://stackoverflow.com/questions/26920573/
