
python - split bams by cluster, then merge bams by cluster using checkpoints

Reposted. Author: 太空宇宙. Updated: 2023-11-04 04:08:08

I have three single-cell bam files from 3 different samples, and I need to split each of them into smaller bams. Then I need to merge the bam files from different samples that belong to the same cluster. I tried using a checkpoint but got a bit lost. https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html

This is a continuation of a question I posted earlier: split bam files to (variable) pre-defined number of small bam files depending on the sample

SAMPLE_cluster = { "SampleA": [ "1", "2", "3" ], "SampleB": [ "1" ], "SampleC": [ "1", "2" ] }

CLUSTERS = []
for sample in SAMPLE_cluster:
    CLUSTERS.extend(SAMPLE_cluster[sample])
CLUSTERS = sorted(set(CLUSTERS))

rule all:
    input: expand("01merged_bam/{cluster_id}.bam", cluster_id = CLUSTERS)

checkpoint split_bam:
    input: "{sample}.bam"
    output: directory("01split_bam/{sample}/")
    shell:
        """
        split_bam.sh {input}
        """
## split_bam.sh writes the split files to "01split_bam/{sample}/{sample}_{cluster_id}.bam"

def merge_bam_input(wildcards):
    checkpoint_output = checkpoints.split_bam.get(**wildcards).output[0]
    return expand("01split_bam/{sample}/{sample}_{{cluster_id}}.bam", \
        sample = glob_wildcards(os.path.join(checkpoint_output, "{sample}_{cluster_id}.bam")).sample)


rule merge_bam_per_cluster:
    input: merge_bam_input
    output: "01merged_bam/{cluster_id}.bam"
    log: "00log/{cluster_id}.merge_bam.log"
    threads: 2
    shell:
        """
        samtools merge -@ {threads} -r {output} {input}
        """


Depending on the cluster number, the input of rule merge_bam_per_cluster changes:

For example, for cluster 1: "01split_bam/SampleA/SampleA_1.bam", "01split_bam/SampleB/SampleB_1.bam", "01split_bam/SampleC/SampleC_1.bam".

For cluster 2: "01split_bam/SampleA/SampleA_2.bam", "01split_bam/SampleC/SampleC_2.bam".

For cluster 3: "01split_bam/SampleA/SampleA_3.bam".
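For reference, the per-cluster selection that the input function is meant to perform can be checked in plain Python, independent of Snakemake. This is only a sketch of the filtering logic; `files_for_cluster` is a hypothetical helper, not part of the Snakefile:

```python
def files_for_cluster(split_files, cluster_id):
    """Keep only the split bams whose name ends in _<cluster_id>.bam."""
    suffix = "_%s.bam" % cluster_id
    return sorted(f for f in split_files if f.endswith(suffix))

# the files split_bam.sh would produce for the three samples above
split_files = [
    "01split_bam/SampleA/SampleA_1.bam",
    "01split_bam/SampleA/SampleA_2.bam",
    "01split_bam/SampleA/SampleA_3.bam",
    "01split_bam/SampleB/SampleB_1.bam",
    "01split_bam/SampleC/SampleC_1.bam",
    "01split_bam/SampleC/SampleC_2.bam",
]

print(files_for_cluster(split_files, "1"))
```

This reproduces the three example lists: cluster 1 matches all three samples, cluster 2 matches SampleA and SampleC, and cluster 3 matches only SampleA.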

Best Answer

I decided not to use a checkpoint and instead used an input function to get the input.


SAMPLE_cluster = { "SampleA": [ "1", "2", "3" ], "SampleB": [ "1" ], "SampleC": [ "1", "2" ] }

# reverse the mapping
cluster_sample = {'1': ['SampleA', 'SampleB', 'SampleC'], '2': ['SampleA', 'SampleC'], '3': ['SampleA']}
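Rather than hand-writing the reversed mapping, it can be derived from SAMPLE_cluster directly, which avoids typos when the sample-to-cluster assignment changes. A minimal sketch:

```python
from collections import defaultdict

SAMPLE_cluster = {"SampleA": ["1", "2", "3"], "SampleB": ["1"], "SampleC": ["1", "2"]}

# invert sample -> clusters into cluster -> samples
cluster_sample = defaultdict(list)
for sample, clusters in SAMPLE_cluster.items():
    for cluster_id in clusters:
        cluster_sample[cluster_id].append(sample)
cluster_sample = {c: sorted(s) for c, s in cluster_sample.items()}

print(cluster_sample)
```

This produces the same dictionary as the hand-written one above.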

rule split_bam:
    input: "{sample}.bam"
    output: "split_{sample}.touch"
    shell:
        """
        split_bam {input}
        touch {output}
        """

rule index_split_bam:
    input: "split_{sample}.touch"
    output: "01split_bam/{sample}/{sample}_{cluster_id}.bam.bai"
    shell:
        """
        samtools index 01split_bam/{wildcards.sample}/{wildcards.sample}_{wildcards.cluster_id}.bam
        """

def get_merge_bam_input(wildcards):
    samples = cluster_sample[wildcards.cluster_id]
    return expand("01split_bam/{sample}/{sample}_{{cluster_id}}.bam.bai", sample = samples)


rule merge_bam_per_cluster:
    input: get_merge_bam_input
    output: "01merged_bam/{cluster_id}.bam"
    params:
        bam = lambda wildcards, input: " ".join(input).replace(".bai", "")
    log: "00log/{cluster_id}.merge_bam.log"
    threads: 2
    shell:
        """
        samtools merge -@ {threads} -r {output} {params.bam}
        """

It seems to be working.

Regarding "python - split bams by cluster, then merge bams by cluster using checkpoints", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56861913/
