python - Pydub - combining split_on_silence with a minimum length / file size

Reposted. Author: 太空狗. Updated: 2023-10-29 23:57:53

I have two scripts: one splits audio into chunks of a given length, and the other splits the audio at every silent passage. Is it possible to split on silence, but only after a certain amount of time has elapsed? I need silence-based splits that produce segments no shorter than 5 minutes.

The splitting script that ignores silence:

from pydub import AudioSegment
#from pydub.utils import mediainfo
from pydub.utils import make_chunks
import math

#flac_audio = AudioSegment.from_file("Kalimba.mp3", "mp3")
#flac_audio.export("audio.mp3", format="mp3")
myaudio = AudioSegment.from_file("Kalimba.mp3", "mp3")
channel_count = myaudio.channels  # get channel count
sample_width = myaudio.sample_width  # get sample width
duration_in_sec = len(myaudio) / 1000  # length of audio in seconds
sample_rate = myaudio.frame_rate

print("sample_width=", sample_width)
print("channel_count=", channel_count)
print("duration_in_sec=", duration_in_sec)
print("frame_rate=", sample_rate)
bit_rate = 16  # assumption; you can extract it dynamically from mediainfo("test.wav")


wav_file_size = (sample_rate * bit_rate * channel_count * duration_in_sec) / 8
print("wav_file_size =", wav_file_size)


file_split_size = 10000000  # 10 MB, i.e. 10,000,000 bytes
total_chunks = wav_file_size // file_split_size

# Get the chunk size by the following method (there is more than one, of course):
# for duration_in_sec (X) --> wav_file_size (Y)
# so what duration in sec (K) --> gives a file size of 10 MB?
# K = X * 10MB / Y

chunk_length_in_sec = math.ceil((duration_in_sec * 10000000) / wav_file_size)  # in sec
chunk_length_ms = chunk_length_in_sec * 1000
chunks = make_chunks(myaudio, chunk_length_ms)

# Export each individual chunk as an mp3 file
for i, chunk in enumerate(chunks):
    chunk_name = "chunk{0}.mp3".format(i)
    print("exporting", chunk_name)
    chunk.export(chunk_name, format="mp3")
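The size-to-duration formula in the comments above (K = X * 10MB / Y) can be checked with a small worked example. The numbers below are assumptions for illustration (a 10-minute stereo 44.1 kHz recording), not values from the original file:

```python
import math

# Assumed properties of a hypothetical recording
sample_rate = 44100      # Hz
bit_rate = 16            # bits per sample
channel_count = 2        # stereo
duration_in_sec = 600    # 10 minutes

# Estimated uncompressed WAV size in bytes: rate * bits * channels * seconds / 8
wav_file_size = (sample_rate * bit_rate * channel_count * duration_in_sec) / 8
print(wav_file_size)  # -> 105840000.0, roughly 106 MB

# Seconds of audio that fit in a 10 MB chunk: K = X * 10MB / Y
chunk_length_in_sec = math.ceil((duration_in_sec * 10000000) / wav_file_size)
print(chunk_length_in_sec)  # -> 57
```

So for this hypothetical file, each ~10 MB chunk would hold about 57 seconds of audio.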

The splitting script that ignores length:

from pydub import AudioSegment
from pydub.silence import split_on_silence

sound = AudioSegment.from_mp3("my_file.mp3")
chunks = split_on_silence(
    sound,

    # must be silent for at least half a second
    min_silence_len=500,

    # consider it silent if quieter than -16 dBFS
    silence_thresh=-16
)

for i, chunk in enumerate(chunks):
    chunk.export("/path/to/output/dir/chunk{0}.wav".format(i), format="wav")

Best answer

My suggestion would be to use pydub.silence.split_on_silence() and then recombine the pieces as needed, so that you end up with files of roughly the target size.

Something like this:

from pydub import AudioSegment
from pydub.silence import split_on_silence

sound = AudioSegment.from_file("/path/to/file.mp3", format="mp3")
chunks = split_on_silence(
    sound,

    # split on silences longer than 1000 ms (1 sec)
    min_silence_len=1000,

    # anything under -16 dBFS is considered silence
    silence_thresh=-16,

    # keep 200 ms of leading/trailing silence
    keep_silence=200
)

# now recombine the chunks so that the parts are at least 90 sec long
target_length = 90 * 1000
output_chunks = [chunks[0]]
for chunk in chunks[1:]:
    if len(output_chunks[-1]) < target_length:
        output_chunks[-1] += chunk
    else:
        # once the last output chunk reaches the target length,
        # we can start a new one
        output_chunks.append(chunk)

# now you have chunks that are at least 90 seconds long (except, possibly, the last one)
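The recombination loop can be exercised without any audio at all: Python lists concatenate with `+` just as AudioSegments do, and `len()` stands in for duration. The chunk lengths below are made-up numbers for illustration:

```python
# Synthetic stand-ins for pydub chunks: lists whose len() is the chunk
# duration in ms (lists concatenate with +, just like AudioSegments)
chunk_lengths_ms = [40000, 30000, 50000, 20000, 70000, 10000]
chunks = [[0] * n for n in chunk_lengths_ms]

target_length = 90 * 1000  # 90 seconds, as in the answer above

# Same loop as the answer: grow the last output chunk until it reaches
# the target, then start a new one
output_chunks = [chunks[0]]
for chunk in chunks[1:]:
    if len(output_chunks[-1]) < target_length:
        output_chunks[-1] += chunk
    else:
        output_chunks.append(chunk)

print([len(c) for c in output_chunks])  # -> [120000, 90000, 10000]
```

Note that every output chunk meets the 90-second minimum except, possibly, the last one, exactly as the comment in the answer says.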

Alternatively, you could use pydub.silence.detect_nonsilent() to find the non-silent ranges and decide for yourself where to split the original audio.
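One possible policy for the detect_nonsilent() approach, matching the 5-minute minimum from the question: only accept a split point inside a silence gap once the running segment has reached the minimum length. The ranges below are made-up numbers standing in for what pydub.silence.detect_nonsilent() would return ([start_ms, end_ms] pairs), so the selection logic can run on its own:

```python
# Assumed input: ranges as returned by
# pydub.silence.detect_nonsilent(sound, min_silence_len=1000, silence_thresh=-16)
nonsilent_ranges = [[0, 120000], [125000, 320000], [321000, 600000], [610000, 900000]]

min_chunk_ms = 5 * 60 * 1000  # the 5-minute minimum from the question

# Split only where a silence gap begins a new non-silent range, and only
# once the running chunk has reached the minimum length
split_points = []
last_split = 0
for start_ms, _end_ms in nonsilent_ranges:
    if start_ms - last_split >= min_chunk_ms:
        split_points.append(start_ms)
        last_split = start_ms

print(split_points)  # -> [321000]
```

With a real AudioSegment you would then slice at those points, e.g. `pieces = [sound[a:b] for a, b in zip([0] + split_points, split_points + [len(sound)])]`.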

Note: I also posted this on a similar/duplicate GitHub issue.

Regarding "python - Pydub - combining split_on_silence with a minimum length / file size", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37725416/
