
python-3.x - librosa.util.exceptions.ParameterError : Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)


Can someone please help me with this problem?

I am following this tutorial:
https://data-flair.training/blogs/python-mini-project-speech-emotion-recognition/

and using the dataset they provide, which they took from the RAVDESS dataset and downsampled. I can train on that data without any trouble. But when I use the original data from here:
https://zenodo.org/record/1188976

just "Audio_Speech_Actors_01-24.zip", and try to train the model, it gives me the following error:

Traceback (most recent call last):
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 64, in <module>
    x_train, x_test, y_train, y_test = load_data(test_size=0.20)
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 57, in load_data
    feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
  File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 32, in extract_feature
    stft = np.abs(librosa.stft(X))
  File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\core\spectrum.py", line 215, in stft
    util.valid_audio(y)
  File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\util\utils.py", line 268, in valid_audio
    'ndim={:d}, shape={}'.format(y.ndim, y.shape))
librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)

The tutorial trains on what is the same dataset, just with a lowered sampling rate. Why doesn't it run on the original version?

Does it have anything to do with this part of the code:
X = sound_file.read(dtype="float32")
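From what I can tell, soundfile.read returns a 2-D (frames, channels) array for stereo files and a 1-D array for mono files, which matches the shape in the error. A quick check along these lines (the file names are just placeholders for one tutorial file and one original RAVDESS file):

import soundfile

# Placeholder paths: one file from the tutorial's downsampled set and one
# from the original RAVDESS download.
for path in ["tutorial_version.wav", "original_version.wav"]:
    with soundfile.SoundFile(path) as f:
        X = f.read(dtype="float32")
        # Mono audio comes back as (n_samples,); stereo comes back as
        # (n_samples, 2), which is the shape shown in the traceback.
        print(path, X.ndim, X.shape, f.channels)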

Just out of curiosity, I also tried to predict from an .mp3 file, and it gave an error. I then converted that .mp3 file to .wav and tried again, but it still gives the error in the title.

How can I fix this error and get it to train on the original data? If it can train on the original files, then I think it will also be able to predict on the .mp3-to-.wav converted file.

Below is the code I am using:
import librosa
import soundfile
import os
import glob
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# DataFlair - Emotions in the RAVDESS dataset
emotions = {
    '01': 'neutral',
    '02': 'calm',
    '03': 'happy',
    '04': 'sad',
    '05': 'angry',
    '06': 'fearful',
    '07': 'disgust',
    '08': 'surprised'
}
# DataFlair - Emotions to observe
observed_emotions = ['calm', 'happy', 'fearful', 'disgust']


# DataFlair - Extract features (mfcc, chroma, mel) from a sound file
def extract_feature(file_name, mfcc, chroma, mel):
    with soundfile.SoundFile(file_name) as sound_file:
        X = sound_file.read(dtype="float32")
        sample_rate = sound_file.samplerate
        if chroma:
            stft = np.abs(librosa.stft(X))
        result = np.array([])
        if mfcc:
            mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
            result = np.hstack((result, mfccs))
        if chroma:
            chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)
            result = np.hstack((result, chroma))
        if mel:
            mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T, axis=0)
            result = np.hstack((result, mel))
        return result


# DataFlair - Load the data and extract features for each sound file
def load_data(test_size=0.2):
    x, y = [], []
    for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
        # for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\*.wav"):
        file_name = os.path.basename(file)
        emotion = emotions[file_name.split("-")[2]]
        if emotion not in observed_emotions:
            continue
        feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
        x.append(feature)
        y.append(emotion)
    return train_test_split(np.array(x), y, test_size=test_size, random_state=9)


# DataFlair - Split the dataset
x_train, x_test, y_train, y_test = load_data(test_size=0.20)

# DataFlair - Get the shape of the training and testing datasets
# print((x_train.shape[0], x_test.shape[0]))

# DataFlair - Get the number of features extracted
# print(f'Features extracted: {x_train.shape[1]}')

# DataFlair - Initialize the Multi Layer Perceptron Classifier
model = MLPClassifier(alpha=0.01, batch_size=256, epsilon=1e-08, hidden_layer_sizes=(300,),
                      learning_rate='adaptive', max_iter=500)

# DataFlair - Train the model
model.fit(x_train, y_train)

# print(model.fit(x_train, y_train))

# DataFlair - Predict for the test set
y_pred = model.predict(x_test)
# print("This is y_pred: ", y_pred)


# DataFlair - Calculate the accuracy of our model
accuracy = accuracy_score(y_true=y_test, y_pred=y_pred)

# DataFlair - Print the accuracy
# print("Accuracy: {:.2f}%".format(accuracy * 100))

# Predicting random files
tar_file = "C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\pls-hold-while-try.wav"
new_feature = extract_feature(tar_file, mfcc=True, chroma=True, mel=True)
data = []
data.append(new_feature)
data = np.array(data)
z_pred = model.predict(data)
print("This is output: ", z_pred)

The training dataset provided by the tutorial is here: https://drive.google.com/file/d/1wWsrN2Ep7x6lWqOXfr4rpKGYrJhWc8z7/view

The original dataset, which does not work with this program, is available here: https://zenodo.org/record/1188976 (Audio_Speech_Actors_01-24.zip)

In the "Predicting random files" part, putting in any .wav file with speech in it results in the error. If you try a text-to-speech converter, take the resulting .wav, and pass it in here, it always says "fearful". I tried converting the .mp3 to .wav to get it to work, but it still errors.
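To check whether the model is genuinely confident in "fearful" or just defaulting to one class, the class probabilities can be inspected; MLPClassifier provides predict_proba. A small sketch, reusing the model and data variables from the script above:

probs = model.predict_proba(data)
# model.classes_ holds the label order that predict_proba reports in
for label, p in zip(model.classes_, probs[0]):
    print(label, round(p, 3))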

Has anyone figured out how I can get this to work?

Best Answer

I just ran into the same problem. The original RAVDESS recordings are stereo (two channels), which is where the shape (172972, 2) comes from, while librosa.stft here expects a 1-D mono signal; the tutorial's downsampled copies were presumably downmixed to mono as well. For anyone reading this who doesn't want to delete their stereo files, you can convert them to mono with the command-line tool ffmpeg:

ffmpeg -i stereo_file_name.wav -ac 1 mono_file_name.wav 
Link to ffmpeg: https://ffmpeg.org/
Related Stack Overflow Post
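If you would rather handle this in Python than re-encode the files, a minimal sketch (patching the extract_feature function from the question) is to downmix right after reading by averaging the two channels:

X = sound_file.read(dtype="float32")
if X.ndim > 1:
    # soundfile returns (frames, channels) for stereo files;
    # average the channels down to a single mono signal.
    X = np.mean(X, axis=1)

librosa.to_mono(X.T) does the same averaging (it expects channels first), and librosa.load(file_name, mono=True) can decode and downmix in one step; note that it also resamples to 22050 Hz by default unless you pass sr=None.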

For python-3.x - librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59664542/
