
python - How do I run Python on the GPU with CuPy?

Reposted; author: 行者123; updated: 2023-12-02 02:25:26

I am trying to execute Python code on the GPU using the CuPy library. However, when I run nvidia-smi, no GPU process is found.

nvidia-smi output

The code is as follows:

import numpy as np
import cupy as cp
from scipy.stats import rankdata

def get_top_one_probability(vector):
    return cp.exp(vector) / cp.sum(cp.exp(vector))

def get_listnet_gradient(training_dataset, real_labels, predicted_labels):
    ly_topp = get_top_one_probability(real_labels)
    cp.cuda.Stream.null.synchronize()
    # ly_topp.shape avoids a needless device-to-host copy via cp.asnumpy
    s1 = -cp.matmul(cp.transpose(training_dataset),
                    cp.reshape(ly_topp, (ly_topp.shape[0], 1)))
    cp.cuda.Stream.null.synchronize()
    exp_lz_sum = cp.sum(cp.exp(predicted_labels))
    cp.cuda.Stream.null.synchronize()
    s2 = 1 / exp_lz_sum
    s3 = cp.matmul(cp.transpose(training_dataset), cp.exp(predicted_labels))
    cp.cuda.Stream.null.synchronize()
    s2_s3 = s2 * s3  # s2 is a scalar value
    # reshape returns a new array, so the result must be assigned back
    s1 = s1.reshape(s1.shape[0], 1)
    cp.cuda.Stream.null.synchronize()
    s1s2s3 = cp.add(s1, s2_s3)
    cp.cuda.Stream.null.synchronize()
    return s1s2s3

def relu(matrix):
    return cp.maximum(0, matrix)

def get_groups_id_count(groups_id):
    current_group = 1
    group_counter = 0
    groups_id_counter = []
    for element in groups_id:
        if element != current_group:
            groups_id_counter.append((current_group, group_counter))
            current_group += 1
            group_counter = 1
        else:
            group_counter += 1
    return groups_id_counter

def mul_matrix(matrix1, matrix2):
    return cp.matmul(matrix1, matrix2)

if mode == 'train':  # Train MLP
    number_of_features = np.shape(training_set_data)[1]

    # Input neurons are equal to the number of training dataset features
    input_neurons = number_of_features
    # Assuming the number of hidden neurons equals the number of input features + 10
    hidden_neurons = number_of_features + 10

    # Random weight initialization
    input_hidden_weights = cp.array(np.random.rand(number_of_features, hidden_neurons) * init_var)
    # Assuming a single output neuron
    hidden_output_weights = cp.array(np.float32(np.random.rand(hidden_neurons, 1) * init_var))

    listwise_gradients = np.array([])

    for epoch in range(0, 70):
        print('Epoch {0} started...'.format(epoch))
        start_range = 0
        for group in groups_id_count:
            end_range = start_range + group[1]  # a batch is a group of words with the same group id
            batch_dataset = cp.array(training_set_data[start_range:end_range, :])
            cp.cuda.Stream.null.synchronize()
            batch_labels = cp.array(dataset_labels[start_range:end_range])
            cp.cuda.Stream.null.synchronize()
            input_hidden_mul = mul_matrix(batch_dataset, input_hidden_weights)
            cp.cuda.Stream.null.synchronize()
            hidden_neurons_output = relu(input_hidden_mul)
            cp.cuda.Stream.null.synchronize()
            mlp_output = relu(mul_matrix(hidden_neurons_output, hidden_output_weights))
            cp.cuda.Stream.null.synchronize()
            batch_gradient = get_listnet_gradient(batch_dataset, batch_labels, mlp_output)
            batch_gradient = cp.mean(cp.transpose(batch_gradient), axis=1)
            aggregated_listwise_gradient = cp.sum(batch_gradient, axis=0)
            cp.cuda.Stream.null.synchronize()
            hidden_output_weights = hidden_output_weights - (learning_rate * aggregated_listwise_gradient)
            cp.cuda.Stream.null.synchronize()
            input_hidden_weights = input_hidden_weights - (learning_rate * aggregated_listwise_gradient)
            cp.cuda.Stream.null.synchronize()
            start_range = end_range

            listwise_gradients = np.append(listwise_gradients, cp.asnumpy(aggregated_listwise_gradient))

    print('Gradients: ', listwise_gradients)

I am using cp.cuda.Stream.null.synchronize() because I read that this statement ensures the code finishes executing on the GPU before moving on to the next line.
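For context: CuPy launches kernels asynchronously, but operations queued on the same stream already execute in order, so per-line `cp.cuda.Stream.null.synchronize()` calls are normally only needed when timing kernels or right before reading results back on the host. A minimal sketch of the pattern (hedged with a NumPy fallback so it also runs on machines without CUDA):

```python
try:
    import cupy as xp
    ON_GPU = True
except ImportError:
    import numpy as xp  # CPU fallback keeps the sketch runnable without CUDA
    ON_GPU = False

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
b = xp.ones((3, 2), dtype=xp.float32)
c = a @ b  # on the GPU this is queued on the null stream; order is preserved

if ON_GPU:
    # Only needed before host-side reads or timing measurements
    xp.cuda.Stream.null.synchronize()

result = xp.asnumpy(c) if ON_GPU else c  # device-to-host copy on the GPU path
print(result.sum())
```

The same array code works under both backends because CuPy mirrors the NumPy API; only the explicit synchronization and host transfer are GPU-specific.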

Can someone help me get this code running on the GPU? Thanks in advance.

Best Answer

CuPy can run your code on different devices; you need to select the device ID of the GPU you want the code to execute on. Note that CuPy only enumerates CUDA-capable GPUs (the CPU never appears in the device list), and on a single-GPU machine that GPU normally has ID 0. You can check the current device with:

x = cp.array([1, 2, 3])
print(x.device)

To get the number of devices recognized on your machine:

print(cp.cuda.runtime.getDeviceCount())
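Note that this call raises a runtime error when no CUDA driver or GPU is present, so it can be worth wrapping. A small defensive helper (the name `cuda_device_count` is hypothetical; written so it also works where CuPy is not installed):

```python
def cuda_device_count():
    """Number of visible CUDA GPUs, or 0 if CuPy/CUDA is unavailable."""
    try:
        import cupy as cp
        return cp.cuda.runtime.getDeviceCount()
    except Exception:  # ImportError, or a CUDA runtime error without a driver
        return 0

print(cuda_device_count())
```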

For example, to change the current device to ID 1:

cp.cuda.Device(1).use()

Device IDs are zero-indexed, so if you have 3 devices you get the ID set {0, 1, 2}.
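In other words, the valid IDs are just `range(count)`. When CUDA is present, each ID can also be queried for its device name via `cp.cuda.runtime.getDeviceProperties`; a sketch with the runtime call guarded, since it needs a working driver (`device_ids` is a hypothetical helper):

```python
def device_ids(device_count):
    # Zero-indexed: 3 devices -> [0, 1, 2]
    return list(range(device_count))

try:
    import cupy as cp
    for dev_id in device_ids(cp.cuda.runtime.getDeviceCount()):
        props = cp.cuda.runtime.getDeviceProperties(dev_id)
        print(dev_id, props['name'])
except Exception:
    print('CUDA not available; with 3 devices the IDs would be', device_ids(3))
```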

Regarding "python - How do I run Python on the GPU with CuPy?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60027446/
