I tried to implement the MNIST CNN following the TensorFlow tutorial, and found that these ways of implementing the softmax cross entropy give different results:

(1) bad results

softmax = tf.nn.softmax(pred)
cross_entropy_cnn = -y * tf.log(softmax + 1e-10)
cost = tf.reduce_sum(cross_entropy_cnn)

(2)

cross_entropy_cnn = -y * tf.nn.log_softmax(pred)
cost = tf.reduce_sum(cross_entropy_cnn)

(3)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
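For context, here is a minimal sketch of the numerical difference between the formulations (plain NumPy with hand-picked logits, not part of the original graph): computing softmax first and taking the log afterwards loses the small probabilities, while a fused log-softmax keeps them.

import numpy as np

logits = np.array([30.0, 0.0, -30.0])  # large logits, e.g. from wide random_normal weights

# Method (1) style: softmax first, then log. The small probabilities are
# swamped by the 1e-10 guard, so the log is clipped near log(1e-10) and
# the true values (and useful gradients) are lost.
softmax = np.exp(logits) / np.sum(np.exp(logits))
print(np.log(softmax + 1e-10))  # approx [0., -23.02, -23.03]

# Method (2)/(3) style: fused log-softmax via the log-sum-exp trick,
# shifting by the max before exponentiating; the true values survive.
shifted = logits - logits.max()
print(shifted - np.log(np.sum(np.exp(shifted))))  # approx [0., -30., -60.]

tf.nn.softmax_cross_entropy_with_logits applies the same kind of shifting internally, which presumably is why (2) and (3) train while (1) stalls near chance.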
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Parameters
learning_rate = 0.001
training_iters = 20000
batch_size = 100
display_step = 10
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
W_conv1 = tf.Variable(tf.random_normal(shape=[5, 5, 1, 32]))
b_conv1 = tf.Variable(tf.random_normal(shape=[1, 32]))
W_conv2 = tf.Variable(tf.random_normal(shape=[5, 5, 32, 64]))
b_conv2 = tf.Variable(tf.random_normal(shape=[1, 64]))
W_full = tf.Variable(tf.random_normal(shape=[7 * 7 * 64, 1024]))
b_full = tf.Variable(tf.random_normal(shape=[1, 1024]))
W_softmax = tf.Variable(tf.truncated_normal(shape=[1024, 10]))
b_softmax = tf.Variable(tf.truncated_normal(shape=[1, 10]))
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32, shape=()) #dropout (keep probability)
# Create some wrappers for simplicity (defined here but not used below;
# the model builds the ops directly)
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')
# Create model
def conv_net(x, keep_prob):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution layer 1 + max pooling (down-sampling)
    convOne = tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding="SAME")
    reluOne = tf.nn.relu(convOne + b_conv1)
    conv1 = tf.nn.max_pool(reluOne, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

    # Convolution layer 2 + max pooling
    convTwo = tf.nn.conv2d(conv1, W_conv2, strides=[1, 1, 1, 1], padding="SAME")
    reluTwo = tf.nn.relu(convTwo + b_conv2)
    conv2 = tf.nn.max_pool(reluTwo, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

    # Fully connected layer
    input_flat = tf.reshape(conv2, shape=[-1, 7 * 7 * 64])
    fc1 = tf.nn.relu(tf.matmul(input_flat, W_full) + b_full)

    # Apply dropout (keep_prob is the keep probability)
    drop_out = tf.nn.dropout(fc1, keep_prob)

    # Output, class prediction (raw logits, no softmax applied here)
    y_predict = tf.matmul(drop_out, W_softmax) + b_softmax
    return y_predict
# Construct model
pred = conv_net(x, keep_prob)

# Define loss and optimizer
# cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))  # method (3)
# softmax = tf.nn.softmax(pred)                      # method (1)
# cross_entropy_cnn = -y * tf.log(softmax + 1e-10)   # method (1)
cross_entropy_cnn = -y * tf.nn.log_softmax(pred)     # method (2)
cost = tf.reduce_sum(cross_entropy_cnn)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.initialize_all_variables()  # tf.global_variables_initializer() in TF >= 1.0

sess = tf.Session()
sess.run(init)

for i in range(20000):
    batch = mnist.train.next_batch(128)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y: batch[1], keep_prob: 1.0}, session=sess)
        print("step " + str(i) + ", training accuracy :" + str(train_accuracy))
        cross_entropy_val = cross_entropy_cnn.eval(feed_dict={x: batch[0], y: batch[1], keep_prob: 1.0}, session=sess)
    sess.run(optimizer, feed_dict={x: batch[0], y: batch[1], keep_prob: 0.75})

print("test accuracy :" + str(accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0}, session=sess)))
sess.close()
Output of the failing run (chance-level accuracy throughout):

step 0, training accuracy :0.109375
step 100, training accuracy :0.0703125
step 200, training accuracy :0.0546875
step 300, training accuracy :0.109375
step 400, training accuracy :0.132812
step 500, training accuracy :0.0390625
step 600, training accuracy :0.0859375
step 700, training accuracy :0.0703125
step 800, training accuracy :0.109375
step 900, training accuracy :0.101562
step 1000, training accuracy :0.140625
step 1100, training accuracy :0.0703125
step 1200, training accuracy :0.117188
step 1300, training accuracy :0.109375
step 1400, training accuracy :0.132812
step 1500, training accuracy :0.101562
step 1600, training accuracy :0.109375
step 1700, training accuracy :0.125
step 1800, training accuracy :0.117188
step 1900, training accuracy :0.0859375
step 2000, training accuracy :0.078125
step 2100, training accuracy :0.09375
step 2200, training accuracy :0.117188
step 2300, training accuracy :0.0546875
step 2400, training accuracy :0.117188
step 2500, training accuracy :0.0859375
step 2600, training accuracy :0.0703125
step 2700, training accuracy :0.078125
step 2800, training accuracy :0.117188
step 2900, training accuracy :0.09375
step 3000, training accuracy :0.0546875
step 3100, training accuracy :0.09375
step 3200, training accuracy :0.117188
step 3300, training accuracy :0.0703125
step 3400, training accuracy :0.125
step 3500, training accuracy :0.132812
step 3600, training accuracy :0.0859375
step 3700, training accuracy :0.078125
step 3800, training accuracy :0.0859375
step 3900, training accuracy :0.109375
step 4000, training accuracy :0.101562
step 4100, training accuracy :0.140625
step 4200, training accuracy :0.0859375
step 4300, training accuracy :0.125
step 4400, training accuracy :0.109375
step 4500, training accuracy :0.0859375
step 4600, training accuracy :0.09375
step 4700, training accuracy :0.117188
step 4800, training accuracy :0.132812
step 4900, training accuracy :0.0625
step 5000, training accuracy :0.09375
step 5100, training accuracy :0.078125
step 5200, training accuracy :0.09375
step 5300, training accuracy :0.0859375
step 5400, training accuracy :0.0703125
step 5500, training accuracy :0.109375
step 5600, training accuracy :0.132812
step 5700, training accuracy :0.09375
step 5800, training accuracy :0.117188
step 5900, training accuracy :0.0703125
step 6000, training accuracy :0.078125
step 6100, training accuracy :0.078125
step 6200, training accuracy :0.0703125
step 6300, training accuracy :0.09375
step 6400, training accuracy :0.09375
step 6500, training accuracy :0.117188
step 6600, training accuracy :0.0859375
step 6700, training accuracy :0.117188
step 6800, training accuracy :0.0859375
step 6900, training accuracy :0.078125
step 7000, training accuracy :0.109375
step 7100, training accuracy :0.09375
step 7200, training accuracy :0.117188
step 7300, training accuracy :0.140625
step 7400, training accuracy :0.101562
step 7500, training accuracy :0.0703125
step 7600, training accuracy :0.101562
step 7700, training accuracy :0.0703125
step 7800, training accuracy :0.078125
step 7900, training accuracy :0.0859375
step 8000, training accuracy :0.117188
step 8100, training accuracy :0.101562
step 8200, training accuracy :0.101562
step 8300, training accuracy :0.125
step 8400, training accuracy :0.125
step 8500, training accuracy :0.101562
step 8600, training accuracy :0.078125
step 8700, training accuracy :0.046875
step 8800, training accuracy :0.0859375
step 8900, training accuracy :0.109375
step 9000, training accuracy :0.101562
step 9100, training accuracy :0.132812
step 9200, training accuracy :0.109375
step 9300, training accuracy :0.109375
step 9400, training accuracy :0.0859375
step 9500, training accuracy :0.101562
step 9600, training accuracy :0.117188
step 9700, training accuracy :0.0703125
step 9800, training accuracy :0.0625
step 9900, training accuracy :0.0859375
step 10000, training accuracy :0.0625
step 10100, training accuracy :0.09375
step 10200, training accuracy :0.0859375
step 10300, training accuracy :0.09375
step 10400, training accuracy :0.078125
step 10500, training accuracy :0.148438
step 10600, training accuracy :0.101562
step 10700, training accuracy :0.125
step 10800, training accuracy :0.109375
step 10900, training accuracy :0.109375
step 11000, training accuracy :0.0625
step 11100, training accuracy :0.0859375
step 11200, training accuracy :0.078125
step 11300, training accuracy :0.148438
step 11400, training accuracy :0.078125
step 11500, training accuracy :0.109375
step 11600, training accuracy :0.117188
step 11700, training accuracy :0.09375
step 11800, training accuracy :0.078125
step 11900, training accuracy :0.0859375
step 12000, training accuracy :0.148438
step 12100, training accuracy :0.0859375
step 12200, training accuracy :0.09375
step 12300, training accuracy :0.101562
step 12400, training accuracy :0.078125
step 12500, training accuracy :0.109375
step 12600, training accuracy :0.078125
step 12700, training accuracy :0.101562
step 12800, training accuracy :0.0625
step 12900, training accuracy :0.101562
step 13000, training accuracy :0.109375
step 13100, training accuracy :0.125
step 13200, training accuracy :0.0703125
step 13300, training accuracy :0.117188
step 13400, training accuracy :0.101562
step 13500, training accuracy :0.140625
step 13600, training accuracy :0.132812
step 13700, training accuracy :0.109375
step 13800, training accuracy :0.148438
step 13900, training accuracy :0.09375
step 14000, training accuracy :0.109375
step 14100, training accuracy :0.0625
step 14200, training accuracy :0.125
step 14300, training accuracy :0.09375
step 14400, training accuracy :0.101562
step 14500, training accuracy :0.132812
step 14600, training accuracy :0.09375
step 14700, training accuracy :0.132812
step 14800, training accuracy :0.148438
step 14900, training accuracy :0.109375
step 15000, training accuracy :0.117188
step 15100, training accuracy :0.125
step 15200, training accuracy :0.117188
step 15300, training accuracy :0.109375
step 15400, training accuracy :0.0859375
step 15500, training accuracy :0.148438
step 15600, training accuracy :0.078125
step 15700, training accuracy :0.117188
step 15800, training accuracy :0.0859375
step 15900, training accuracy :0.09375
step 16000, training accuracy :0.078125
step 16100, training accuracy :0.109375
step 16200, training accuracy :0.101562
step 16300, training accuracy :0.125
step 16400, training accuracy :0.109375
step 16500, training accuracy :0.109375
step 16600, training accuracy :0.078125
step 16700, training accuracy :0.117188
step 16800, training accuracy :0.125
step 16900, training accuracy :0.109375
step 17000, training accuracy :0.132812
step 17100, training accuracy :0.109375
step 17200, training accuracy :0.117188
step 17300, training accuracy :0.148438
step 17400, training accuracy :0.0859375
step 17500, training accuracy :0.109375
step 17600, training accuracy :0.09375
step 17700, training accuracy :0.09375
step 17800, training accuracy :0.101562
step 17900, training accuracy :0.078125
step 18000, training accuracy :0.148438
step 18100, training accuracy :0.09375
step 18200, training accuracy :0.171875
step 18300, training accuracy :0.101562
step 18400, training accuracy :0.078125
step 18500, training accuracy :0.109375
step 18600, training accuracy :0.0859375
step 18700, training accuracy :0.078125
step 18800, training accuracy :0.101562
step 18900, training accuracy :0.140625
step 19000, training accuracy :0.0546875
step 19100, training accuracy :0.0859375
step 19200, training accuracy :0.0859375
step 19300, training accuracy :0.0859375
step 19400, training accuracy :0.078125
step 19500, training accuracy :0.117188
step 19600, training accuracy :0.078125
step 19700, training accuracy :0.117188
step 19800, training accuracy :0.0859375
step 19900, training accuracy :0.148438
test accuracy :0.1032
Output of the successful run:

step 0, training accuracy :0.101562
step 100, training accuracy :0.789062
step 200, training accuracy :0.875
step 300, training accuracy :0.921875
step 400, training accuracy :0.929688
step 500, training accuracy :0.953125
step 600, training accuracy :0.960938
step 700, training accuracy :0.96875
step 800, training accuracy :0.960938
step 900, training accuracy :0.984375
step 1000, training accuracy :0.984375
step 1100, training accuracy :0.96875
step 1200, training accuracy :0.984375
step 1300, training accuracy :0.960938
step 1400, training accuracy :0.984375
step 1500, training accuracy :1.0
step 1600, training accuracy :1.0
step 1700, training accuracy :0.992188
step 1800, training accuracy :0.96875
step 1900, training accuracy :0.96875
step 2000, training accuracy :1.0
step 2100, training accuracy :0.984375
step 2200, training accuracy :0.96875
step 2300, training accuracy :0.984375
step 2400, training accuracy :0.984375
step 2500, training accuracy :0.96875
step 2600, training accuracy :0.992188
step 2700, training accuracy :0.984375
step 2800, training accuracy :0.96875
step 2900, training accuracy :0.984375
step 3000, training accuracy :0.992188
step 3100, training accuracy :0.976562
step 3200, training accuracy :1.0
step 3300, training accuracy :0.984375
step 3400, training accuracy :0.984375
step 3500, training accuracy :0.984375
step 3600, training accuracy :0.992188
step 3700, training accuracy :0.984375
step 3800, training accuracy :0.984375
step 3900, training accuracy :0.984375
step 4000, training accuracy :0.96875
step 4100, training accuracy :1.0
step 4200, training accuracy :1.0
step 4300, training accuracy :1.0
step 4400, training accuracy :0.984375
step 4500, training accuracy :1.0
step 4600, training accuracy :0.984375
step 4700, training accuracy :0.984375
step 4800, training accuracy :1.0
step 4900, training accuracy :1.0
step 5000, training accuracy :1.0
step 5100, training accuracy :0.984375
step 5200, training accuracy :0.992188
step 5300, training accuracy :0.992188
step 5400, training accuracy :1.0
step 5500, training accuracy :1.0
step 5600, training accuracy :1.0
step 5700, training accuracy :1.0
step 5800, training accuracy :1.0
step 5900, training accuracy :0.992188
step 6000, training accuracy :1.0
step 6100, training accuracy :1.0
step 6200, training accuracy :0.992188
step 6300, training accuracy :0.992188
step 6400, training accuracy :0.992188
step 6500, training accuracy :0.992188
step 6600, training accuracy :0.992188
step 6700, training accuracy :1.0
step 6800, training accuracy :1.0
step 6900, training accuracy :1.0
step 7000, training accuracy :1.0
step 7100, training accuracy :1.0
step 7200, training accuracy :0.992188
step 7300, training accuracy :0.992188
step 7400, training accuracy :1.0
step 7500, training accuracy :1.0
step 7600, training accuracy :0.992188
step 7700, training accuracy :1.0
step 7800, training accuracy :0.984375
step 7900, training accuracy :1.0
step 8000, training accuracy :1.0
step 8100, training accuracy :0.992188
step 8200, training accuracy :1.0
step 8300, training accuracy :1.0
step 8400, training accuracy :1.0
step 8500, training accuracy :1.0
step 8600, training accuracy :1.0
step 8700, training accuracy :1.0
step 8800, training accuracy :1.0
step 8900, training accuracy :1.0
step 9000, training accuracy :1.0
step 9100, training accuracy :1.0
step 9200, training accuracy :1.0
step 9300, training accuracy :1.0
step 9400, training accuracy :1.0
step 9500, training accuracy :1.0
step 9600, training accuracy :0.992188
step 9700, training accuracy :0.992188
step 9800, training accuracy :1.0
step 9900, training accuracy :1.0
step 10000, training accuracy :1.0
step 10100, training accuracy :1.0
step 10200, training accuracy :0.992188
step 10300, training accuracy :1.0
step 10400, training accuracy :1.0
step 10500, training accuracy :1.0
step 10600, training accuracy :0.992188
step 10700, training accuracy :1.0
step 10800, training accuracy :1.0
step 10900, training accuracy :1.0
step 11000, training accuracy :1.0
step 11100, training accuracy :1.0
step 11200, training accuracy :1.0
step 11300, training accuracy :1.0
step 11400, training accuracy :0.992188
step 11500, training accuracy :1.0
step 11600, training accuracy :1.0
step 11700, training accuracy :1.0
step 11800, training accuracy :1.0
step 11900, training accuracy :1.0
step 12000, training accuracy :1.0
step 12100, training accuracy :1.0
step 12200, training accuracy :0.992188
step 12300, training accuracy :1.0
step 12400, training accuracy :1.0
step 12500, training accuracy :1.0
step 12600, training accuracy :1.0
step 12700, training accuracy :1.0
step 12800, training accuracy :1.0
step 12900, training accuracy :1.0
step 13000, training accuracy :1.0
step 13100, training accuracy :1.0
step 13200, training accuracy :0.992188
step 13300, training accuracy :1.0
step 13400, training accuracy :1.0
step 13500, training accuracy :1.0
step 13600, training accuracy :1.0
step 13700, training accuracy :1.0
step 13800, training accuracy :1.0
step 13900, training accuracy :1.0
step 14000, training accuracy :1.0
step 14100, training accuracy :1.0
step 14200, training accuracy :1.0
step 14300, training accuracy :1.0
step 14400, training accuracy :1.0
step 14500, training accuracy :1.0
step 14600, training accuracy :1.0
step 14700, training accuracy :1.0
step 14800, training accuracy :1.0
step 14900, training accuracy :1.0
step 15000, training accuracy :1.0
step 15100, training accuracy :1.0
step 15200, training accuracy :1.0
step 15300, training accuracy :1.0
step 15400, training accuracy :0.992188
step 15500, training accuracy :1.0
step 15600, training accuracy :1.0
step 15700, training accuracy :1.0
step 15800, training accuracy :1.0
step 15900, training accuracy :1.0
step 16000, training accuracy :1.0
step 16100, training accuracy :1.0
step 16200, training accuracy :1.0
step 16300, training accuracy :1.0
step 16400, training accuracy :1.0
step 16500, training accuracy :1.0
step 16600, training accuracy :1.0
step 16700, training accuracy :0.992188
step 16800, training accuracy :1.0
step 16900, training accuracy :1.0
step 17000, training accuracy :1.0
step 17100, training accuracy :1.0
step 17200, training accuracy :1.0
step 17300, training accuracy :1.0
step 17400, training accuracy :1.0
step 17500, training accuracy :1.0
step 17600, training accuracy :1.0
step 17700, training accuracy :1.0
step 17800, training accuracy :1.0
step 17900, training accuracy :1.0
step 18000, training accuracy :1.0
step 18100, training accuracy :1.0
step 18200, training accuracy :1.0
step 18300, training accuracy :1.0
step 18400, training accuracy :1.0
step 18500, training accuracy :1.0
step 18600, training accuracy :1.0
step 18700, training accuracy :1.0
step 18800, training accuracy :0.992188
step 18900, training accuracy :1.0
step 19000, training accuracy :1.0
step 19100, training accuracy :1.0
step 19200, training accuracy :1.0
step 19300, training accuracy :1.0
step 19400, training accuracy :1.0
step 19500, training accuracy :1.0
step 19600, training accuracy :1.0
step 19700, training accuracy :1.0
step 19800, training accuracy :1.0
step 19900, training accuracy :1.0
test accuracy :0.987
Best answer

tf.nn.softmax_cross_entropy_with_logits

The name "softmax_cross_entropy_with_logits" should really just be "cross_entropy", because if you think of "cross entropy" as "nll_loss" applied on top of "log_softmax", then the "softmax" prefix is wrong. And "logits" has become the catch-all name for anything that feeds into a "softmax", so that too is a rather dubious name.
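To make the answer's decomposition concrete, here is a small numerical check (a NumPy sketch of my own with hypothetical logits, not part of the original answer) that the cross entropy is exactly the negative log-likelihood of a log-softmax:

import numpy as np

def log_softmax(x):
    # Stable log-softmax via the log-sum-exp trick.
    shifted = x - x.max()
    return shifted - np.log(np.sum(np.exp(shifted)))

pred = np.array([2.0, 1.0, 0.1])   # logits for one example (hypothetical values)
y = np.array([1.0, 0.0, 0.0])      # one-hot label

# "cross entropy" == nll_loss(log_softmax(pred), y)
cross_entropy = -(y * log_softmax(pred)).sum()
print(cross_entropy)  # approx 0.417

For a one-hot y this matches what tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y) returns on the same inputs.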
Regarding logging - tensorflow log_softmax tf.nn.log(tf.nn.softmax(predict)) tf.nn.softmax_cross_entropy_with_logits, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40675182/