My neural network is not giving the expected output after training in Python. Is there a mistake in the code? Is there any way to reduce the mean squared error (MSE)?
I tried training the network repeatedly (re-running the program), but it is not learning - it keeps giving the same MSE and the same output.
Here is the data I used:
https://drive.google.com/open?id=1GLm87-5E_6YhUIPZ_CtQLV9F9wcGaTj2
Here is my code:
#load and evaluate a saved model
from numpy import loadtxt
from tensorflow.keras.models import load_model
# load model
model = load_model('ANNnew.h5')
# summarize model.
model.summary()
#Model starts
import numpy as np
import pandas as pd
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Importing the dataset
X = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet1").values
y = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet2").values
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Initialising the ANN
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(32, activation = 'tanh', input_dim = 4))
# Adding the second hidden layer
model.add(Dense(units = 18, activation = 'tanh'))
# Adding the third hidden layer
model.add(Dense(units = 32, activation = 'tanh'))
#model.add(Dense(1))
model.add(Dense(units = 1))
# Compiling the ANN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the ANN to the Training set
model.fit(X_train, y_train, batch_size = 100, epochs = 1000)
y_pred = model.predict(X_test)
for i in range(5):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y[i].tolist()))
plt.plot(y_test, color = 'red', label = 'Test data')
plt.plot(y_pred, color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
plt.show()
# save model and architecture to single file
model.save("ANNnew.h5")
print("Saved model to disk")
Best answer
I noticed a small bug in your print report. Instead of:
for i in range(5):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y[i].tolist()))
it should be:
for i in range(len(y_test)):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y_test[i].tolist()))
# imports
import numpy as np
import pandas as pd
import os
import tensorflow as tf
import matplotlib.pyplot as plt
import random
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from tensorflow import set_random_seed
from tensorflow.keras.initializers import glorot_uniform
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from importlib import reload
# useful pandas display settings
pd.options.display.float_format = '{:.3f}'.format
# useful functions
def plot_history(history, metrics_to_plot):
    """
    Function plots history of selected metrics for a fitted neural net.
    """
    # plot each requested metric
    for metric in metrics_to_plot:
        plt.plot(history.history[metric])
    # name X axis informatively
    plt.xlabel('epoch')
    # name Y axis informatively
    plt.ylabel('metric')
    # add informative legend
    plt.legend(metrics_to_plot)
    # plot
    plt.show()
def plot_fit(y_true, y_pred, title='title'):
    """
    Function plots true values and predicted values, sorted in increasing order by true values.
    """
    # create one dataframe with true values and predicted values
    results = y_true.reset_index(drop=True).merge(pd.DataFrame(y_pred), left_index=True, right_index=True)
    # rename columns informatively
    results.columns = ['true', 'prediction']
    # sort for clarity of visualization
    results = results.sort_values(by=['true']).reset_index(drop=True)
    # plot true values vs predicted values
    results.plot()
    # add scatter on top of the line plots
    plt.scatter(results.index, results.true, s=5)
    plt.scatter(results.index, results.prediction, s=5)
    # name X axis informatively
    plt.xlabel('obs sorted in ascending order with respect to true values')
    # add customizable title
    plt.title(title)
    # plot
    plt.show()
def reset_all_randomness():
    """
    Function assures reproducibility of NN estimation results.
    """
    # reloads
    reload(tf)
    reload(np)
    reload(random)
    # seeds - for reproducibility
    os.environ['PYTHONHASHSEED'] = str(984797)
    random.seed(984797)
    set_random_seed(984797)
    np.random.seed(984797)
    my_init = glorot_uniform(seed=984797)
    return my_init
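The effect of fixing seeds can be seen with a plain numpy toy illustration (this is just a sketch of the idea, not the full TF/hash/random seeding done above):

```python
import numpy as np

def draw_after_seed():
    # fix the same seed value as used in reset_all_randomness
    np.random.seed(984797)
    return np.random.rand(3)

# two independent calls produce identical draws once the seed is fixed
a = draw_after_seed()
b = draw_after_seed()
print(np.array_equal(a, b))  # True
```

This is why each model below is estimated right after a fresh reset_all_randomness() call: every run starts from the same random state.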
X = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet1").values
y = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet2").values
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model0
# Initialising the ANN
model0 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model0.add(Dense(units = 128, activation = 'tanh', input_dim = 4, kernel_initializer=my_init))
# Adding 2 hidden layer
model0.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model0.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model0.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model0.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model0.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model0.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model0 = model0.predict(X_train)
y_test_pred_model0 = model0.predict(X_test)
# check what metrics are in fact available in history
history.history.keys()
dict_keys(['val_loss', 'val_mean_squared_error', 'val_mean_absolute_error', 'loss', 'mean_squared_error', 'mean_absolute_error'])
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model0[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model0, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model0, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [2.2463024] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [5.6396966] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [5.6486473] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [4.852657] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.9801836] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [5.761505] (expected [6])
MSE on train data is: 0.1629941761493683
MSE on test data is: 1.9077353477478027
# augment features by calculating absolute values and squares of original features
X_train = np.array([list(x) + list(np.abs(x)) + list(x**2) for x in X_train])
X_test = np.array([list(x) + list(np.abs(x)) + list(x**2) for x in X_test])
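To make the augmentation step concrete, here is a minimal sketch (using a made-up 2x4 input, not the actual dataset) of how each 4-feature row is expanded to 12 features by appending absolute values and squares:

```python
import numpy as np

# hypothetical 2 x 4 input, standing in for the scaled training matrix
X_demo = np.array([[1.0, -2.0, 0.5, -1.5],
                   [3.0, 0.0, -0.5, 2.0]])

# concatenate original values, absolute values, and squares, row by row
X_demo_aug = np.array([list(x) + list(np.abs(x)) + list(x**2) for x in X_demo])

print(X_demo_aug.shape)  # (2, 12): each 4-feature row becomes 12 features
```

These extra non-linear transforms give the network ready-made features it would otherwise have to learn, which is why input_dim changes from 4 to 12 below.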
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model1
# Initialising the ANN
model1 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model1.add(Dense(units = 128, activation = 'tanh', input_dim = 12, kernel_initializer=my_init))
# Adding 2 hidden layer
model1.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model1.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model1.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model1.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model1.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model1.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model1 = model1.predict(X_train)
y_test_pred_model1 = model1.predict(X_test)
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model1[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model1, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model1, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [2.5696845] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [5.0152197] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [4.4963903] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [5.004753] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.982211] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [6.158882] (expected [6])
MSE on train data is: 0.17548464238643646
MSE on test data is: 1.4240833520889282
# init experiment_results
experiment_results = []
# the experiment
for layer1_neurons in [4, 8, 16, 32]:
    for layer2_neurons in [4, 8, 16, 32]:
        for activation_function in ['tanh', 'relu']:
            for learning_rate in [0.01, 0.001]:
                for beta1 in [0.9]:
                    for beta2 in [0.99]:
                        # reset_all_randomness - for reproducibility
                        my_init = reset_all_randomness()
                        # model2
                        # Initialising the ANN
                        model2 = Sequential()
                        # Adding 1 hidden layer: the input layer and the first hidden layer
                        model2.add(Dense(units = layer1_neurons, activation = activation_function, input_dim = 12, kernel_initializer=my_init))
                        # Adding 2 hidden layer
                        model2.add(Dense(units = layer2_neurons, activation = activation_function, kernel_initializer=my_init))
                        # Adding output layer
                        model2.add(Dense(units = 1, kernel_initializer=my_init))
                        # Set up Optimizer
                        Optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1, beta2=beta2)
                        # Compiling the ANN
                        model2.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
                        # Fitting the ANN to the Train set, at the same time observing quality on Valid set
                        history = model2.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000, verbose=0)
                        # Generate prediction for both Train and Valid set
                        y_train_pred_model2 = model2.predict(X_train)
                        y_test_pred_model2 = model2.predict(X_test)
                        print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
                        print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
                        # create data you want to save for each processed NN
                        partial_results = \
                        {
                            'layer1_neurons': layer1_neurons,
                            'layer2_neurons': layer2_neurons,
                            'activation_function': activation_function,
                            'learning_rate': learning_rate,
                            'beta1': beta1,
                            'beta2': beta2,
                            'final_train_mean_squared_error': history.history['mean_squared_error'][-1],
                            'final_val_mean_squared_error': history.history['val_mean_squared_error'][-1],
                            'best_train_epoch': history.history['mean_squared_error'].index(min(history.history['mean_squared_error'])),
                            'best_train_mean_squared_error': np.min(history.history['mean_squared_error']),
                            'best_val_epoch': history.history['val_mean_squared_error'].index(min(history.history['val_mean_squared_error'])),
                            'best_val_mean_squared_error': np.min(history.history['val_mean_squared_error']),
                            # full per-epoch histories, needed later to identify not-overfitted epochs
                            'train_mean_squared_error_history': history.history['mean_squared_error'],
                            'val_mean_squared_error_history': history.history['val_mean_squared_error'],
                        }
                        experiment_results.append(
                            partial_results
                        )
# put experiment_results into DataFrame
experiment_results_df = pd.DataFrame(experiment_results)
# identifying models hopefully not too much overfitted to valid data at the end of estimation (after 1000 epochs) :
experiment_results_df['valid'] = experiment_results_df['final_val_mean_squared_error'] > experiment_results_df['final_train_mean_squared_error']
# display the best combinations of parameters for valid data, which seems not overfitted
experiment_results_df[experiment_results_df['valid']].sort_values(by=['final_val_mean_squared_error']).head()
layer1_neurons layer2_neurons activation_function learning_rate beta1 beta2 final_train_mean_squared_error final_val_mean_squared_error best_train_epoch best_train_mean_squared_error best_val_epoch best_val_mean_squared_error valid
26 8 16 relu 0.010 0.900 0.990 0.992 1.232 998 0.992 883 1.117 True
36 16 8 tanh 0.010 0.900 0.990 0.178 1.345 998 0.176 40 1.245 True
14 4 32 relu 0.010 0.900 0.990 1.320 1.378 980 1.300 98 0.937 True
2 4 4 relu 0.010 0.900 0.990 1.132 1.419 996 1.131 695 1.002 True
57 32 16 tanh 0.001 0.900 0.990 1.282 1.432 999 1.282 999 1.432 True
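The 'valid' flag used above can be sketched on a toy DataFrame (hypothetical numbers, not the experiment results): a run is flagged True only when its final validation error still exceeds its training error, i.e. the configuration has not been implicitly tuned into the validation set.

```python
import pandas as pd

# hypothetical final-epoch errors for three runs
df = pd.DataFrame({
    'final_train_mean_squared_error': [0.10, 1.20, 0.90],
    'final_val_mean_squared_error':   [2.50, 1.25, 0.80],
})

# keep a run only if validation error still exceeds training error
df['valid'] = df['final_val_mean_squared_error'] > df['final_train_mean_squared_error']

print(df['valid'].tolist())  # [True, True, False]
```

The third run looks suspicious precisely because it does better on validation than on training data, which for a small validation set is more likely luck than genuine generalization.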
# for each NN estimation identify a dictionary of epochs for which the NN was not overfitted towards valid data
# for each such epoch I store its number and the corresponding mean_squared_error on valid data
experiment_results_df['not_overfitted_epochs_on_valid'] = \
    experiment_results_df.apply(
        lambda row:
        {
            i: row['val_mean_squared_error_history'][i]
            for i in range(len(row['train_mean_squared_error_history']))
            if row['val_mean_squared_error_history'][i] > row['train_mean_squared_error_history'][i]
        },
        axis=1
    )
# based on the previously prepared dict, for each NN estimation I can identify:
# the best not-overfitted mse value on valid data and the corresponding best not-overfitted epoch on valid data
experiment_results_df['best_not_overfitted_mse_on_valid'] = \
    experiment_results_df['not_overfitted_epochs_on_valid'].apply(
        lambda x: np.min(list(x.values())) if len(list(x.values())) > 0 else np.NaN
    )
experiment_results_df['best_not_overfitted_epoch_on_valid'] = \
    experiment_results_df['not_overfitted_epochs_on_valid'].apply(
        lambda x: list(x.keys())[list(x.values()).index(np.min(list(x.values())))] if len(list(x.values())) > 0 else np.NaN
    )
# now I can sort all estimations according to the best not-overfitted mse on valid data overall, not only at the end of estimation
experiment_results_df.sort_values(by=['best_not_overfitted_mse_on_valid'])[[
    'layer1_neurons', 'layer2_neurons', 'activation_function', 'learning_rate', 'beta1', 'beta2',
    'best_not_overfitted_mse_on_valid', 'best_not_overfitted_epoch_on_valid'
]].head()
layer1_neurons layer2_neurons activation_function learning_rate beta1 beta2 best_not_overfitted_mse_on_valid best_not_overfitted_epoch_on_valid
26 8 16 relu 0.010 0.900 0.990 1.117 883.000
54 32 8 relu 0.010 0.900 0.990 1.141 717.000
50 32 4 relu 0.010 0.900 0.990 1.210 411.000
36 16 8 tanh 0.010 0.900 0.990 1.246 821.000
56 32 16 tanh 0.010 0.900 0.990 1.264 693.000
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model3
# Initialising the ANN
model3 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model3.add(Dense(units = 8, activation = 'relu', input_dim = 12, kernel_initializer=my_init))
# Adding 2 hidden layer
model3.add(Dense(units = 16, activation = 'relu', kernel_initializer=my_init))
# Adding output layer
model3.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.010, beta1=0.900, beta2=0.990)
# Compiling the ANN
model3.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model3.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 884)
# Generate prediction for both Train and Valid set
y_train_pred_model3 = model3.predict(X_train)
y_test_pred_model3 = model3.predict(X_test)
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model3[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model3, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model3, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [1.8813248] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [4.3430963] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [4.827326] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [4.6029215] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.8530324] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [4.9882255] (expected [6])
MSE on train data is: 1.088669776916504
MSE on test data is: 1.1166337728500366
def give_me_mse(true, prediction):
    """
    This function returns the MSE for two vectors: true and predicted values.
    """
    return np.mean((true - prediction) ** 2)
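A quick sanity check of give_me_mse on hand-computable vectors (toy numbers, not the dataset):

```python
import numpy as np

def give_me_mse(true, prediction):
    """Return the MSE for two vectors of true and predicted values."""
    return np.mean((true - prediction) ** 2)

# errors are [1, -1, 2] -> squared [1, 1, 4] -> mean = 2.0
print(give_me_mse(np.array([3.0, 4.0, 5.0]), np.array([2.0, 5.0, 3.0])))  # 2.0
```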
# as previously
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(np.ravel(y))
y_encoded = encoder.transform(np.ravel(y))
# convert integers to dummy variables (i.e. one hot encoded)
y_dummy = np_utils.to_categorical(y_encoded)
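For reference, the LabelEncoder + to_categorical step maps each class label to a consecutive integer and then to a one-hot row. A numpy-only sketch of the same idea, on toy labels rather than the real y:

```python
import numpy as np

# hypothetical labels, standing in for np.ravel(y)
labels = np.array([3, 5, 3, 6])

# encode class values as consecutive integers (what LabelEncoder does)
classes, y_encoded = np.unique(labels, return_inverse=True)

# convert integers to one-hot rows (what np_utils.to_categorical does)
y_dummy = np.eye(len(classes))[y_encoded]

print(y_dummy)  # rows select columns for classes 3, 5, 3, 6
```

Treating the problem as classification like this is what allows the softmax output layer and categorical cross-entropy loss below.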
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test, y_train_dummy, y_test_dummy = train_test_split(X, y, y_dummy, test_size = 0.08, random_state = 0)
# as previously
# model0
# Initialising the ANN
model0 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model0.add(Dense(units = 128, activation = 'tanh', input_dim = 4, kernel_initializer=my_init))
# Adding 2 hidden layer
model0.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model0.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model0.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model0.add(Dense(units = 7, activation = 'softmax', kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model0.compile(optimizer = Optimizer, loss = 'categorical_crossentropy', metrics=['accuracy','categorical_crossentropy','mse'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model0.fit(X_train, y_train_dummy, validation_data=(X_test, y_test_dummy), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model0 = model0.predict(X_train)
y_test_pred_model0 = model0.predict(X_test)
# find final prediction by taking class with highest probability
y_train_pred_model0 = np.array([[list(x).index(max(list(x))) + 1] for x in y_train_pred_model0])
y_test_pred_model0 = np.array([[list(x).index(max(list(x))) + 1] for x in y_test_pred_model0])
# check what metrics are in fact available in history
history.history.keys()
dict_keys(['val_loss', 'val_acc', 'val_categorical_crossentropy', 'val_mean_squared_error', 'loss', 'acc', 'categorical_crossentropy', 'mean_squared_error'])
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['categorical_crossentropy', 'val_categorical_crossentropy'])
plot_history(history, ['acc', 'val_acc'])
# look at model fit quality
plot_fit(pd.DataFrame(y_train), y_train_pred_model0, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model0, 'Fit on test data')
print('MSE on train data is: {}'.format(give_me_mse(y_train, y_train_pred_model0)))
print('MSE on test data is: {}'.format(give_me_mse(y_test, y_test_pred_model0)))
MSE on train data is: 0.0
MSE on test data is: 1.3333333333333333
Regarding "python - Neural network not giving expected output after training in Python", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58918390/