My neural network is not giving the expected output after training in Python. Is there an error in the code? Is there any way to reduce the mean squared error (MSE)?
I have tried training the network repeatedly (re-running the program), but instead of learning it keeps giving the same MSE and output.
Here is the data I used:
https://drive.google.com/open?id=1GLm87-5E_6YhUIPZ_CtQLV9F9wcGaTj2
Here is my code:
#load and evaluate a saved model
from numpy import loadtxt
from tensorflow.keras.models import load_model
# load model
model = load_model('ANNnew.h5')
# summarize model.
model.summary()
#Model starts
import numpy as np
import pandas as pd
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Importing the dataset
X = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet1").values
y = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet2").values
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Initialising the ANN
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(32, activation = 'tanh', input_dim = 4))
# Adding the second hidden layer
model.add(Dense(units = 18, activation = 'tanh'))
# Adding the third hidden layer
model.add(Dense(units = 32, activation = 'tanh'))
#model.add(Dense(1))
model.add(Dense(units = 1))
# Compiling the ANN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the ANN to the Training set
model.fit(X_train, y_train, batch_size = 100, epochs = 1000)
y_pred = model.predict(X_test)
for i in range(5):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y[i].tolist()))
plt.plot(y_test, color = 'red', label = 'Test data')
plt.plot(y_pred, color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
plt.show()
# save model and architecture to single file
model.save("ANNnew.h5")
print("Saved model to disk")
Best Answer
I noticed a small bug in your printing loop. Instead of:

for i in range(5):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y[i].tolist()))

use:

for i in range(len(y_test)):
    print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y_test[i].tolist()))
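One more detail worth noting (an observation beyond the original answer): `X[i]` still indexes the full dataset rather than the test split, so the printed inputs may not line up row for row with the predictions. A minimal sketch of fully aligned printing, assuming the fitted `sc` scaler from the question:

# hypothetical alignment fix: recover the unscaled test inputs and print them
# next to the matching predictions and true test labels
X_test_orig = sc.inverse_transform(X_test)
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X_test_orig[i].tolist(), y_pred[i], y_test[i].tolist()))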
# imports
import numpy as np
import pandas as pd
import os
import tensorflow as tf
import matplotlib.pyplot as plt
import random
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from tensorflow import set_random_seed
from tensorflow.keras.initializers import glorot_uniform
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from importlib import reload
# useful pandas display settings
pd.options.display.float_format = '{:.3f}'.format
# useful functions
def plot_history(history, metrics_to_plot):
    """
    Function plots history of selected metrics for fitted neural net.
    """
    # plot
    for metric in metrics_to_plot:
        plt.plot(history.history[metric])
    # name X axis informatively
    plt.xlabel('epoch')
    # name Y axis informatively
    plt.ylabel('metric')
    # add informative legend
    plt.legend(metrics_to_plot)
    # plot
    plt.show()
def plot_fit(y_true, y_pred, title='title'):
    """
    Function plots true values and predicted values, sorted in increasing order by true values.
    """
    # create one dataframe with true values and predicted values
    results = y_true.reset_index(drop=True).merge(pd.DataFrame(y_pred), left_index=True, right_index=True)
    # rename columns informatively
    results.columns = ['true', 'prediction']
    # sort for clarity of visualization
    results = results.sort_values(by=['true']).reset_index(drop=True)
    # plot true values vs predicted values
    results.plot()
    # add scatter on line plots
    plt.scatter(results.index, results.true, s=5)
    plt.scatter(results.index, results.prediction, s=5)
    # name X axis informatively
    plt.xlabel('obs sorted in ascending order with respect to true values')
    # add customizable title
    plt.title(title)
    # plot
    plt.show();
def reset_all_randomness():
    """
    Function assures reproducibility of NN estimation results.
    """
    # reloads
    reload(tf)
    reload(np)
    reload(random)
    # seeds - for reproducibility
    os.environ['PYTHONHASHSEED'] = str(984797)
    random.seed(984797)
    set_random_seed(984797)
    np.random.seed(984797)
    my_init = glorot_uniform(seed=984797)
    return my_init
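Note that `set_random_seed` and `glorot_uniform(seed=...)` used above are TensorFlow 1.x-era APIs. If you are on TensorFlow 2.x (an assumption, not part of the original answer), the equivalent seeding would look roughly like this:

# TF 2.x equivalents (sketch)
tf.random.set_seed(984797)                                  # replaces set_random_seed(984797)
my_init = tf.keras.initializers.GlorotUniform(seed=984797)  # replaces glorot_uniform(seed=984797)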
X = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet1").values
y = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet2").values
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model0
# Initialising the ANN
model0 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model0.add(Dense(units = 128, activation = 'tanh', input_dim = 4, kernel_initializer=my_init))
# Adding 2 hidden layer
model0.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model0.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model0.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model0.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model0.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model0.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model0 = model0.predict(X_train)
y_test_pred_model0 = model0.predict(X_test)
# check what metrics are in fact available in history
history.history.keys()
dict_keys(['val_loss', 'val_mean_squared_error', 'val_mean_absolute_error', 'loss', 'mean_squared_error', 'mean_absolute_error'])
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model0[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model0, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model0, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [2.2463024] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [5.6396966] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [5.6486473] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [4.852657] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.9801836] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [5.761505] (expected [6])
MSE on train data is: 0.1629941761493683
MSE on test data is: 1.9077353477478027
# augment features by calculating absolute values and squares of original features
X_train = np.array([list(x) + list(np.abs(x)) + list(x**2) for x in X_train])
X_test = np.array([list(x) + list(np.abs(x)) + list(x**2) for x in X_test])
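As a side note, the same augmentation can be written in vectorized NumPy form; this is equivalent to the row-wise list comprehensions above:

# equivalent vectorized form of the feature augmentation
X_train = np.hstack([X_train, np.abs(X_train), X_train**2])
X_test = np.hstack([X_test, np.abs(X_test), X_test**2])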
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model1
# Initialising the ANN
model1 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model1.add(Dense(units = 128, activation = 'tanh', input_dim = 12, kernel_initializer=my_init))
# Adding 2 hidden layer
model1.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model1.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model1.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model1.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model1.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model1.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model1 = model1.predict(X_train)
y_test_pred_model1 = model1.predict(X_test)
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model1[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model1, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model1, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [2.5696845] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [5.0152197] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [4.4963903] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [5.004753] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.982211] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [6.158882] (expected [6])
MSE on train data is: 0.17548464238643646
MSE on test data is: 1.4240833520889282
# init experiment_results
experiment_results = []
# the experiment
for layer1_neurons in [4, 8, 16, 32]:
    for layer2_neurons in [4, 8, 16, 32]:
        for activation_function in ['tanh', 'relu']:
            for learning_rate in [0.01, 0.001]:
                for beta1 in [0.9]:
                    for beta2 in [0.99]:
                        # reset_all_randomness - for reproducibility
                        my_init = reset_all_randomness()
                        # model2
                        # Initialising the ANN
                        model2 = Sequential()
                        # Adding 1 hidden layer: the input layer and the first hidden layer
                        model2.add(Dense(units = layer1_neurons, activation = activation_function, input_dim = 12, kernel_initializer=my_init))
                        # Adding 2 hidden layer
                        model2.add(Dense(units = layer2_neurons, activation = activation_function, kernel_initializer=my_init))
                        # Adding output layer
                        model2.add(Dense(units = 1, kernel_initializer=my_init))
                        # Set up Optimizer
                        Optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1, beta2=beta2)
                        # Compiling the ANN
                        model2.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
                        # Fitting the ANN to the Train set, at the same time observing quality on Valid set
                        history = model2.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 1000, verbose=0)
                        # Generate prediction for both Train and Valid set
                        y_train_pred_model2 = model2.predict(X_train)
                        y_test_pred_model2 = model2.predict(X_test)
                        print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
                        print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
                        # create data you want to save for each processed NN
                        partial_results = {
                            'layer1_neurons': layer1_neurons,
                            'layer2_neurons': layer2_neurons,
                            'activation_function': activation_function,
                            'learning_rate': learning_rate,
                            'beta1': beta1,
                            'beta2': beta2,
                            'final_train_mean_squared_error': history.history['mean_squared_error'][-1],
                            'final_val_mean_squared_error': history.history['val_mean_squared_error'][-1],
                            'best_train_epoch': history.history['mean_squared_error'].index(min(history.history['mean_squared_error'])),
                            'best_train_mean_squared_error': np.min(history.history['mean_squared_error']),
                            'best_val_epoch': history.history['val_mean_squared_error'].index(min(history.history['val_mean_squared_error'])),
                            'best_val_mean_squared_error': np.min(history.history['val_mean_squared_error']),
                            # full epoch histories - needed by the 'not_overfitted_epochs_on_valid' analysis below
                            'train_mean_squared_error_history': history.history['mean_squared_error'],
                            'val_mean_squared_error_history': history.history['val_mean_squared_error'],
                        }
                        experiment_results.append(partial_results)
# put experiment_results into DataFrame
experiment_results_df = pd.DataFrame(experiment_results)
# identify models that are hopefully not overfitted to the valid data at the end of estimation (after 1000 epochs):
experiment_results_df['valid'] = experiment_results_df['final_val_mean_squared_error'] > experiment_results_df['final_train_mean_squared_error']
# display the best parameter combinations for valid data, which do not seem overfitted
experiment_results_df[experiment_results_df['valid']].sort_values(by=['final_val_mean_squared_error']).head()
layer1_neurons layer2_neurons activation_function learning_rate beta1 beta2 final_train_mean_squared_error final_val_mean_squared_error best_train_epoch best_train_mean_squared_error best_val_epoch best_val_mean_squared_error valid
26 8 16 relu 0.010 0.900 0.990 0.992 1.232 998 0.992 883 1.117 True
36 16 8 tanh 0.010 0.900 0.990 0.178 1.345 998 0.176 40 1.245 True
14 4 32 relu 0.010 0.900 0.990 1.320 1.378 980 1.300 98 0.937 True
2 4 4 relu 0.010 0.900 0.990 1.132 1.419 996 1.131 695 1.002 True
57 32 16 tanh 0.001 0.900 0.990 1.282 1.432 999 1.282 999 1.432 True
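As an aside, the six nested loops of the experiment can be flattened with `itertools.product`; a sketch of the same 64-combination grid iteration:

from itertools import product

# same grid as the nested loops above
param_grid = product([4, 8, 16, 32], [4, 8, 16, 32], ['tanh', 'relu'], [0.01, 0.001], [0.9], [0.99])
for layer1_neurons, layer2_neurons, activation_function, learning_rate, beta1, beta2 in param_grid:
    pass  # same model2 estimation body as above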
# for each NN estimation identify dictionary of epochs for which NN was not overfitted towards valid data
# for each such epoch I store its number and corresponding mean_squared_error on valid data
experiment_results_df['not_overfitted_epochs_on_valid'] = \
    experiment_results_df.apply(
        lambda row:
        {
            i: row['val_mean_squared_error_history'][i]
            for i in range(len(row['train_mean_squared_error_history']))
            if row['val_mean_squared_error_history'][i] > row['train_mean_squared_error_history'][i]
        },
        axis=1
    )
# based on the previously prepared dict, for each NN estimation I can identify:
# the best not-overfitted mse value on valid data and the corresponding best not-overfitted epoch on valid data
experiment_results_df['best_not_overfitted_mse_on_valid'] = \
    experiment_results_df['not_overfitted_epochs_on_valid'].apply(
        lambda x: np.min(list(x.values())) if len(list(x.values())) > 0 else np.NaN
    )
experiment_results_df['best_not_overfitted_epoch_on_valid'] = \
    experiment_results_df['not_overfitted_epochs_on_valid'].apply(
        lambda x: list(x.keys())[list(x.values()).index(np.min(list(x.values())))] if len(list(x.values())) > 0 else np.NaN
    )
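The epoch lookup above can be written a bit more directly: for a non-empty dict `x`, `min(x, key=x.get)` returns the key holding the smallest value, which is exactly the best not-overfitted epoch (an equivalent rewrite, not from the original answer):

# equivalent, more direct arg-min over the dict (sketch)
experiment_results_df['best_not_overfitted_epoch_on_valid'] = \
    experiment_results_df['not_overfitted_epochs_on_valid'].apply(
        lambda x: min(x, key=x.get) if len(x) > 0 else np.NaN
    )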
# now I can sort all estimations according to the best not-overfitted mse on valid data overall, not only at the end of estimation
experiment_results_df.sort_values(by=['best_not_overfitted_mse_on_valid'])[[
    'layer1_neurons', 'layer2_neurons', 'activation_function', 'learning_rate', 'beta1', 'beta2',
    'best_not_overfitted_mse_on_valid', 'best_not_overfitted_epoch_on_valid'
]].head()
layer1_neurons layer2_neurons activation_function learning_rate beta1 beta2 best_not_overfitted_mse_on_valid best_not_overfitted_epoch_on_valid
26 8 16 relu 0.010 0.900 0.990 1.117 883.000
54 32 8 relu 0.010 0.900 0.990 1.141 717.000
50 32 4 relu 0.010 0.900 0.990 1.210 411.000
36 16 8 tanh 0.010 0.900 0.990 1.246 821.000
56 32 16 tanh 0.010 0.900 0.990 1.264 693.000
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# model3
# Initialising the ANN
model3 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model3.add(Dense(units = 8, activation = 'relu', input_dim = 12, kernel_initializer=my_init))
# Adding 2 hidden layer
model3.add(Dense(units = 16, activation = 'relu', kernel_initializer=my_init))
# Adding output layer
model3.add(Dense(units = 1, kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.010, beta1=0.900, beta2=0.990)
# Compiling the ANN
model3.compile(optimizer = Optimizer, loss = 'mean_squared_error', metrics=['mse','mae'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model3.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size = 100, epochs = 884)
# Generate prediction for both Train and Valid set
y_train_pred_model3 = model3.predict(X_train)
y_test_pred_model3 = model3.predict(X_test)
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['mean_absolute_error', 'val_mean_absolute_error'])
# look at model fit quality
for i in range(len(y_test)):
    print('%s => %s (expected %s)' % (X[i].tolist(), y_test_pred_model3[i], y_test[i]))
plot_fit(pd.DataFrame(y_train), y_train_pred_model3, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model3, 'Fit on test data')
print('MSE on train data is: {}'.format(history.history['mean_squared_error'][-1]))
print('MSE on test data is: {}'.format(history.history['val_mean_squared_error'][-1]))
[1000.0, 25.0, 2235.3, 1.0] => [1.8813248] (expected [3])
[1000.0, 30.0, 2190.1, 1.0] => [4.3430963] (expected [3])
[1000.0, 35.0, 2144.7, 1.0] => [4.827326] (expected [5])
[1000.0, 40.0, 2098.9, 1.0] => [4.6029215] (expected [3])
[1000.0, 45.0, 2052.9, 1.0] => [3.8530324] (expected [4])
[1000.0, 25.0, 2235.3, 1.0] => [4.9882255] (expected [6])
MSE on train data is: 1.088669776916504
MSE on test data is: 1.1166337728500366
def give_me_mse(true, prediction):
    """
    This function returns mse for 2 vectors: true and predicted values.
    """
    return np.mean((true - prediction)**2)
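A quick sanity check of the helper on toy vectors (values invented for illustration):

# toy example: ((3-2.2)^2 + (4-4.2)^2 + (5-5.2)^2) / 3 = 0.24
give_me_mse(np.array([3.0, 4.0, 5.0]), np.array([2.2, 4.2, 5.2]))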
# as previously
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(np.ravel(y))
y_encoded = encoder.transform(np.ravel(y))
# convert integers to dummy variables (i.e. one hot encoded)
y_dummy = np_utils.to_categorical(y_encoded)
# reset_all_randomness - for reproducibility
my_init = reset_all_randomness()
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test, y_train_dummy, y_test_dummy = train_test_split(X, y, y_dummy, test_size = 0.08, random_state = 0)
# as previously
# model0
# Initialising the ANN
model0 = Sequential()
# Adding 1 hidden layer: the input layer and the first hidden layer
model0.add(Dense(units = 128, activation = 'tanh', input_dim = 4, kernel_initializer=my_init))
# Adding 2 hidden layer
model0.add(Dense(units = 64, activation = 'tanh', kernel_initializer=my_init))
# Adding 3 hidden layer
model0.add(Dense(units = 32, activation = 'tanh', kernel_initializer=my_init))
# Adding 4 hidden layer
model0.add(Dense(units = 16, activation = 'tanh', kernel_initializer=my_init))
# Adding output layer
model0.add(Dense(units = 7, activation = 'softmax', kernel_initializer=my_init))
# Set up Optimizer
Optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99)
# Compiling the ANN
model0.compile(optimizer = Optimizer, loss = 'categorical_crossentropy', metrics=['accuracy','categorical_crossentropy','mse'])
# Fitting the ANN to the Train set, at the same time observing quality on Valid set
history = model0.fit(X_train, y_train_dummy, validation_data=(X_test, y_test_dummy), batch_size = 100, epochs = 1000)
# Generate prediction for both Train and Valid set
y_train_pred_model0 = model0.predict(X_train)
y_test_pred_model0 = model0.predict(X_test)
# find final prediction by taking class with highest probability
y_train_pred_model0 = np.array([[list(x).index(max(list(x))) + 1] for x in y_train_pred_model0])
y_test_pred_model0 = np.array([[list(x).index(max(list(x))) + 1] for x in y_test_pred_model0])
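Equivalently, `np.argmax` along axis 1 picks the highest-probability class in one call; a sketch that could replace the two list-comprehension lines above (the `+ 1` keeps the same 1-based class labels):

# equivalent argmax form (sketch)
y_train_pred_model0 = np.argmax(y_train_pred_model0, axis=1).reshape(-1, 1) + 1
y_test_pred_model0 = np.argmax(y_test_pred_model0, axis=1).reshape(-1, 1) + 1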
# check what metrics are in fact available in history
history.history.keys()
dict_keys(['val_loss', 'val_acc', 'val_categorical_crossentropy', 'val_mean_squared_error', 'loss', 'acc', 'categorical_crossentropy', 'mean_squared_error'])
# look at model fitting history
plot_history(history, ['mean_squared_error', 'val_mean_squared_error'])
plot_history(history, ['categorical_crossentropy', 'val_categorical_crossentropy'])
plot_history(history, ['acc', 'val_acc'])
# look at model fit quality
plot_fit(pd.DataFrame(y_train), y_train_pred_model0, 'Fit on train data')
plot_fit(pd.DataFrame(y_test), y_test_pred_model0, 'Fit on test data')
print('MSE on train data is: {}'.format(give_me_mse(y_train, y_train_pred_model0)))
print('MSE on test data is: {}'.format(give_me_mse(y_test, y_test_pred_model0)))
MSE on train data is: 0.0
MSE on test data is: 1.3333333333333333
A similar question on this topic, "python - Neural network not giving expected output after training in Python", can be found on Stack Overflow: https://stackoverflow.com/questions/58918390/