I set up a TPOT regressor to predict stock prices on a dataset (after some feature engineering), and I ran into a problem whenever an XGBoost regressor was involved: I would get the error message:
feature_names mismatch:
...followed by a list of my dataset's column names. A solution was proposed for this issue on GitHub: convert the X feature and Y label dataframes to NumPy arrays during train_test_split. That's what I did, but now I get an error:
X_train, X_test, Y_train, Y_test = train_test_split(X.values, Y.values, test_size = test_size, random_state = seed)
print('[INFO] Printing the shapes of the training/testing feature/label sets...')
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
[INFO] Printing the shapes of the training/testing feature/label sets...
(1374, 68)
(459, 68)
(1374,)
(459,)
Best pipeline: ExtraTreesRegressor(DecisionTreeRegressor(input_matrix, max_depth=1, min_samples_leaf=9, min_samples_split=11), bootstrap=False, max_features=0.8500000000000001, min_samples_leaf=1, min_samples_split=9, n_estimators=100)
Traceback (most recent call last):
  File "main2.py", line 656, in <module>
    predictions = best_model.predict(X_test)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\tpot\base.py", line 921, in predict
    return self.fitted_pipeline_.predict(features)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\utils\metaestimators.py", line 116, in <lambda>
    out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\pipeline.py", line 422, in predict
    return self.steps[-1][-1].predict(Xt, **predict_params)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\ensemble\forest.py", line 693, in predict
    X = self._validate_X_predict(X)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\ensemble\forest.py", line 359, in _validate_X_predict
    return self.estimators_[0]._validate_X_predict(X, check_input=True)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\tree\tree.py", line 402, in _validate_X_predict
    % (self.n_features_, n_features))
ValueError: Number of features of the model must match the input. Model n_features is 68 and input n_features is 69
The GitHub issue is now closed, but I'm hoping someone here can explain what I'm missing. As you can see, there are 68 feature columns and 1 label column. You'll also notice that this time the final model didn't even use XGBoost, but I want whatever model it produces to work with the .predict() function.
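A quick way to localize this kind of mismatch is to compare the width of whatever was passed to fit() with the width of whatever is handed to .predict(). The sketch below is a hypothetical diagnostic, not part of the original post; it reuses the variable names X, X_train, and features from the updated code further down:

# Hypothetical diagnostic: compare the width the model was fit on with
# the width of the data being passed to .predict(). DataFrame.shape[1]
# counts columns only, never the index, so an extra feature usually
# means an extra column slipped in (e.g. a reset Date index).
print('fit width     :', X_train.shape[1])   # NumPy array used in fit()
print('predict width :', features.shape[1])  # DataFrame handed to .predict()
print('extra columns :', set(features.columns) - set(X.columns))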
Updated code
OK, I'm really stuck here. I've posted working code below that reproduces the error. Let me know what you see. Use the ticker CLVS as input. I've added shape printouts for the dataframes and arrays throughout the whole process, and they still say the shapes are fine, so what am I not seeing? You'll need Pandas 0.23 installed (yes, an older version), along with TPOT and Dask. Thanks:
def main():

    # 1. Input a stock ticker
    ticker_input = input('Which stock ticker would you like to predict?')  # Start with CLVS for testing
    print('Getting the historical data for: ', ticker_input)

    # 2. Download the historical daily data
    # Import dependencies
    from datetime import datetime
    from pandas_datareader import data as web
    import pandas as pd
    pd.options.display.float_format = '{:,.2f}'.format
    import seaborn as sns
    import matplotlib.pyplot as plt
    import random
    import os
    import numpy as np
    import time

    # Downloading historical data as dataframe
    ex = 'yahoo'
    start = datetime(2000, 1, 1)
    end = datetime.now()
    dataset = web.DataReader(ticker_input, ex, start, end)  #.reset_index()

    # 3. Construct the dataframe from the historical data
    # Only use the Adj Close, and use the open price
    # of the current day. Then shift all the other
    # data 1 day to make the dataset include the
    # previous day's values for each.
    # (This is because on the trading day, we won't know what the
    # High or Low or Close or Volume is, but we would
    # know the Open.)
    dataset = dataset.drop(['Close'], axis=1)
    dataset['PrevOpen'] = dataset['Open'].shift(1)
    dataset['PrevHigh'] = dataset['High'].shift(1)
    dataset['PrevLow'] = dataset['Low'].shift(1)
    dataset['PrevAdjClose'] = dataset['Adj Close'].shift(1)
    dataset['PrevVol'] = dataset['Volume'].shift(1)
    dataset = dataset.drop(['High'], axis=1)
    dataset = dataset.drop(['Low'], axis=1)
    dataset = dataset.drop(['Volume'], axis=1)

    # Add in moving averages based on Opening prices
    dataset['9MA'] = dataset['Open'].rolling(window=9).mean()
    dataset['20MA'] = dataset['Open'].rolling(window=20).mean()

    # Get which industry the stock is in to get the industry performance data
    from bs4 import BeautifulSoup
    import requests
    headers = requests.utils.default_headers()
    headers['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'

    # Get the industry name of the stock
    url = 'https://finance.yahoo.com/quote/' + ticker_input + '/profile'
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    table = soup.find('p', {'class': 'D(ib) Va(t)'})
    industry = table.findAll('span')
    indust = industry[3].text
    print(indust)
    print('Getting Industry ETF historical data...')

    # Then get historical data for that industry's ETF
    if indust == "Biotechnology":
        etf_ticker = "IBB"
    elif indust == "Specialty Retail":
        etf_ticker = "XRT"
    elif indust == "Oil & Gas E&P":
        etf_ticker = "XOP"
    ex = 'yahoo'
    etf_df = web.DataReader(etf_ticker, ex, start, end)
    dataset['PrevIndOpen'] = etf_df['Open'].shift(1)
    dataset['PrevIndHigh'] = etf_df['High'].shift(1)
    dataset['PrevIndLow'] = etf_df['Low'].shift(1)
    dataset['PrevIndClose'] = etf_df['Adj Close'].shift(1)
    dataset['PrevIndVol'] = etf_df['Volume'].shift(1)

    # Reshape the dataframe to put Adj Close at the far right
    # so when we export the predictions dataset, the predictions
    # column will be right next to it for easier analysis
    dataset = dataset[['Open','9MA','20MA','PrevOpen','PrevHigh','PrevLow','PrevAdjClose','PrevVol','PrevIndOpen','PrevIndHigh','PrevIndLow','PrevIndClose','PrevIndVol','Adj Close']]

    # Disable the Future Warnings that repeat "needlessly" (for now)
    import warnings
    warnings.simplefilter(action='ignore', category=FutureWarning)
    warnings.filterwarnings("ignore")

    # 5. Explore the initial dataset
    # Show the shape of the dataset
    print("[INFO] features shape : {}".format(dataset.shape))
    # Print the feature names
    print("[INFO] dataset names : {}".format(dataset.columns))

    # Convert the dataframe into a Pandas dataframe and print the first 5 rows
    df = pd.DataFrame(dataset)
    print("[INFO] df type : {}".format(type(df)))
    print("[INFO] df shape: {}".format(df.shape))
    print(df.head())

    # Specify the column names and print
    df.columns = dataset.columns
    #print('[INFO] df shape with features:')
    #print(df.head())
    # This prints the same as above

    # Find any columns with missing values? If you find them, you either have to:
    # 1. Replace the missing value with a large negative number (e.g. -999).
    # 2. Replace the missing value with the mean of the column.
    # 3. Replace the missing value with the median of the column.
    # Because of our 1 day shift, the first row will have empty values,
    # so we'll drop it, as one day won't make much difference to our entire model
    print('[INFO] Checking for any columns with no values...')
    df = df.dropna(how='any')
    print(pd.isnull(df).any())

    # Ensure numeric datatypes of the dataframe.
    # If a column has a different datatype such as string or character,
    # we need to map that column to a numeric datatype such as integer
    # or float. For this dataset, the Date index column is one.
    print('[INFO] Feature types:')
    print(df.dtypes)

    # Print a statistical summary of the dataset for reference
    print('[INFO] Print a statistical summary of dataset:')
    print(df.describe())

    # # Reset the index column for FeatureTools to use Date as the index, then it'll revert it back after feature stuff is done
    # df = df.reset_index()

    # This is not a good way to drop rows here, because if there are any
    # nan values in the middle of the dataset, those will get lost too.
    # Need to work with this
    df = df.dropna()
    print(df)

    # 4. Hold out a prediction dataset to predict on later
    prediction_df = df.tail(90).copy()
    df = df.iloc[:-90, :].copy()  # subtracting 90 rows/days from the dataset to use as the predictions dataset later

    # 7. Split the dataset into features (X) and target (Y)
    # Split into features (x) and target (y) and print the shapes of them
    X = df.drop("Adj Close", axis=1)
    Y = df["Adj Close"]
    print('Shape of features: ', X.shape)
    print('Shape of target: ', Y.shape)

    # Standardize the data. Commenting this out until you can figure out how to
    # unscale the prediction dataset for review
    #from sklearn.preprocessing import StandardScaler, MinMaxScaler
    #scaler = MinMaxScaler().fit(X)
    #scaled_X = scaler.transform(X)

    print('Printing X and Y shape :')
    print(X.shape)
    print(Y.shape)

    # 8. Split dataset into training and validation data
    # Split the data into training and testing data and print their shapes
    from sklearn.model_selection import train_test_split
    seed = 9
    test_size = 0.25
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
    print('[INFO] Printing the shapes of the training/testing feature/label sets...')
    print(X_train.shape)
    print(X_test.shape)
    print(Y_train.shape)
    print(Y_test.shape)
    X_train = X_train.values
    X_test = X_test.values
    Y_train = Y_train.values
    Y_test = Y_test.values
    print('[INFO] Printing the arrays of the training/testing feature/label sets...')
    print(X_train.shape)
    print(X_test.shape)
    print(Y_train.shape)
    print(Y_test.shape)

    # 9. Start a TPOT Auto Regression to find the best Regression model and export feature importances
    from sklearn.metrics import mean_squared_error, r2_score, explained_variance_score
    from tpot import TPOTRegressor
    import os

    # Create a custom config dictionary for TPOT to use.
    # I've made this list full of Regressors that use the
    # .feature_importances_ attribute. How to implement XGBoost
    # into the plotting of feature importances below? IF XGBOOST is
    # present in the final model, then plot one way, ELSE, plot the
    # way it is now?
    tpot_config = {
        'sklearn.ensemble.ExtraTreesRegressor': {
            'n_estimators': [100],
            'max_features': np.arange(0.05, 1.01, 0.05),
            'min_samples_split': range(2, 21),
            'min_samples_leaf': range(1, 21),
            'bootstrap': [True, False]
        },
        'sklearn.tree.DecisionTreeRegressor': {
            'max_depth': range(1, 11),
            'min_samples_split': range(2, 21),
            'min_samples_leaf': range(1, 21)
        },
        'sklearn.ensemble.RandomForestRegressor': {
            'n_estimators': [100],
            'max_features': np.arange(0.05, 1.01, 0.05),
            'min_samples_split': range(2, 21),
            'min_samples_leaf': range(1, 21),
            'bootstrap': [True, False]
        },

        # Preprocessors
        'sklearn.preprocessing.Binarizer': {
            'threshold': np.arange(0.0, 1.01, 0.05)
        },
        'sklearn.decomposition.FastICA': {
            'tol': np.arange(0.0, 1.01, 0.05)
        },
        'sklearn.cluster.FeatureAgglomeration': {
            'linkage': ['ward', 'complete', 'average'],
            'affinity': ['euclidean', 'l1', 'l2', 'manhattan', 'cosine']
        },
        'sklearn.preprocessing.MaxAbsScaler': {
        },
        'sklearn.preprocessing.MinMaxScaler': {
        },
        'sklearn.preprocessing.Normalizer': {
            'norm': ['l1', 'l2', 'max']
        },
        'sklearn.kernel_approximation.Nystroem': {
            'kernel': ['rbf', 'cosine', 'chi2', 'laplacian', 'polynomial', 'poly', 'linear', 'additive_chi2', 'sigmoid'],
            'gamma': np.arange(0.0, 1.01, 0.05),
            'n_components': range(1, 11)
        },
        'sklearn.decomposition.PCA': {
            'svd_solver': ['randomized'],
            'iterated_power': range(1, 11)
        },
        'sklearn.preprocessing.PolynomialFeatures': {
            'degree': [2],
            'include_bias': [False],
            'interaction_only': [False]
        },
        'sklearn.kernel_approximation.RBFSampler': {
            'gamma': np.arange(0.0, 1.01, 0.05)
        },
        'sklearn.preprocessing.RobustScaler': {
        },
        'sklearn.preprocessing.StandardScaler': {
        },
        'tpot.builtins.ZeroCount': {
        },
        'tpot.builtins.OneHotEncoder': {
            'minimum_fraction': [0.05, 0.1, 0.15, 0.2, 0.25],
            'sparse': [False],
            'threshold': [10]
        },

        # Selectors
        'sklearn.feature_selection.SelectFwe': {
            'alpha': np.arange(0, 0.05, 0.001),
            'score_func': {
                'sklearn.feature_selection.f_regression': None
            }
        },
        'sklearn.feature_selection.SelectPercentile': {
            'percentile': range(1, 100),
            'score_func': {
                'sklearn.feature_selection.f_regression': None
            }
        },
        'sklearn.feature_selection.VarianceThreshold': {
            'threshold': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.2]
        },
        'sklearn.feature_selection.SelectFromModel': {
            'threshold': np.arange(0, 1.01, 0.05),
            'estimator': {
                'sklearn.ensemble.ExtraTreesRegressor': {
                    'n_estimators': [100],
                    'max_features': np.arange(0.05, 1.01, 0.05)
                }
            }
        }
    }

    # Cross Validation folds to run
    folds = 10

    # Start the TPOT regression
    best_model = TPOTRegressor(use_dask=True, n_jobs=-1, config_dict=tpot_config, cv=folds,
                               generations=5, population_size=20, verbosity=2, random_state=seed)  #memory='./PipelineCache', memory='auto',
    best_model.fit(X_train, Y_train)

    # Export the TPOT pipeline if you want to use it for anything later
    if os.path.exists('./Exported Pipelines'):
        pass
    else:
        os.mkdir('./Exported Pipelines')
    best_model.export('./Exported Pipelines/' + ticker_input + '-prediction-pipeline.py')

    # Extract what the best pipeline was and fit it to the training set
    # to get an idea of the most important features used by the model
    exctracted_best_model = best_model.fitted_pipeline_.steps[-1][1]

    # Train the `exctracted_best_model` using the training/validation set.
    # You need to use the whole dataset in order to get feature importance for all the
    # features in your dataset.
    exctracted_best_model.fit(X_train, Y_train)

    # Plot the model's feature importance and save the plot for later
    feature_importance = exctracted_best_model.feature_importances_
    feature_importance = 100.0 * (feature_importance / feature_importance.max())
    sorted_idx = np.argsort(feature_importance)
    pos = np.arange(sorted_idx.shape[0]) + .5
    plt.barh(pos, feature_importance[sorted_idx], align='center')
    plt.yticks(pos, df.columns[sorted_idx])
    plt.xlabel('Relative Importance')
    plt.title('Variable Importance')
    plt.savefig("feature_importance.png")
    plt.clf()
    plt.close()

    print(X_test.shape)

    # 10. See the stats of the validation predictions from the tuned model and export more plots
    # Make predictions using the tuned model and display error metrics
    # R2 and Explained Variance, best is 1
    predictions = best_model.predict(X_test)
    print('=============================')
    print("TPOT's final score on testing dataset is : ", best_model.score(X_test, Y_test))
    print('=============================')
    print("[INFO] MSE on test set : {}".format(round(mean_squared_error(Y_test, predictions), 3)))
    print('[INFO] R2 Score on test set : {}'.format(round(r2_score(Y_test, predictions), 3)))
    print('[INFO] Explained Variance Score on test set : {}'.format(round(explained_variance_score(Y_test, predictions), 3)))

    # Plot predictions against Y_test
    x_axis = np.array(range(0, predictions.shape[0]))
    plt.plot(x_axis, predictions, linestyle="--", marker="o", alpha=0.7, color='r', label="predictions")
    plt.plot(x_axis, Y_test, linestyle="--", marker="o", alpha=0.7, color='g', label="Y_test")
    plt.xlabel('Row number')
    plt.ylabel('PRICE')
    plt.title('Predictions vs Y_test')
    plt.legend(loc='lower right')
    plt.savefig("predictions_vs_ytest.png")
    plt.clf()
    plt.close()

    # 11. Use the model on the held-out prediction dataset
    # Now, run the model on the prediction dataset
    features = prediction_df.drop(['Adj Close'], axis=1)
    labels = prediction_df['Adj Close']

    # Fit the model to the prediction_df and predict the labels
    #tpot.fit(features, labels)
    results = best_model.predict(features)
    predictions_list = []
    for preds in results:
        predictions_list.append(preds)
    prediction_df['Predictions'] = predictions_list
    prediction_df.to_csv('Final Predictions Performance.csv', index=True)
    print('============================')
    print("[INFO] MSE on prediction set : {}".format(round(mean_squared_error(labels, results), 3)))
    print('[INFO] R2 Score on prediction set : {}'.format(round(r2_score(labels, results), 3)))
    print('[INFO] Explained Variance Score on prediction set : {}'.format(round(explained_variance_score(labels, results), 3)))

    # 12. Review the exported .csv file of the predictions, and review all your plots
    print('DONE!')

if __name__ == "__main__":
    main()
Best answer
It looks like I've found a fix. I've since run several models using XGBRegressor and random decision trees, and it seems to work fine.
Just keep the 'X_train=X_train.values' and 'X_test=X_test.values' lines in, but leave the Y sets alone as dataframes, because I got an error when I converted those two as well. So I'm leaving it that way for now.
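In other words, the split that worked looks roughly like this (a minimal sketch reusing the variable names from the script above):

from sklearn.model_selection import train_test_split

# Split as before...
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=9)

# ...then convert only the feature sets to NumPy arrays.
X_train = X_train.values
X_test = X_test.values
# Y_train and Y_test are left as pandas Series, since converting them
# with .values as well is what triggered the error for the poster.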
Regarding "python - Converting a Pandas DF to a Numpy Array gives a # of features error when trying to predict?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57405141/