I've run into a problem. My task is to build three classifiers (two "out of the box", one "optimized") to predict sentiment using sklearn.
The instructions are:
Steps 1-3 are no problem and, frankly, work well. The problem is with using model.predict(). I am using sklearn's TfidfVectorizer, which builds feature vectors from text. My problem is that the feature vectors I create for the training set are different from the ones created for the test set, because the text supplied is different.
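To make the mismatch concrete, here is a minimal sketch (toy texts of my own, not taken from the dataset) of why two independently fitted TfidfVectorizer instances produce incompatible feature spaces:
from sklearn.feature_extraction.text import TfidfVectorizer
# Hypothetical toy corpora, for illustration only
train_texts = ["great food great service", "terrible food"]
test_texts = ["decent food but slow service today"]
train_vec = TfidfVectorizer().fit(train_texts)
test_vec = TfidfVectorizer().fit(test_texts)
# The two vectorizers learn different vocabularies, so their matrices have
# different widths, and a model trained on one cannot score the other.
print(len(train_vec.vocabulary_), len(test_vec.vocabulary_))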
Here is a sample from the train.tsv file...
4|z8DDztUxuIoHYHddDL9zQ|So let me set the scene first, My church social group took a trip here last saturday. We are not your mothers church. The churhc is Community Church of Hope, We are the valleys largest GLBT church so when we desended upon Organ stop Pizza, in LDS land you know we look a little out of place. We had about 50 people from our church come and boy did we have fun. There was a baptist church a couple rows down from us who didn't see it coming. Now we aren't a bunch of flamers frolicking around or anything but we do tend to get a little loud and generally have a great time. I did recognized some of the music so I was able to sing along with those. This is a great place to take anyone over 50. I do think they might be washing dirtymob money or something since the business is cash only.........which I think caught a lot of people off guard including me. The show starts at 530 so dont be late !!!!!!
2|BIeDBg4MrEd1NwWRlFHLQQ|Decent but terribly inconsistent food. I've had some great dishes and some terrible ones, I love chaat and 3 out of 4 times it was great, but once it was just a fried greasy mess (in a bad way, not in the good way it usually is.) Once the matar paneer was great, once it was oversalted and the peas were just plain bad. I don't know how they do it, but it's a coinflip between good food and an oversalted overcooked bowl. Either way, portions are generous.
4|NJHPiW30SKhItD5E2jqpHw|Looks aren't everything....... This little divito looks a little scary looking, but like I've said before "you can't judge a book by it's cover". Not necessarily the kind of place you will take your date (unless she's blind and hungry), but man oh man is the food ever good! We have ordered breakfast, lunch, & dinner, and it is all fantastico. They make home-made corn tortillas and several salsas. The breakfast burritos are out of this world and cost about the same as a McDonald's meal. We are a family that eats out frequently and we are frankly tired of pretty places with below average food. This place is sure to cure your hankerin for a tasty Mexican meal.
2|nnS89FMpIHz7NPjkvYHmug|Being a creature of habit anytime I want good sushi I go to Tokyo Lobby. Well, my group wanted to branch out and try something new so we decided on Sakana. Not a fan. And what's shocking to me is this place was packed! The restaurant opens at 5:30 on Saturday and we arrived at around 5:45 and were lucky to get the last open table. I don't get it... Messy rolls that all tasted the same. We ordered the tootsie roll and the crunch roll, both tasted similar, except of course for the crunchy captain crunch on top. Just a mushy mess, that was hard to eat. Bland tempura. No bueno. I did, however, have a very good tuna poke salad, but I would not go back just for that. If you want good sushi on the west side, or the entire valley for that matter, say no to Sakana and yes to Tokyo Lobby.
2|FYxSugh9PGrX1PR0BHBIw|I recently told a friend that I cant figure out why there is no good Mexican restaurants in Tempe. His response was what about MacAyo's? I responded with "why are there no good Mexican food restaurants in Tempe?" Seriously if anyone out there knows of any legit Mexican in Tempe let me know. And don't say restaurant Mexico!
Here is the train.py file:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
from joblib import dump, load
def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']

    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)

    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))

    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())

    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']

    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result

    mlp = MLPClassifier()
    rf = RandomForestClassifier()
    mlp_opt = MLPClassifier(
        activation = 'tanh',
        hidden_layer_sizes = (1000,),
        alpha = 0.009,
        learning_rate = 'adaptive',
        learning_rate_init = 0.01,
        max_iter = 250,
        momentum = 0.9,
        solver = 'lbfgs',
        warm_start = False
    )

    print("Training Classifiers")
    mlp_opt.fit(X, Y)
    mlp.fit(X, Y)
    rf.fit(X, Y)

    dump(mlp_opt, "C:\\filepath\\Models\\mlp_opt.joblib")
    dump(mlp, "C:\\filepath\\Models\\mlp.joblib")
    dump(rf, "C:\\filepath\\Models\\rf.joblib")
    print("Trained Classifiers")

main()
Here is the Tester.py file:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from joblib import dump, load
from itertools import islice
def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath\\dev.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']

    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)

    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))

    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())

    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']

    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result

    mlp_opt = load("C:\\filepath\\Models\\mlp_opt.joblib")
    mlp = load("C:\\filepath\\Models\\mlp.joblib")
    rf = load("C:\\filepath\\Models\\rf.joblib")

    print("Testing Classifiers")
    mlp_opt_preds = mlp_opt.predict(X)
    mlp_preds = mlp.predict(X)
    rf_preds = rf.predict(X)

    mlp_opt_performance = check_performance(mlp_opt_preds, Y)
    mlp_performance = check_performance(mlp_preds, Y)
    rf_performance = check_performance(rf_preds, Y)

    print("MLP OPT PERF: {}".format(mlp_opt_performance))
    print("MLP PERF: {}".format(mlp_performance))
    print("RF PERF: {}".format(rf_performance))

main()
What I end up with is an error:
Testing Classifiers
Traceback (most recent call last):
File "Reader.py", line 121, in <module>
main()
File "Reader.py", line 109, in main
mlp_opt_preds = mlp_opt.predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 953, in predict
y_pred = self._predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 676, in _predict
self._forward_pass(activations)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 102, in _forward_pass
self.coefs_[i])
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\utils\extmath.py", line 173, in safe_sparse_dot
return np.dot(a, b)
**ValueError: shapes (2000,13231) and (12299,1000) not aligned: 13231 (dim 1) != 12299 (dim 0)**
I know the error is related to the difference in feature vector sizes, since the vectors are built from the text in each dataset. I don't know enough about NLP or machine learning to design a solution to this. How can I set things up so the models make predictions using the feature set built from the test data?
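(One common way to sidestep the mismatch entirely, shown here only as an editorial sketch and not something the original post uses, is to bundle the vectorizer and classifier into a single sklearn Pipeline and persist the whole pipeline; the test script then feeds in raw text and never rebuilds the feature space itself. The corpus and label names below are placeholders.)
from joblib import dump, load
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
# Training side: fit the vectorizer and the classifier together
pipeline = Pipeline([("tfidf", TfidfVectorizer()), ("clf", RandomForestClassifier())])
pipeline.fit(train_corpus, train_labels)   # placeholders for the training text and classes
dump(pipeline, "sentiment_pipeline.joblib")
# Testing side: load the pipeline and predict directly from raw text
pipeline = load("sentiment_pipeline.joblib")
preds = pipeline.predict(test_corpus)      # placeholder for the test text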
I tried editing my code based on the answer below so that the feature vectors get saved:
Train.py now looks like this:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
import pickle
from joblib import dump, load
def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath\\train.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']

    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)

    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))

    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))

    vectorizer = TfidfVectorizer()
    test = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=test.todense(), columns=vectorizer.get_feature_names())

    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']

    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result

    mlp = MLPClassifier()
    rf = RandomForestClassifier()
    mlp_opt = MLPClassifier(
        activation = 'tanh',
        hidden_layer_sizes = (1000,),
        alpha = 0.009,
        learning_rate = 'adaptive',
        learning_rate_init = 0.01,
        max_iter = 250,
        momentum = 0.9,
        solver = 'lbfgs',
        warm_start = False
    )

    print("Training Classifiers")
    mlp_opt.fit(X, Y)
    mlp.fit(X, Y)
    rf.fit(X, Y)

    dump(mlp_opt, "filepath\\Models\\mlp_opt.joblib")
    dump(mlp, "filepath\\Models\\mlp.joblib")
    dump(rf, "filepath\\Models\\rf.joblib")
    pickle.dump(test, open("filepath\\tfidf_vectorizer.pkl", 'wb'))
    print("Trained Classifiers")

main()
Test.py now looks like this:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from joblib import dump, load
import pickle
from itertools import islice
def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tfidf_vectorizer = pickle.load(open("filepath\\tfidf_vectorizer.pkl", 'rb'))

    tsv_file = "filepath\\dev.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']

    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)

    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))

    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))

    print(type(corpus))
    print(corpus.head())

    X = tfidf_vectorizer.transform(corpus)
    print(X)
    df = pd.DataFrame(data=X.todense(), columns=tfidf_vectorizer.get_feature_names())

    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']

    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result

    mlp_opt = load("filepath\\Models\\mlp_opt.joblib")
    mlp = load("filepath\\Models\\mlp.joblib")
    rf = load("filepath\\Models\\rf.joblib")

    print("Testing Classifiers")
    mlp_opt_preds = mlp_opt.predict(X)
    mlp_preds = mlp.predict(X)
    rf_preds = rf.predict(X)

    mlp_opt_performance = check_performance(mlp_opt_preds, Y)
    mlp_performance = check_performance(mlp_preds, Y)
    rf_performance = check_performance(rf_preds, Y)

    print("MLP OPT PERF: {}".format(mlp_opt_performance))
    print("MLP PERF: {}".format(mlp_performance))
    print("RF PERF: {}".format(rf_performance))

main()
But this produces:
Traceback (most recent call last):
File "Filepath\Reader.py", line 128, in <module>
main()
File "Filepath\Reader.py", line 95, in main
X = tfidf_vectorizer.transform(corpus)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\sparse\base.py", line 689, in __getattr__
raise AttributeError(attr + " not found")
AttributeError: transform not found
Best Answer
You should not use fit_transform() on the test dataset. You should only use the vocabulary learned from the training dataset.
Here is an example solution:
import pickle
tfidf_vectorizer = TfidfVectorizer()
train_data = tfidf_vectorizer.fit_transform(train_corpus) # fit on train
# You could just save the vectorizer with pickle
pickle.dump(tfidf_vectorizer, open('tfidf_vectorizer.pkl', 'wb'))
# then later load the vectorizer and transform on test-dataset.
tfidf_vectorizer = pickle.load(open('tfidf_vectorizer.pkl', 'rb'))
test_data = tfidf_vectorizer.transform(test_corpus)
When you use transform(), it only considers the vocabulary learned from the training corpus and ignores any new words found in the test set.
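As an editorial note on the second traceback above: in the edited Train.py, the object passed to pickle.dump() is test, the matrix returned by fit_transform(), not the fitted vectorizer itself, so what Test.py loads back is a scipy sparse matrix with no transform() method, hence "AttributeError: transform not found". A minimal correction along the lines of this answer, keeping the asker's variable names and placeholder paths, would be:
# In Train.py: save the fitted vectorizer, not the matrix it produced
pickle.dump(vectorizer, open("filepath\\tfidf_vectorizer.pkl", 'wb'))
# In Test.py: load it and transform the new corpus using the training vocabulary
tfidf_vectorizer = pickle.load(open("filepath\\tfidf_vectorizer.pkl", 'rb'))
X = tfidf_vectorizer.transform(corpus)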
Regarding "python - Ignoring test features not present in the training data", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58574136/