I have a problem. My task is to create three classifiers (two "out of the box", one "optimized") to predict sentiment using sklearn.
The instructions were:
Steps 1-3 are no problem and, frankly, work well; the problem is with model.predict(). I am using sklearn's TfidfVectorizer, which creates a feature vector from text. My problem is that the feature vectors created for the training set are different from the ones created for the test set, because the texts they are built from are different.
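The mismatch is easy to reproduce in isolation. Below is a minimal sketch (toy corpora, not the real train/dev files) showing that two independently fitted TfidfVectorizers learn different vocabularies and therefore produce matrices with different numbers of columns:

from sklearn.feature_extraction.text import TfidfVectorizer

train_corpus = ["great food and great service", "terrible food"]
dev_corpus = ["decent food, wildly inconsistent"]

train_X = TfidfVectorizer().fit_transform(train_corpus)  # vocabulary: and, food, great, service, terrible
dev_X = TfidfVectorizer().fit_transform(dev_corpus)      # vocabulary: decent, food, inconsistent, wildly

print(train_X.shape, dev_X.shape)  # (2, 5) vs (1, 4): incompatible feature spaces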
Here is a sample from the train.tsv file...
4|z8DDztUxuIoHYHddDL9zQ|So let me set the scene first, My church social group took a trip here last saturday. We are not your mothers church. The churhc is Community Church of Hope, We are the valleys largest GLBT church so when we desended upon Organ stop Pizza, in LDS land you know we look a little out of place. We had about 50 people from our church come and boy did we have fun. There was a baptist church a couple rows down from us who didn't see it coming. Now we aren't a bunch of flamers frolicking around or anything but we do tend to get a little loud and generally have a great time. I did recognized some of the music so I was able to sing along with those. This is a great place to take anyone over 50. I do think they might be washing dirtymob money or something since the business is cash only.........which I think caught a lot of people off guard including me. The show starts at 530 so dont be late !!!!!!
2|BIeDBg4MrEd1NwWRlFHLQQ|Decent but terribly inconsistent food. I've had some great dishes and some terrible ones, I love chaat and 3 out of 4 times it was great, but once it was just a fried greasy mess (in a bad way, not in the good way it usually is.) Once the matar paneer was great, once it was oversalted and the peas were just plain bad. I don't know how they do it, but it's a coinflip between good food and an oversalted overcooked bowl. Either way, portions are generous.
4|NJHPiW30SKhItD5E2jqpHw|Looks aren't everything....... This little divito looks a little scary looking, but like I've said before "you can't judge a book by it's cover". Not necessarily the kind of place you will take your date (unless she's blind and hungry), but man oh man is the food ever good! We have ordered breakfast, lunch, & dinner, and it is all fantastico. They make home-made corn tortillas and several salsas. The breakfast burritos are out of this world and cost about the same as a McDonald's meal. We are a family that eats out frequently and we are frankly tired of pretty places with below average food. This place is sure to cure your hankerin for a tasty Mexican meal.
2|nnS89FMpIHz7NPjkvYHmug|Being a creature of habit anytime I want good sushi I go to Tokyo Lobby. Well, my group wanted to branch out and try something new so we decided on Sakana. Not a fan. And what's shocking to me is this place was packed! The restaurant opens at 5:30 on Saturday and we arrived at around 5:45 and were lucky to get the last open table. I don't get it... Messy rolls that all tasted the same. We ordered the tootsie roll and the crunch roll, both tasted similar, except of course for the crunchy captain crunch on top. Just a mushy mess, that was hard to eat. Bland tempura. No bueno. I did, however, have a very good tuna poke salad, but I would not go back just for that. If you want good sushi on the west side, or the entire valley for that matter, say no to Sakana and yes to Tokyo Lobby.
2|FYxSugh9PGrX1PR0BHBIw|I recently told a friend that I cant figure out why there is no good Mexican restaurants in Tempe. His response was what about MacAyo's? I responded with "why are there no good Mexican food restaurants in Tempe?" Seriously if anyone out there knows of any legit Mexican in Tempe let me know. And don't say restaurant Mexico!
Here is the train.py file:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
from joblib import dump, load

def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']
    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)
    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))
    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())
    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']
    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result
    mlp = MLPClassifier()
    rf = RandomForestClassifier()
    mlp_opt = MLPClassifier(
        activation = 'tanh',
        hidden_layer_sizes = (1000,),
        alpha = 0.009,
        learning_rate = 'adaptive',
        learning_rate_init = 0.01,
        max_iter = 250,
        momentum = 0.9,
        solver = 'lbfgs',
        warm_start = False
    )
    print("Training Classifiers")
    mlp_opt.fit(X, Y)
    mlp.fit(X, Y)
    rf.fit(X, Y)
    dump(mlp_opt, "C:\\filepath\\Models\\mlp_opt.joblib")
    dump(mlp, "C:\\filepath\\Models\\mlp.joblib")
    dump(rf, "C:\\filepath\\Models\\rf.joblib")
    print("Trained Classifiers")

main()
Here is the Tester.py file:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
from joblib import dump, load

def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath\\dev.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']
    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)
    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))
    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())
    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']
    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result
    mlp_opt = load("C:\\filepath\\Models\\mlp_opt.joblib")
    mlp = load("C:\\filepath\\Models\\mlp.joblib")
    rf = load("C:\\filepath\\Models\\rf.joblib")
    print("Testing Classifiers")
    mlp_opt_preds = mlp_opt.predict(X)
    mlp_preds = mlp.predict(X)
    rf_preds = rf.predict(X)
    mlp_opt_performance = check_performance(mlp_opt_preds, Y)
    mlp_performance = check_performance(mlp_preds, Y)
    rf_performance = check_performance(rf_preds, Y)
    print("MLP OPT PERF: {}".format(mlp_opt_performance))
    print("MLP PERF: {}".format(mlp_performance))
    print("RF PERF: {}".format(rf_performance))

main()
What I end up with is an error:
Testing Classifiers
Traceback (most recent call last):
File "Reader.py", line 121, in <module>
main()
File "Reader.py", line 109, in main
mlp_opt_preds = mlp_opt.predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 953, in predict
y_pred = self._predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 676, in _predict
self._forward_pass(activations)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 102, in _forward_pass
self.coefs_[i])
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\utils\extmath.py", line 173, in safe_sparse_dot
return np.dot(a, b)
ValueError: shapes (2000,13231) and (12299,1000) not aligned: 13231 (dim 1) != 12299 (dim 0)
I understand that the error has to do with the difference in feature-vector sizes, since the vectors are built from the text in each dataset. I don't know enough about NLP or machine learning to design a solution to this. How can I get the models to make predictions using the feature set from the test data?
I tried an edit, based on the answer below, to save the feature vectors:
Train.py now looks like:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
import pickle
from joblib import dump, load

def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tsv_file = "filepath\\train.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']
    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)
    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))
    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
    vectorizer = TfidfVectorizer()
    test = vectorizer.fit_transform(corpus)
    df = pd.DataFrame(data=test.todense(), columns=vectorizer.get_feature_names())
    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']
    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result
    mlp = MLPClassifier()
    rf = RandomForestClassifier()
    mlp_opt = MLPClassifier(
        activation = 'tanh',
        hidden_layer_sizes = (1000,),
        alpha = 0.009,
        learning_rate = 'adaptive',
        learning_rate_init = 0.01,
        max_iter = 250,
        momentum = 0.9,
        solver = 'lbfgs',
        warm_start = False
    )
    print("Training Classifiers")
    mlp_opt.fit(X, Y)
    mlp.fit(X, Y)
    rf.fit(X, Y)
    dump(mlp_opt, "filepath\\Models\\mlp_opt.joblib")
    dump(mlp, "filepath\\Models\\mlp.joblib")
    dump(rf, "filepath\\Models\\rf.joblib")
    pickle.dump(test, open("filepath\\tfidf_vectorizer.pkl", 'wb'))
    print("Trained Classifiers")

main()
Test.py now looks like:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
from joblib import dump, load
import pickle

def ID_to_Num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def Num_to_ID(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.inverse_transform(arr)
    return new_arr

def check_performance(preds, acts):
    preds = list(preds)
    acts = pd.Series.tolist(acts)
    right = 0
    total = 0
    for i in range(len(preds)):
        if preds[i] == acts[i]:
            right += 1
        total += 1
    return (right / total) * 100

# This function removes numbers from an array
def remove_nums(arr):
    # Declare a regular expression
    pattern = '[0-9]'
    # Remove the pattern, which is a number
    arr = [re.sub(pattern, '', i) for i in arr]
    # Return the array with numbers removed
    return arr

# This function cleans the passed in paragraph and parses it
def get_words(para):
    # Create a set of stop words
    stop_words = set(stopwords.words('english'))
    # Split it into lower case
    lower = para.lower().split()
    # Remove punctuation
    no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
    # Remove integers
    no_integers = remove_nums(no_punctuation)
    # Remove stop words
    dirty_tokens = (data for data in no_integers if data not in stop_words)
    # Ensure it is not empty
    tokens = [data for data in dirty_tokens if data.strip()]
    # Ensure there is more than 1 character to make up the word
    tokens = [data for data in tokens if len(data) > 1]
    # Return the tokens
    return tokens

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

# This function takes the first n items of a dictionary
def take(n, iterable):
    # https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
    # Return first n items of the iterable as a dict
    return dict(islice(iterable, n))

def main():
    tfidf_vectorizer = pickle.load(open("filepath\\tfidf_vectorizer.pkl", 'rb'))
    tsv_file = "filepath\\dev.tsv"
    csv_table = pd.read_csv(tsv_file, sep='\t', header=None)
    csv_table.columns = ['class', 'ID', 'text']
    s = pd.Series(csv_table['text'])
    new = s.str.cat(sep=' ')
    vocab = get_words(new)
    s = pd.Series(csv_table['text'])
    corpus = s.apply(lambda s: ' '.join(get_words(s)))
    csv_table['dirty'] = csv_table['text'].str.split().apply(len)
    csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
    print(type(corpus))
    print(corpus.head())
    X = tfidf_vectorizer.transform(corpus)
    print(X)
    df = pd.DataFrame(data=X.todense(), columns=tfidf_vectorizer.get_feature_names())
    result = pd.concat([csv_table, df], axis=1, sort=False)
    Y = result['class']
    result = result.drop('text', axis=1)
    result = result.drop('ID', axis=1)
    result = result.drop('class', axis=1)
    X = result
    mlp_opt = load("filepath\\Models\\mlp_opt.joblib")
    mlp = load("filepath\\Models\\mlp.joblib")
    rf = load("filepath\\Models\\rf.joblib")
    print("Testing Classifiers")
    mlp_opt_preds = mlp_opt.predict(X)
    mlp_preds = mlp.predict(X)
    rf_preds = rf.predict(X)
    mlp_opt_performance = check_performance(mlp_opt_preds, Y)
    mlp_performance = check_performance(mlp_preds, Y)
    rf_performance = check_performance(rf_preds, Y)
    print("MLP OPT PERF: {}".format(mlp_opt_performance))
    print("MLP PERF: {}".format(mlp_performance))
    print("RF PERF: {}".format(rf_performance))

main()
But this produces:
Traceback (most recent call last):
File "Filepath\Reader.py", line 128, in <module>
main()
File "Filepath\Reader.py", line 95, in main
X = tfidf_vectorizer.transform(corpus)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\sparse\base.py", line 689, in __getattr__
raise AttributeError(attr + " not found")
AttributeError: transform not found
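Note where this second error comes from: the updated Train.py pickles test, the sparse matrix returned by fit_transform(), rather than the fitted vectorizer object itself, so what Test.py loads back is a scipy sparse matrix, and sparse matrices have no transform() method. A quick standalone check (a sketch, assuming scipy is available):

import scipy.sparse as sp

m = sp.csr_matrix([[1.0, 0.0]])  # fit_transform() returns data like this, not a fitted model
print(hasattr(m, "transform"))   # False -- hence "AttributeError: transform not found"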
Best answer
You should not use fit_transform() on the test dataset. You should use only the vocabulary learned from the training dataset.
Here is an example solution:
import pickle
tfidf_vectorizer = TfidfVectorizer()
train_data = tfidf_vectorizer.fit_transform(train_corpus) # fit on train
# You could just save the vectorizer with pickle
pickle.dump(tfidf_vectorizer, open('tfidf_vectorizer.pkl', 'wb'))
# then later load the vectorizer and transform on test-dataset.
tfidf_vectorizer = pickle.load(open('tfidf_vectorizer.pkl', 'rb'))
test_data = tfidf_vectorizer.transform(test_corpus)
When you use transform(), it considers only the vocabulary learned from the training corpus and ignores any new words found in the test set.
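To see that behaviour concretely, here is a small self-contained sketch (made-up strings, not the poster's data): the test matrix comes out with exactly the training columns, and a word never seen in training is silently dropped:

from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
X_train = vec.fit_transform(["great food", "terrible food"])  # learns: food, great, terrible
X_test = vec.transform(["great unseen banana"])               # only "great" is counted

print(X_train.shape[1], X_test.shape[1])  # 3 3 -> identical feature space, so the models line up

As an aside (not part of the original answer), wrapping the vectorizer and classifier in a sklearn.pipeline.Pipeline is another common way to keep the two in sync, since the whole pipeline can be dumped and loaded as a single object.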
A similar question about python - ignoring test features not present in the training data can be found on Stack Overflow: https://stackoverflow.com/questions/58574136/