The code snippet below gives me a TensorType error:
TypeError: Cannot convert Type TensorType(float32, 3D) (Variable Subtensor{:int64:}.0) into Type TensorType(float32, (False, False, True)). You can try to manually convert Subtensor{:int64:}.0 into a TensorType(float32, (False, False, True)).
This is a basic example from one of the tutorial sites I have been reading. Can you help me understand this error? I am new to machine learning and Keras.
import itertools
import numpy as np
# put together a model to predict
from keras.layers import Input, Embedding, merge, Flatten, SimpleRNN
from keras.models import Model
sentences = '''
sam is red
hannah not red
hannah is green
bob is green
bob not red
sam not green
sarah is red
sarah not green'''.strip().split('\n')
is_green = np.asarray([[0, 1, 1, 1, 1, 0, 0, 0]], dtype='int32').T
lemma = lambda x: x.strip().lower().split(' ')
sentences_lemmatized = [lemma(sentence) for sentence in sentences]
words = set(itertools.chain(*sentences_lemmatized))
# set(['boy', 'fed', 'ate', 'cat', 'kicked', 'hat'])
# dictionaries for converting words to integers and vice versa
word2idx = dict((v, i) for i, v in enumerate(words))
idx2word = list(words)
# convert the sentences to a numpy array
to_idx = lambda x: [word2idx[word] for word in x]
sentences_idx = [to_idx(sentence) for sentence in sentences_lemmatized]
sentences_array = np.asarray(sentences_idx, dtype='int32')
# parameters for the model
sentence_maxlen = 3
n_words = len(words)
n_embed_dims = 5
input_sentence = Input(shape=(sentence_maxlen,), dtype='int32')
input_embedding = Embedding(n_words, n_embed_dims)(input_sentence)
#color_prediction = SimpleRNN(init='uniform',output_dim=1,input_dim=3)(input_embedding)
#color_prediction = SimpleRNN(output_dim=1,input_dim=5,
# init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', weights=None, return_sequences=False)(input_embedding);
color_prediction = SimpleRNN(1, return_sequences=False, batch_input_shape=(10, 2, 3))(input_embedding);
predict_green = Model(input=[input_sentence], output=[color_prediction])
predict_green.compile(optimizer='sgd', loss='binary_crossentropy')
# fit the model to predict what color each person is
predict_green.fit([sentences_array], [is_green], nb_epoch=5000, verbose=1)
embeddings = predict_green.layers[1].W.get_value()
# print out the embedding vector associated with each word
for i in range(n_words):
print('{}: {}'.format(idx2word[i], embeddings[i]))
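Side note for anyone debugging the same thing: a quick way to see the shape mismatch behind this kind of Theano TensorType error is to print the model's per-layer output shapes next to the shapes of the training arrays. This is only a minimal diagnostic sketch, and it assumes the Model(...) call above succeeds and the error is only raised later (e.g. during compile or fit):
predict_green.summary()          # per-layer output shapes of the functional model above
print(sentences_array.shape)     # (8, 3): 8 sentences, 3 word indices each
print(is_green.shape)            # (8, 1): one binary label per sentence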
Best Answer
I am also new to machine learning and ran into the same problem. I changed the code as follows and it runs, but I am not sure whether it is correct.
import itertools
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding, SimpleRNN
sentences = '''
sam is red
hannah not red
hannah is green
bob is green
bob not red
sam not green
sarah is red
sarah not green'''.strip().split('\n')
is_green = np.asarray([[0, 1, 1, 1, 1, 0, 0, 0]], dtype='int32').T
lemma = lambda x: x.strip().lower().split(' ')
sentences_lemmatized = [lemma(sentence) for sentence in sentences]
words = set(itertools.chain(*sentences_lemmatized))
# set(['boy', 'fed', 'ate', 'cat', 'kicked', 'hat'])
# dictionaries for converting words to integers and vice versa
word2idx = dict((v, i) for i, v in enumerate(words))
idx2word = list(words)
# convert the sentences to a numpy array
to_idx = lambda x: [word2idx[word] for word in x]
sentences_idx = [to_idx(sentence) for sentence in sentences_lemmatized]
sentences_array = np.asarray(sentences_idx, dtype='int32')
# parameters for the model
sentence_maxlen = 3
n_words = len(words)
n_embed_dims = 3
model = Sequential()
model.add(Embedding(n_words, n_embed_dims,input_length=sentence_maxlen))
model.add(SimpleRNN(3))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
model.fit([sentences_array], [is_green], nb_epoch=5000, verbose=1)
predictions = model.predict(sentences_array)
print(predictions.shape)
embeddings = model.layers[0].W.get_value()
# print out the embedding vector associated with each word
for i in range(n_words):
print('{}: {}'.format(idx2word[i], embeddings[i]))
Output:
sarah: [-0.51089537 -0.30958903 -0.17312947]
sam: [-0.47487321 -0.33426151 -0.18260512]
hannah: [ 0.51548952  0.33343625  0.18121554]
is: [ 0.02989657 -0.02573686  0.01081978]
green: [ 0.0155487  -0.02551323  0.00846179]
not: [ 0.01339869 -0.02586824  0.01932905]
bob: [ 0.47654441  0.37283263  0.17969941]
red: [-0.02136148  0.04420395 -0.03119873]
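For readers on a newer stack, the same fix maps almost one-to-one onto the current tf.keras API. The sketch below is only a port under assumptions (TensorFlow 2.x installed; it reuses n_words, n_embed_dims, sentence_maxlen, idx2word, sentences_array and is_green from the answer above); the main differences are that nb_epoch becomes epochs and the embedding weights are read with get_weights() instead of .W.get_value():
import tensorflow as tf

# Same architecture as the answer above, written against the tf.keras API (TF 2.x assumed).
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(n_words, n_embed_dims, input_length=sentence_maxlen),
    tf.keras.layers.SimpleRNN(3),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(sentences_array, is_green, epochs=5000, verbose=1)

# layers[0] is the Embedding layer; get_weights()[0] is the (n_words, n_embed_dims) matrix.
embeddings = model.layers[0].get_weights()[0]
for i in range(n_words):
    print('{}: {}'.format(idx2word[i], embeddings[i]))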
Regarding machine-learning - Keras TypeError: Cannot convert Type TensorType(float32, 3D) (Variable Subtensor{:int64:}.0), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39563426/