
python - Saving the Universal Sentence Encoder to TFLite, or serving it with the TensorFlow API

Repost · Author: 行者123 · Updated: 2023-12-05 06:19:04

I have this code that uses the pre-built Universal Sentence Encoder to find sentence similarity. It takes a .txt file as input, computes cosine similarity, and then accepts a query from the user to find the sentences most similar to that query. Here is the code:

# Imports required by the snippet (not shown in the original post)
import re
import nltk
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# TensorFlow Hub module for the Universal Sentence Encoder
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder/2", "https://tfhub.dev/google/universal-sentence-encoder-large/3"]
embed = hub.Module(module_url)  # missing from the original snippet; assumed here so the code runs
stop_words = set(nltk.corpus.stopwords.words('english'))  # assumed definition; not shown in the original

def get_features(texts):
    if type(texts) is str:
        texts = [texts]
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        return sess.run(embed(texts))

def remove_stopwords(stop_words, tokens):
    res = []
    for token in tokens:
        if not token in stop_words:
            res.append(token)
    return res

def process_text(text):
    text = text.encode('ascii', errors='ignore').decode()
    text = text.lower()
    text = re.sub(r'http\S+', ' ', text)
    text = re.sub(r'#+', ' ', text)
    text = re.sub(r'@[A-Za-z0-9]+', ' ', text)
    text = re.sub(r"([A-Za-z]+)'s", r"\1 is", text)
    #text = re.sub(r"\'s", " ", text)
    text = re.sub(r"\'ve", " have ", text)
    text = re.sub(r"won't", "will not ", text)
    text = re.sub(r"isn't", "is not ", text)
    text = re.sub(r"can't", "can not ", text)
    text = re.sub(r"n't", " not ", text)
    text = re.sub(r"i'm", "i am ", text)
    text = re.sub(r"\'re", " are ", text)
    text = re.sub(r"\'d", " would ", text)
    text = re.sub(r"\'ll", " will ", text)
    text = re.sub(r'\W', ' ', text)
    text = re.sub(r'\d+', ' ', text)
    text = re.sub(r'\s+', ' ', text)
    text = text.strip()
    return text

def lemmatize(tokens):
    lemmatizer = nltk.stem.WordNetLemmatizer()
    lemma_list = []
    for token in tokens:
        lemma = lemmatizer.lemmatize(token, 'v')
        if lemma == token:
            lemma = lemmatizer.lemmatize(token)
        lemma_list.append(lemma)
    # return [lemmatizer.lemmatize(token, 'v') for token in tokens]
    return lemma_list


def process_all(text):
    text = process_text(text)
    return ' '.join(remove_stopwords(stop_words, text.split()))

process_text("Hello! Who are you?")

with open('/content/sample_data/training.txt') as f:
    text = [i.strip() for i in f]

data_processed = list(map(process_text, text))
len(data_processed)

BASE_VECTORS = get_features(text)

def cosine_similarity(v1, v2):
    mag1 = np.linalg.norm(v1)
    mag2 = np.linalg.norm(v2)
    if (not mag1) or (not mag2):
        return 0
    return np.dot(v1, v2) / (mag1 * mag2)

def test_similiarity(text1, text2):
    vec1 = get_features(text1)[0]
    vec2 = get_features(text2)[0]
    print(vec1.shape)
    return cosine_similarity(vec1, vec2)

def semantic_search(query, data, vectors):
    query = process_text(query)
    print("Extracting features...")
    query_vec = get_features(query)[0].ravel()
    res = []
    for i, d in enumerate(data):
        qvec = vectors[i].ravel()
        sim = cosine_similarity(query_vec, qvec)
        res.append((sim, d[:100], i))
    return sorted(res, key=lambda x: x[0], reverse=True)

semantic_search("da vinci", data_processed, BASE_VECTORS)

I want to save the model and convert it to tflite. I have done a lot of research but could not find a solution, nor a way to serve it with the TensorFlow API.

Best Answer

One option is to save the model in the SavedModel format and then convert the resulting model to tflite. Note that whether a model can be converted depends on the ops it uses, and some model architectures may not be convertible to the tflite format.
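For concreteness, here is a minimal sketch of that route, written against the same TF 1.x / hub.Module APIs as the question's code. The export directory, signature names, and output filename are illustrative assumptions, and the conversion step may still fail, since this encoder relies on string and lookup-table ops that TFLite does not support.

import tensorflow as tf
import tensorflow_hub as hub

export_dir = "/tmp/use_savedmodel"  # assumed export location

# Build a serving graph around the hub module and export it as a SavedModel.
with tf.Graph().as_default():
    text_input = tf.placeholder(dtype=tf.string, shape=[None], name="text")
    embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
    embeddings = embed(text_input)
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        tf.saved_model.simple_save(
            sess,
            export_dir,
            inputs={"text": text_input},
            outputs={"embeddings": embeddings},
        )

# Attempt the TFLite conversion from the SavedModel. Unsupported ops
# (string preprocessing, hash tables) would surface as errors here.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
with open("use.tflite", "wb") as f:
    f.write(tflite_model)

If the TFLite conversion is rejected because of unsupported ops, the exported SavedModel can still be served as-is with TensorFlow Serving (placed under a numbered version subdirectory and pointed to with tensorflow_model_server), which addresses the "serve it with the TensorFlow API" part of the question.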

Regarding python - Saving the Universal Sentence Encoder to TFLite, or serving it with the TensorFlow API, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60991417/
