
python - What is the suggested solution for text prediction with Python in Google App Engine?

Reposted. Author: 塔克拉玛干. Updated: 2023-11-03 05:02:47

I am developing a website with Google App Engine and Python. I would like to add a feature where a user types in a word and the system suggests the closest matching words/sentences (based on usage). I have currently implemented an algorithm based on Peter Norvig's spell-correction approach, but I don't think it is a very scalable solution in the long run. I am looking for the recommended way to implement such a feature on Google App Engine. Is the Prediction API the way to go, or is writing my own algorithm the better approach? If writing my own is the way, can anyone give me some pointers on how to make the solution robust?
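For context, the core of Norvig's approach (which the snippet below elaborates on) is to generate every string one edit away from the input and rank the candidates by corpus frequency. A minimal standalone sketch:

```python
import collections

def edits1(word, alphabet='abcdefghijklmnopqrstuvwxyz'):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, counts):
    """Pick the candidate with the highest corpus frequency."""
    candidates = ({word} & set(counts)) or (edits1(word) & set(counts)) or {word}
    return max(candidates, key=lambda w: counts.get(w, 0))

counts = collections.Counter('the quick brown fox the lazy dog the'.split())
print(correct('teh', counts))  # -> 'the'
```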

Code snippet:

import re, collections
from bp_includes.models import User, SocialUser
from bp_includes.lib.basehandler import BaseHandler
from google.appengine.ext import ndb
import utils.ndb_json as ndb_json

class TextPredictionHandler(BaseHandler):
    alphabet_list = 'abcdefghijklmnopqrstuvwxyz'  # lowercase letters

    # Builds the corpus as a word -> frequency table
    def trainForText(self, features):
        search_dict = collections.defaultdict(lambda: 1)
        for f in features:
            search_dict[f] += 1
        return search_dict

    # Heart of the code: every word reachable by modifying the given word by one letter
    def edit_dist_one(self, word):
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in self.alphabet_list]
        inserts = [a + c + b for a, b in splits for c in self.alphabet_list]
        return set(deletes + transposes + replaces + inserts)

    # Exact matches in the corpus
    def existing_words(self, words, trainSet):
        return set(w for w in words if w in trainSet)

    # Partial (substring) matches in the corpus
    def partial_words(self, word, trainSet):
        regex = re.compile(".*(" + word + ").*")
        return set(str(m.group(0)) for l in trainSet for m in [regex.search(l)] if m)

    def found_words(self, word):
        word = word.lower()
        data = []
        q = models.SampleModel.query()  # This line will not work as I had to mask out the model I am using
        # Really bad way of building a corpus -- too many loops; needs to be made scalable.
        # The corpus could be stored in Google Cloud Storage to reduce processing time.
        for upost in q.fetch():
            if upost.text != "":
                for t in re.sub(r"[^\w]", " ", upost.text).split():
                    data.append(t.lower())
            if upost.definition != "":
                for t in re.sub(r"[^\w]", " ", upost.definition).split():
                    data.append(t.lower())
            if upost.TextPhrases:
                for e in upost.TextPhrases:
                    for p in e.get().phrases:
                        data.append(p.lower())
            if upost.Tags:
                for h in upost.Tags:
                    tag = h.get().text.replace("#", "")
                    if tag != "":
                        data.append(tag.lower())
        trainSet = self.trainForText(data)
        set_of_words = self.existing_words([word], trainSet)
        set_of_words = set_of_words.union(self.existing_words(self.edit_dist_one(word), trainSet))
        set_of_words = set_of_words.union(self.partial_words(word, trainSet))
        set_of_words = set_of_words.union([word])
        return set_of_words

    def get(self, search_text):
        outputData = self.found_words(search_text)
        data = {"texts": []}
        for dat in outputData:
            data["texts"].append({"text": dat})
        self.response.out.write(ndb_json.dumps(data))
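One immediate scalability fix for the snippet above is to stop rebuilding the frequency table on every request. A minimal sketch of a module-level cache with a TTL (in production on App Engine the memcache service would be the idiomatic choice; this standalone version, with the hypothetical names `CachedCorpus` and `build_fn`, just illustrates the idea):

```python
import time

class CachedCorpus(object):
    """Build the word-frequency table once, then reuse it until the TTL expires."""
    def __init__(self, build_fn, ttl_seconds=300):
        self.build_fn = build_fn   # expensive function returning {word: count}
        self.ttl = ttl_seconds
        self._cache = None
        self._built_at = 0.0

    def get(self):
        now = time.time()
        if self._cache is None or now - self._built_at > self.ttl:
            self._cache = self.build_fn()  # rebuild only when stale
            self._built_at = now
        return self._cache

calls = []
def build():
    calls.append(1)  # track how often the expensive build actually runs
    return {'hello': 3, 'world': 1}

corpus = CachedCorpus(build, ttl_seconds=60)
corpus.get()
corpus.get()
print(len(calls))  # built only once
```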

Accepted answer

Using the Prediction API is more reliable and scalable than building your own; there is no need to reinvent the wheel.
If you write your own, it could be a long, complicated process with many bumps in the road, so unless you have a strong interest in learning about and building such a system, I would suggest using the existing tools.
Here is an example from Google themselves.
Here is the documentation for the Prediction API.
And a Hello World program with the Prediction API.
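If you do decide to write your own, one pointer toward robustness: for prefix suggestions, a trie (prefix tree) avoids regex-scanning the entire corpus on every request. A minimal standalone sketch (not tied to the handler in the question, which matches substrings rather than prefixes):

```python
class Trie(object):
    """Prefix tree: suggests corpus words starting with a typed prefix."""
    def __init__(self):
        self.root = {}

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True  # end-of-word marker

    def suggest(self, prefix):
        # Walk down to the node for the prefix, then collect all completions.
        node = self.root
        for ch in prefix:
            if ch not in node:
                return []
            node = node[ch]
        results = []
        def walk(n, path):
            if '$' in n:
                results.append(prefix + path)
            for ch, child in n.items():
                if ch != '$':
                    walk(child, path + ch)
        walk(node, '')
        return sorted(results)

t = Trie()
for w in ['predict', 'prediction', 'prefix', 'python']:
    t.add(w)
print(t.suggest('pre'))  # -> ['predict', 'prediction', 'prefix']
```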

A similar question about text prediction with Python in Google App Engine can be found on Stack Overflow: https://stackoverflow.com/questions/31815857/
