# Inside the request handler's do_POST (Python 2; needs "import json" and "import requests" at module level)
elif self.path == "/recQuery":
    content_length = int(self.headers.getheader('content-length'))
    print "Query Received"
    body = self.rfile.read(content_length)
    keywords = body.replace("\\", "")        # strip escaped backslashes before parsing
    result = json.loads(keywords)
    query = result['query']
    r = requests.get('http://example.com')   # this returns the JSON shown below
    print r.json()
    self.wfile.write(r.json())               # send the response back to the JavaScript
{
u 'debug': [u 'time to fit model 0.02 s', u 'time to generate suggestions 0.06 s', u 'time to search documents 0.70 s', u 'time to misc operations 0.02 s'], u 'articles': [{
u 'is-saved': False,
u 'title': u 'Reinforcement and learning',
u 'abstract': u 'Evidence has been accumulating to support the process of reinforcement as a potential mechanism in speciation. In many species, mate choice decisions are influenced by cultural factors, including learned mating preferences (sexual imprinting) or learned mate attraction signals (e.g., bird song). It has been postulated that learning can have a strong impact on the likelihood of speciation and perhaps on the process of reinforcement, but no models have explicitly considered learning in a reinforcement context. We review the evidence that suggests that learning may be involved in speciation and reinforcement, and present a model of reinforcement via learned preferences. We show that not only can reinforcement occur when preferences are learned by imprinting, but that such preferences can maintain species differences easily in comparison with both autosomal and sex-linked genetically inherited preferences. We highlight the need for more explicit study of the connection between the behavioral process of learning and the evolutionary process of reinforcement in natural systems.',
u 'date': u '2009-01-01T00:00:00',
u 'publication-forum': u 'EVOLUTIONARY ECOLOGY',
u 'publication-forum-type': u 'article',
u 'authors': u 'M R Servedio, S A Saether, G P Saetre',
u 'keywords': u 'imprinting, learning, preferences, model, reinforcement, speciation',
u 'id': u '572749dd12a0854514c1f764'
}, {
u 'is-saved': False,
u 'title': u 'Relational reinforcement learning',
u 'abstract': u 'Then, relational reinforcement learning is presented as a combination of reinforcement learning with relational learning. Its advantages - such as the possibility of using structural representations, making abstraction from specific goals pursued and exploiting the results of previous learning phases - are discussed.',
u 'date': u '2001-01-01T00:00:00',
u 'publication-forum': u 'MULTI-AGENT SYSTEMS AND APPLICATIONS',
u 'publication-forum-type': u 'article',
u 'authors': u 'K Driessens',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f765'
}, {
u 'is-saved': False,
u 'title': u 'Meta-learning in Reinforcement Learning',
u 'abstract': u 'Meta-parameters in reinforcement learning should be tuned to the environmental dynamics and the animal performance. Here, we propose a biologically plausible meta-reinforcement learning algorithm for tuning these meta-parameters in a dynamic, adaptive manner. We tested our algorithm in both a simulation of a Markov decision task and in a non-linear control task. Our results show that the algorithm robustly finds appropriate meta-parameter values, and controls the meta-parameter time course, in both static and dynamic environments. We suggest that the phasic and tonic components of dopamine neuron firing can encode the signal required for meta-learning of reinforcement learning. (C) 2002 Elsevier Science Ltd. All rights reserved.',
u 'date': u '2003-01-01T00:00:00',
u 'publication-forum': u 'NEURAL NETWORKS',
u 'publication-forum-type': u 'article',
u 'authors': u 'N Schweighofer, K Doya',
u 'keywords': u 'reinforcement learning, dopamine, dynamic environment, meta-learning, meta-parameters, neuromodulation, td error, reinforcement, learning',
u 'id': u '572749dd12a0854514c1f766'
}, {
u 'is-saved': False,
u 'title': u 'Evolutionary adaptive-critic methods for reinforcement learning',
u 'abstract': u 'In this paper, a novel hybrid learning method is proposed for reinforcement learning problems with continuous state and action spaces. The reinforcement learning problems are modeled as Markov decision processes (MDPs) and the hybrid learning method combines evolutionary algorithms with gradient-based Adaptive Heuristic Critic (AHC) algorithms to approximate the optimal policy of MDPs. The suggested method takes the advantages of evolutionary learning and gradient-based reinforcement learning to solve reinforcement learning problems. Simulation results on the learning control of an acrobot illustrate the efficiency of the presented method.',
u 'date': u '2002-01-01T00:00:00',
u 'publication-forum': u "CEC'02: PROCEEDINGS OF THE 2002 CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS1 AND 2",
u 'publication-forum-type': u 'article',
u 'authors': u 'X Xu, H G He, D W Hu',
u 'keywords': u 'markov decision process, reinforcement, learning, model, reinforcement learning, robotics',
u 'id': u '572749dd12a0854514c1f767'
}, {
u 'is-saved': False,
u 'title': u 'Stable Fitted Reinforcement Learning',
u 'url': u 'http://books.nips.cc/papers/files/nips08/1052.pdf',
u 'abstract': u 'We describe the reinforcement learning problem, motivate algorithms which seek an approximation to the Q function, and present new convergence results for two such algorithms.',
u 'date': u '1995-01-01T00:00:00',
u 'publication-forum': u 'NIPS 1995',
u 'authors': u 'G. J. GORDON',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f768'
}, {
u 'is-saved': False,
u 'title': u 'Feudal Reinforcement Learning',
u 'url': u 'http://books.nips.cc/papers/files/nips05/0271.pdf',
u 'abstract': u "One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task.. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.",
u 'date': u '1992-01-01T00:00:00',
u 'publication-forum': u 'NIPS 1992',
u 'authors': u 'Peter Dayan, Geoffrey E. Hinton',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f769'
}, {
u 'is-saved': False,
u 'title': u 'Reinforcement learning in the multi-robot domain',
u 'abstract': u 'This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environments such as in the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use of behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the approach on a group of four mobile robots learning a foraging task.',
u 'date': u '1997-01-01T00:00:00',
u 'publication-forum': u 'AUTONOMOUS ROBOTS',
u 'publication-forum-type': u 'article',
u 'authors': u 'M J Mataric',
u 'keywords': u 'robotics, robot learning, group behavior, multi-agent systems, reinforcement learning, dynamic environment, reinforcement, learning',
u 'id': u '572749dd12a0854514c1f76a'
}, {
u 'is-saved': False,
u 'title': u 'A reinforcement learning approach to online clustering',
u 'abstract': u 'A general technique is proposed for embedding online clustering algorithms based on competitive learning in a reinforcement learning framework. The basic idea is that the clustering system can be viewed as a reinforcement learning system that learns through reinforcements to follow the clustering strategy we wish to implement. In this sense, the reinforcement guided competitive learning (RGCL) algorithm is proposed that constitutes a reinforcement-based adaptation of learning vector quantization (LVQ) with enhanced clustering capabilities. In addition, we suggest extensions of RGCL and LVQ that are characterized by the property of sustained exploration and significantly improve the performance of those algorithms, as indicated by experimental tests on well-known data sets.',
u 'date': u '1999-01-01T00:00:00',
u 'publication-forum': u 'NEURAL COMPUTATION',
u 'publication-forum-type': u 'article',
u 'authors': u 'A Likas',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f76b'
}, {
u 'is-saved': False,
u 'title': u 'Kernel-Based Reinforcement Learning',
u 'abstract': u 'We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, we perform a simple algorithm in the feature space, which corresponds to a complex algorithm in the original state space. Two kernel-based reinforcement learning algorithms, the e-insensitive kernel based reinforcement learning (epsilon-KRL) and the least squares kernel based reinforcement learning (LS-KRL) are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.',
u 'date': u '2006-01-01T00:00:00',
u 'publication-forum': u 'INTELLIGENT COMPUTING, PART I',
u 'publication-forum-type': u 'article',
u 'authors': u 'G H Hu, Y Q Qiu, L M Xiang',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f76c'
}, {
u 'is-saved': False,
u 'title': u 'Reinforcement Learning for Adaptive Routing',
u 'url': u 'http://arxiv.org/abs/cs/0703138',
u 'abstract': u 'Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.',
u 'date': u '2007-01-01T00:00:00',
u 'publication-forum': u 'arXiv.org',
u 'authors': u 'Leonid Peshkin, Virginia Savova',
u 'keywords': u 'reinforcement, learning, reinforcement learning',
u 'id': u '572749dd12a0854514c1f76d'
}], u 'keywords_local': {
u 'dynamic programming': {
u 'distance': 0.6078647488472677,
u 'angle': 150.8840432613797
},
u 'on-line learning': {
u 'distance': 0.7752212048381117,
u 'angle': 51.8728440344057
},
u 'reinforcement learning': {
u 'distance': 1.0,
u 'angle': 132.93204012494624
},
u 'reinforcement': {
u 'distance': 0.8544341892190607,
u 'angle': 94.75966624638419
},
u 'neural dynamic programming': {
u 'distance': 0.8898672614396893,
u 'angle': 103.76832781320546
},
u 'genetic algorithms': {
u 'distance': 0.5448835956783193,
u 'angle': 0.0
},
u 'learning': {
u 'distance': 0.8544341892190607,
u 'angle': 180.0
},
u 'model': {
u 'distance': 0.6424412547642948,
u 'angle': 114.45637264648838
},
u 'navigation': {
u 'distance': 0.6125205579210247,
u 'angle': 88.55814464422271
},
u 'fuzzy logic': {
u 'distance': 0.6204073568578674,
u 'angle': 180.0
}
}, u 'keywords_semi_local': {
u 'latent learning': {
u 'distance': 0.0,
u 'angle': 132.93204012494624
},
u 'neural networks': {
u 'distance': 1.0,
u 'angle': 114.45637264648838
},
u 'meta-learning': {
u 'distance': 0.5606272601392779,
u 'angle': 121.07077066747541
},
u 'neuromodulation': {
u 'distance': 0.5606272601392779,
u 'angle': 121.07077066747541
},
u 'imprinting': {
u 'distance': 0.3549922259116784,
u 'angle': 51.8728440344057
},
u 'rough sets': {
u 'distance': 0.7556870841637823,
u 'angle': 0.0
},
u 'speciation': {
u 'distance': 0.3549922259116784,
u 'angle': 51.8728440344057
},
u 'robot learning': {
u 'distance': 0.5732466205043193,
u 'angle': 75.01844366338882
},
u 'multi-agent learning': {
u 'distance': 0.3539033107593776,
u 'angle': 165.77500580957724
},
u 'supply chain management': {
u 'distance': 0.7412680693648454,
u 'angle': 180.0
},
u 'td error': {
u 'distance': 0.5606272601392779,
u 'angle': 121.07077066747541
},
u 'robocup': {
u 'distance': 0.8025792169619675,
u 'angle': 88.55814464422271
},
u 'kernel-based learning': {
u 'distance': 0.7404347021238603,
u 'angle': 41.29183304013004
},
u 'swarm': {
u 'distance': 0.7556870841637823,
u 'angle': 0.0
},
u 'risk-sensitive control': {
u 'distance': 0.8340971241377915,
u 'angle': 94.75966624638419
},
u 'adaptive control': {
u 'distance': 0.34596782799450027,
u 'angle': 125.34609947124422
},
u 'group behavior': {
u 'distance': 0.5732466205043193,
u 'angle': 75.01844366338882
},
u 'meta-parameters': {
u 'distance': 0.5606272601392779,
u 'angle': 121.07077066747541
},
u "bellman's equation": {
u 'distance': 0.9584860393532658,
u 'angle': 71.16343972789532
},
u 'dynamic environment': {
u 'distance': 0.7014728291381438,
u 'angle': 103.76832781320546
},
u 'neural control': {
u 'distance': 0.8025792169619675,
u 'angle': 88.55814464422271
},
u 'transfer learning': {
u 'distance': 0.6876390048950136,
u 'angle': 150.8840432613797
},
u 'multi-agent systems': {
u 'distance': 0.5732466205043193,
u 'angle': 75.01844366338882
},
u 'monte carlo method': {
u 'distance': 0.7556870841637823,
u 'angle': 0.0
},
u 'learning mobile robots': {
u 'distance': 0.8025792169619675,
u 'angle': 88.55814464422271
},
u 'ethology': {
u 'distance': 0.7556870841637823,
u 'angle': 0.0
},
u 'parallel agents': {
u 'distance': 0.3539033107593776,
u 'angle': 165.7750058095772
},
u 'multi-task learning': {
u 'distance': 0.6876390048950136,
u 'angle': 150.8840432613797
},
u 'autonomous learning robots': {
u 'distance': 0.8025792169619675,
u 'angle': 88.55814464422271
},
u 'optimal control': {
u 'distance': 0.5327106780845866,
u 'angle': 37.59122818518838
}
}, u 'inputs': [
[u 'learning', 1.0, 0.8544341892190607, 1.1491961201808072, -1],
[u 'reinforcement learning', 0.978719279361022, 1.0, 1.1256696437503226, -1],
[u 'reinforcement', 1.0, 0.8544341892190607, 1.1491961201808072, -1]
]
}
// search_query and SERVER are defined elsewhere in the page's script
function notifyServerForQuery()
{
    if (search_query != "")
    {
        var http = new XMLHttpRequest();
        var url = SERVER + "/recQuery";
        var params = JSON.stringify({query: search_query});
        http.onreadystatechange = function() {
            if (http.readyState == 4 && http.status == 200) {
                console.log(http.responseText);   // response from the Python server
            }
        };
        http.open("POST", url, true);
        // the body is a JSON string, so declare it as such
        http.setRequestHeader("Content-Type", "application/json");
        http.send(params);
    }
}
The problem I am facing is that when I send the response back to the JavaScript, the JSON contains Unicode characters, and those get sent back as well. So when I try to parse the JSON on the JavaScript side, it throws an error.
The main thing I want to achieve is to strip the Unicode characters from the JSON, either on the Python server side or in JavaScript. Anything that gets the job done is welcome.
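A note on why the parse fails: in Python 2 the socket file object behind self.wfile effectively coerces its argument with str(), so writing the dict returned by r.json() sends Python's repr (single quotes, u'...' prefixes), which is not valid JSON. A minimal sketch of the difference, using a made-up stand-in for r.json():

# Minimal sketch of the root cause (Python 2, matching the server code above).
# The dict below is a made-up stand-in for r.json().
import json

data = {u'title': u'Feudal Reinforcement Learning', u'is-saved': False}

print str(data)         # Python repr: u'' prefixes, single quotes -> JSON.parse rejects this
print json.dumps(data)  # valid JSON: double quotes, no u prefixes -> JSON.parse accepts this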
Best Answer
You need to encode the output.
If I were you, I would use Python 3, because Python 2 encoding is a headache. Either way, I put together a super encoding function to help you:
def encode_dict(dic, encoding='utf-8'):
    new_dict = {}
    for key, value in dic.items():
        new_key = key.encode(encoding)
        if isinstance(value, list):
            new_dict[new_key] = []
            for item in value:
                if isinstance(item, unicode):
                    new_dict[new_key].append(item.encode(encoding))
                elif isinstance(item, dict):
                    new_dict[new_key].append(encode_dict(item, encoding))  # recurse into nested dicts
                else:
                    new_dict[new_key].append(item)
        elif isinstance(value, unicode):
            new_dict[new_key] = value.encode(encoding)
        elif isinstance(value, dict):
            new_dict[new_key] = encode_dict(value, encoding)  # recurse into nested dicts
        else:
            new_dict[new_key] = value  # keep numbers, booleans, etc. unchanged
    return new_dict
So you would do: self.wfile.write(encode_dict(r.json()))
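One caveat with that last line (an observation, not part of the original answer): encode_dict returns a dict, and writing a dict to wfile still sends Python's repr of it rather than JSON. If the goal is output that JSON.parse can consume directly, one alternative sketch, inside the same do_POST handler, is to serialize with json.dumps before writing; the default ensure_ascii=True escapes any non-ASCII characters as \uXXXX, which JSON.parse handles fine:

# Sketch only: serialize to a JSON string before writing (Python 2, inside do_POST).
import json

payload = json.dumps(r.json())   # ASCII-safe JSON string (non-ASCII escaped as \uXXXX)
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.end_headers()
self.wfile.write(payload)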
Regarding "javascript - Removing unicode characters from python when sending the response back to javascript", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37675941/