
python - Probing Python functions


Is there something I can do in Python that gives me the sub-modules/parameters available inside a function?

In the interpreter, I can do this:

>>> from nltk import pos_tag
>>> dir(pos_tag)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']

By the way, what does the dir(function) call actually do?

How do I know which arguments are needed to call the function? For pos_tag, the source code says it takes tokens, see https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py

def pos_tag(tokens):
    """
    Use NLTK's currently recommended part of speech tagger to
    tag the given list of tokens.
    >>> from nltk.tag import pos_tag # doctest: +SKIP
    >>> from nltk.tokenize import word_tokenize # doctest: +SKIP
    >>> pos_tag(word_tokenize("John's big idea isn't all that bad.")) # doctest: +SKIP
    [('John', 'NNP'), ("'s", 'POS'), ('big', 'JJ'), ('idea', 'NN'), ('is',
    'VBZ'), ("n't", 'RB'), ('all', 'DT'), ('that', 'DT'), ('bad', 'JJ'),
    ('.', '.')]
    :param tokens: Sequence of tokens to be tagged
    :type tokens: list(str)
    :return: The tagged tokens
    :rtype: list(tuple(str, str))
    """
    tagger = load(_POS_TAGGER)
    return tagger.tag(tokens)

If a docstring is available for the function, is there a way to find out what type the function expects for a specific parameter? For example, in the pos_tag case above it is ':param tokens: Sequence of tokens to be tagged' and ':type tokens: list(str)'. Can this information be obtained at runtime from the interpreter, without reading the code?

And lastly, is there a way to know what the return type is?

To be clear, I am not looking for a printout of the docstring; the questions above are so that I can later do some kind of type checking with isinstance(output_object, type).

Best Answer

Here are the answers to your four questions. I'm afraid some of what you want to do is not possible with the standard library, unless you want to parse the docstrings yourself.

(1) BTW, what does the dir(function) call do?

If I understand the question correctly, I believe the documentation answers it here:

If the object has a method named __dir__(), this method will be called and must return the list of attributes. This allows objects that implement a custom __getattr__() or __getattribute__() function to customize the way dir() reports their attributes.

If the object does not provide __dir__(), the function tries its best to gather information from the object’s __dict__ attribute, if defined, and from its type object.
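As a quick illustration of the first case (a minimal sketch; Tagger is a made-up class, not part of nltk), a class can control exactly what dir() reports by defining __dir__:

>>> class Tagger(object):
...     """Hypothetical class used only to demonstrate __dir__."""
...     def __dir__(self):
...         # dir() calls this and reports the names it returns
...         return ['evaluate', 'tag', 'train']
...
>>> dir(Tagger())
['evaluate', 'tag', 'train']

A plain function such as pos_tag has no custom __dir__, so dir() falls back to gathering names from the function's __dict__ and from its type, which is why you see the long list of default function attributes above.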

(2) How do I know what parameters are necessary to call the function?

The best way is to use inspect:

>>> from nltk import pos_tag
>>> from inspect import getargspec
>>> getargspec(pos_tag)
ArgSpec(args=['tokens'], varargs=None, keywords=None, defaults=None) # a named tuple
>>> getargspec(pos_tag).args
['tokens']
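One caveat, not from the original answer: getargspec is a Python 2 era API; it was deprecated in Python 3 and removed in Python 3.11. On Python 3 the same information comes from inspect.signature, roughly like this (output shown for the one-argument pos_tag from the question; newer NLTK releases add more parameters):

>>> import inspect
>>> from nltk import pos_tag
>>> sig = inspect.signature(pos_tag)
>>> str(sig)                 # the full signature as a string
'(tokens)'
>>> list(sig.parameters)     # just the parameter names, in order
['tokens']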

(3) If a docstring is available for the function is there a way to know what is the parameter type that the function is expecting for a specific parameter?

Not with the standard library, unless you want to parse the docstring yourself. You probably already know that you can access the docstring like this:

>>> from inspect import getdoc
>>> print getdoc(pos_tag)
Use NLTK's currently recommended part of speech tagger to
tag the given list of tokens.

>>> from nltk.tag import pos_tag
>>> from nltk.tokenize import word_tokenize
>>> pos_tag(word_tokenize("John's big idea isn't all that bad."))
[('John', 'NNP'), ("'s", 'POS'), ('big', 'JJ'), ('idea', 'NN'), ('is',
'VBZ'), ("n't", 'RB'), ('all', 'DT'), ('that', 'DT'), ('bad', 'JJ'),
('.', '.')]

:param tokens: Sequence of tokens to be tagged
:type tokens: list(str)
:return: The tagged tokens
:rtype: list(tuple(str, str))

Or like this:

>>> print pos_tag.func_code.co_consts[0]

Use NLTK's currently recommended part of speech tagger to
tag the given list of tokens.

>>> from nltk.tag import pos_tag
>>> from nltk.tokenize import word_tokenize
>>> pos_tag(word_tokenize("John's big idea isn't all that bad."))
[('John', 'NNP'), ("'s", 'POS'), ('big', 'JJ'), ('idea', 'NN'), ('is',
'VBZ'), ("n't", 'RB'), ('all', 'DT'), ('that', 'DT'), ('bad', 'JJ'),
('.', '.')]

:param tokens: Sequence of tokens to be tagged
:type tokens: list(str)
:return: The tagged tokens
:rtype: list(tuple(str, str))
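A side note that is not part of the original answer: func_code, func_doc and the other func_* attributes in the dir() listing above are Python 2 names. Python 3 dropped those aliases in favour of the dunder spellings, so the same trick would be written roughly like this:

# Python 3 spelling of the attribute access shown above
from nltk import pos_tag

print(pos_tag.__doc__)                 # was pos_tag.func_doc in Python 2
print(pos_tag.__code__.co_consts[0])   # was pos_tag.func_code; in CPython this is the same docstring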

If you want to try parsing the parameters and "types" yourself, you could start with a regular expression. But obviously I am using the word "type" loosely here. Also, this approach will only work with docstrings that list parameters and types in this particular way:

>>> import re
>>> params = re.findall(r'(?<=:)type\s+([\w]+):\s*(.*?)(?=\n|$)', getdoc(pos_tag))
>>> for param, type_ in params:
...     print param, '=>', type_
...
tokens => list(str)

The result of this approach will of course give you the parameters and their corresponding descriptions. You could also check every word in the description, by splitting the string and keeping only the words that satisfy the following:

>>> isinstance(eval(word), type)
True
>>> isinstance(eval('list'), type)
True

But this approach gets complicated quickly, especially when you try to parse the last parameter of pos_tag. Besides, docstrings often do not have this format at all. So this would probably only work with nltk, and even then not all of the time.
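If you do go down that road, a slightly safer variant of the eval() check above (my own sketch, not part of the original answer; Python 3 spelling, the module is called __builtin__ on Python 2) is to look the word up in the builtins module instead of evaluating arbitrary text:

>>> import builtins
>>> def builtin_type(name):
...     """Return the builtin type called `name`, or None if there is no such type."""
...     obj = getattr(builtins, name, None)
...     return obj if isinstance(obj, type) else None
...
>>> builtin_type('list')
<class 'list'>
>>> builtin_type('tagged') is None
True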

(4) And lastly, is there a way to know what is the return type?

Again, I'm afraid not, unless you want to comb the docstring with the regex approach above. The return type may vary with the type of the argument(s). (Think of any function that works with any iterable.) If you want to try extracting this information from the docstring (again, only for the exact format of the pos_tag docstring), you can try another regex:

>>> return_ = re.search(r'(?<=:)rtype:\s*(.*?)(?=\n|$)', getdoc(pos_tag))
>>> if return_:
...     print 'return "type" =', return_.group()
...
return "type" = rtype: list(tuple(str, str))

Otherwise, the best we can do here is grab the source code (which, again, is exactly what you said you did not want):

>>> import inspect
>>> print inspect.getsource(pos_tag)
def pos_tag(tokens):
    """
    Use NLTK's currently recommended part of speech tagger to
    tag the given list of tokens.

    >>> from nltk.tag import pos_tag
    >>> from nltk.tokenize import word_tokenize
    >>> pos_tag(word_tokenize("John's big idea isn't all that bad."))
    [('John', 'NNP'), ("'s", 'POS'), ('big', 'JJ'), ('idea', 'NN'), ('is',
    'VBZ'), ("n't", 'RB'), ('all', 'DT'), ('that', 'DT'), ('bad', 'JJ'),
    ('.', '.')]

    :param tokens: Sequence of tokens to be tagged
    :type tokens: list(str)
    :return: The tagged tokens
    :rtype: list(tuple(str, str))
    """
    tagger = load(_POS_TAGGER)
    return tagger.tag(tokens)

About python - Probing Python functions, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27263605/
