I'm trying to speed up one of my functions.
import numpy as np

def get_scale_local_maximas(cube_coordinates, laplacian_cube):
    """
    Check the provided cube coordinates for scale-space local maxima.
    Returns only the points that satisfy the criteria.
    A point is considered a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first or the last scale level, then only one inequality has to
    hold for the point to be a local scale maximum.
    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y,x,scale_level)``,
        where ``(y,x)`` are the coordinates of the blob and ``scale_level`` is the
        position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.
    Returns
    -------
    output : (n, 3) ndarray
        cube_coordinates that satisfy the local maximum criteria in
        scale space.
    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    amount_of_layers = laplacian_cube.shape[2]
    amount_of_points = cube_coordinates.shape[0]
    # Preallocate the index array, filled with True
    accepted_points_index = np.ones(amount_of_points, dtype=bool)
    for point_index, interest_point_coords in enumerate(cube_coordinates):
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number, starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]
        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
    # Return only the accepted points
    return cube_coordinates[accepted_points_index]
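As a quick sanity check, the acceptance rule can be traced by hand on the dummy data from the docstring:

```python
import numpy as np

# Rebuild the docstring's dummy scale space: shape (2, 3, 3), indexed [y, x, layer]
one = np.array([[1, 2, 3], [4, 5, 6]])
two = np.array([[7, 8, 9], [10, 11, 12]])
three = np.array([[0, 0, 0], [0, 0, 0]])
lapl_dummy = np.dstack([one, two, three])

# Responses along the scale axis at (y=1, x=0)
print(lapl_dummy[1, 0, :])  # [ 4 10  0]

# (1, 0, 1): response 10 beats both neighbours (4 below, 0 above) -> kept
# (1, 0, 0): first layer, but the layer above has 10 >= 4 -> rejected
# (1, 0, 2): last layer, but the layer below has 10 >= 0 -> rejected
```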
Here is my attempt to speed it up using Cython:
# cython: cdivision=True
# cython: boundscheck=False
# cython: nonecheck=False
# cython: wraparound=False
import numpy as np
cimport numpy as cnp

def get_scale_local_maximas(cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube):
    """
    Check the provided cube coordinates for scale-space local maxima.
    Returns only the points that satisfy the criteria.
    A point is considered a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first or the last scale level, then only one inequality has to
    hold for the point to be a local scale maximum.
    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y,x,scale_level)``,
        where ``(y,x)`` are the coordinates of the blob and ``scale_level`` is the
        position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.
    Returns
    -------
    output : (n, 3) ndarray
        cube_coordinates that satisfy the local maximum criteria in
        scale space.
    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    cdef Py_ssize_t y_coord, x_coord, point_layer, point_index
    cdef cnp.double_t point_response, lower_point_response, upper_point_response
    cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2]
    cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0]
    # Preallocate the index array, filled with True
    accepted_points_index = np.ones(amount_of_points, dtype=bool)
    for point_index in range(amount_of_points):
        interest_point_coords = cube_coordinates[point_index]
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number, starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]
        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
    # Return only the accepted points
    return cube_coordinates[accepted_points_index]
But I don't see any speedup. I also tried replacing cnp.ndarray[cnp.double_t, ndim=3] with a memoryview, cnp.double_t[:, :, ::1], but that only slowed the whole thing down. I would appreciate any hints or corrections to my code; I'm relatively new to Cython and may be doing something wrong.
Edit:
I completely rewrote my function in Cython:
def get_scale_local_maximas(cnp.ndarray[cnp.int_t, ndim=2] cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube):
    """
    Check the provided cube coordinates for scale-space local maxima.
    Returns only the points that satisfy the criteria.
    A point is considered a local maximum if its value is greater
    than the value of the point on the next scale level and the point
    on the previous scale level. If the tested point is located on the
    first or the last scale level, then only one inequality has to
    hold for the point to be a local scale maximum.
    Parameters
    ----------
    cube_coordinates : (n, 3) ndarray
        A 2d array with each row representing 3 values, ``(y,x,scale_level)``,
        where ``(y,x)`` are the coordinates of the blob and ``scale_level`` is the
        position of the point in scale space.
    laplacian_cube : ndarray of floats
        Laplacian of Gaussian scale space.
    Returns
    -------
    output : (n, 3) ndarray
        cube_coordinates that satisfy the local maximum criteria in
        scale space.
    Examples
    --------
    >>> one = np.array([[1, 2, 3], [4, 5, 6]])
    >>> two = np.array([[7, 8, 9], [10, 11, 12]])
    >>> three = np.array([[0, 0, 0], [0, 0, 0]])
    >>> check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
    >>> lapl_dummy = np.dstack([one, two, three])
    >>> get_scale_local_maximas(check_coords, lapl_dummy)
    array([[1, 0, 1]])
    """
    cdef Py_ssize_t y_coord, x_coord, point_layer, point_index
    cdef cnp.double_t point_response, lower_point_response, upper_point_response
    cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2]
    cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0]
    # Preallocate the index array, filled with True
    accepted_points_index = np.ones(amount_of_points, dtype=bool)
    for point_index in range(amount_of_points):
        interest_point_coords = cube_coordinates[point_index]
        # Row coordinate
        y_coord = interest_point_coords[0]
        # Column coordinate
        x_coord = interest_point_coords[1]
        # Layer number, starting from the smallest sigma
        point_layer = interest_point_coords[2]
        point_response = laplacian_cube[y_coord, x_coord, point_layer]
        # Check the point below the current one
        if point_layer != 0:
            lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1]
            if lower_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
        # Check the point above the current one
        if point_layer != (amount_of_layers-1):
            upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1]
            if upper_point_response >= point_response:
                accepted_points_index[point_index] = False
                continue
    # Return only the accepted points
    return cube_coordinates[accepted_points_index]
Then I ran some benchmarks comparing my function with the suggested vectorized one:
%timeit compiled.get_scale_local_maximas_np(coords, lapl_dummy)
%timeit compiled.get_scale_local_maximas(coords, lapl_dummy)
%timeit dynamic.get_scale_local_maximas_np(coords, lapl_dummy)
%timeit dynamic.get_scale_local_maximas(coords, lapl_dummy)
10000 loops, best of 3: 101 µs per loop
1000 loops, best of 3: 328 µs per loop
10000 loops, best of 3: 103 µs per loop
1000 loops, best of 3: 1.6 ms per loop
The compiled namespace refers to the two functions compiled with Cython; the dynamic namespace refers to the plain Python file. So I conclude that in this case the numpy approach is better.
Best answer:
Your Python code can still be improved, because you haven't yet done "98% of it in numpy": you are still iterating over the rows of the coordinate array and performing one or two checks per row. You can do it in fully vectorized form using numpy's "fancy indexing" and masks:
def get_scale_local_maximas_full_np(coords, cube):
    # Columns are (y, x, scale_level), per the question's docstring
    y, x, z = [coords[:, ind] for ind in range(3)]
    point_responses = cube[y, x, z]
    lowers = point_responses.copy()
    uppers = point_responses.copy()
    not_layer_0 = z > 0
    lower_responses = cube[y[not_layer_0], x[not_layer_0], z[not_layer_0]-1]
    lowers[not_layer_0] = lower_responses
    not_max_layer = z < (cube.shape[2] - 1)
    upper_responses = cube[y[not_max_layer], x[not_max_layer], z[not_max_layer]+1]
    uppers[not_max_layer] = upper_responses
    lo_check = np.ones(z.shape, dtype=bool)  # plain bool; np.bool is deprecated
    lo_check[not_layer_0] = (point_responses > lowers)[not_layer_0]
    hi_check = np.ones(z.shape, dtype=bool)
    hi_check[not_max_layer] = (point_responses > uppers)[not_max_layer]
    return coords[lo_check & hi_check]
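To make the masking logic concrete, here is a condensed, self-contained variant of the same fancy-indexing idea (the function and variable names here are mine, not from the original answer), checked against the dummy data from the question's docstring:

```python
import numpy as np

def scale_local_maximas_np(coords, cube):
    # Gather the response at each (row, col, layer) triple via fancy indexing
    r, c, z = coords[:, 0], coords[:, 1], coords[:, 2]
    resp = cube[r, c, z]
    keep = np.ones(len(coords), dtype=bool)
    # Points with a layer below must strictly beat it
    has_lower = z > 0
    keep[has_lower] &= resp[has_lower] > cube[r[has_lower], c[has_lower], z[has_lower] - 1]
    # Points with a layer above must strictly beat it
    has_upper = z < cube.shape[2] - 1
    keep[has_upper] &= resp[has_upper] > cube[r[has_upper], c[has_upper], z[has_upper] + 1]
    return coords[keep]

one = np.array([[1, 2, 3], [4, 5, 6]])
two = np.array([[7, 8, 9], [10, 11, 12]])
lapl_dummy = np.dstack([one, two, np.zeros((2, 3), dtype=int)])
check = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]])
print(scale_local_maximas_np(check, lapl_dummy))  # [[1 0 1]]
```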
I generated a larger data set to test the performance:
lapl_dummy = np.random.rand(100, 100, 100)
coords = np.random.randint(0, 100, size=(1000, 3))  # random_integers(0, 99, ...) is deprecated
I got the following timing results:
In [146]: %timeit get_scale_local_maximas_full_np(coords, lapl_dummy)
10000 loops, best of 3: 175 µs per loop
In [147]: %timeit get_scale_local_maximas(coords, lapl_dummy)
100 loops, best of 3: 2.24 ms per loop
Of course, be careful with performance tests, since results often depend on the data used. I don't have much experience with Cython, so I can't help you on that front.
Regarding python - speeding up a function with Cython, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29155745/