
python - How to perform an arbitrary affine transformation on a color image with scipy's affine_transform?


My goal is to transform an image such that three source points are mapped onto three target points in an empty array. I have already solved the problem of finding the correct affine matrix, but I cannot apply that transformation to a color image.

More specifically, I am struggling to use the scipy.ndimage.interpolation.affine_transform method correctly. As the answer to this question points out, the affine_transform method can be somewhat unintuitive (especially regarding the offset calculation); however, user timday shows how to apply a rotation and a shear to an image and position it inside another array, and user geodata gives more background information.

My problem is to generalize the approach shown there (1) to color images and (2) to an arbitrary transformation that I computed myself.
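For reference, below is a minimal sketch of the usual pattern for applying scipy.ndimage.affine_transform to an RGB array, assuming (row, column) coordinates; the helper name warp_rgb_with_scipy is made up for illustration and is not code from the question. The key points are that the matrix maps output coordinates back to input coordinates, so the inverse of the forward (source-to-target) mapping is passed, and that the channel axis gets a 1 on the diagonal so the colors are left untouched.

import numpy as np
from scipy import ndimage


def warp_rgb_with_scipy(image, forward_2x2, forward_offset, output_shape):
    # image: (H, W, 3) array; forward_2x2 / forward_offset describe the
    # source -> target mapping in (row, col) coordinates.
    # scipy expects the target -> source mapping, i.e. the inverse.
    inv = np.linalg.inv(forward_2x2)
    inv_offset = -inv.dot(forward_offset)

    # Extend to 3x3 / length 3 so the channel axis is mapped onto itself.
    matrix = np.eye(3)
    matrix[:2, :2] = inv
    offset = np.array([inv_offset[0], inv_offset[1], 0.0])

    return ndimage.affine_transform(image, matrix, offset=offset,
                                    output_shape=output_shape,
                                    order=1, cval=255)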

Here is my code (it should run on your machine):

import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt


def calcAffineMatrix(sourcePoints, targetPoints):
    # For three source- and three target points, find the affine transformation
    # Function works correctly, not part of the question
    A = []
    b = []
    for sp, trg in zip(sourcePoints, targetPoints):
        A.append([sp[0], 0, sp[1], 0, 1, 0])
        A.append([0, sp[0], 0, sp[1], 0, 1])
        b.append(trg[0])
        b.append(trg[1])
    result, resids, rank, s = np.linalg.lstsq(np.array(A), np.array(b))

    a0, a1, a2, a3, a4, a5 = result
    # Ignoring offset here, later use timday's suggested offset calculation
    affineTrafo = np.array([[a0, a1, 0], [a2, a3, 0], [0, 0, 1]], 'd')

    # Testing the correctness of transformation matrix
    for i, _ in enumerate(sourcePoints):
        src = sourcePoints[i]
        src.append(1.)
        trg = targetPoints[i]
        trg.append(1.)
        at = affineTrafo.copy()
        at[2, 0:2] = [a4, a5]
        assert(np.array_equal(np.round(np.array(src).dot(at)), np.array(trg)))
    return affineTrafo


# Prepare source image
sourcePoints = [[162., 112.], [130., 112.], [162., 240.]]
targetPoints = [[180., 102.], [101., 101.], [190., 200.]]
image = np.empty((300, 300, 3), dtype='uint8')
image[:] = 255
# Mark border for better visibility
image[0:2, :] = 0
image[-3:-1, :] = 0
image[:, 0:2] = 0
image[:, -3:-1] = 0
# Mark source points in red
for sp in sourcePoints:
    sp = [int(u) for u in sp]
    image[sp[1] - 5:sp[1] + 5, sp[0] - 5:sp[0] + 5, :] = np.array([255, 0, 0])

# Show image
plt.subplot(3, 1, 1)
plt.imshow(image)

# Prepare array in which the image is placed
array = np.empty((400, 300, 3), dtype='uint8')
array[:] = 255
a2 = array.copy()
# Mark target points in blue
for tp in targetPoints:
    tp = [int(u) for u in tp]
    a2[tp[1] - 2:tp[1] + 2, tp[0] - 2:tp[0] + 2] = [0, 0, 255]

# Show array
plt.subplot(3, 1, 2)
plt.imshow(a2)

# Next 5 program lines are actually relevant for question:

# Calculate affine matrix
affineTrafo = calcAffineMatrix(sourcePoints, targetPoints)

# This follows the c_in-c_out method proposed in linked stackoverflow issue
# extended for color channel (no translation here)
c_in = np.array([sourcePoints[0][0], sourcePoints[0][1], 0])
c_out = np.array([targetPoints[0][0], targetPoints[0][1], 0])
offset = (c_in - np.dot(c_out, affineTrafo))

# Affine transform!
ndimage.interpolation.affine_transform(image, affineTrafo, order=2, offset=offset,
                                       output=array, output_shape=array.shape,
                                       cval=255)
# Mark blue target points in array, expected to be above red source points
for tp in targetPoints:
    tp = [int(u) for u in tp]
    array[tp[1] - 2:tp[1] + 2, tp[0] - 2:tp[0] + 2] = [0, 0, 255]

plt.subplot(3, 1, 3)
plt.imshow(array)

plt.show()

Other things I have tried include using the inverse of affineTrafo, its transpose, or both:

affineTrafo = np.linalg.inv(affineTrafo)
affineTrafo = affineTrafo.T
affineTrafo = np.linalg.inv(affineTrafo.T)
affineTrafo = np.linalg.inv(affineTrafo).T

In his answer, geodata shows how to calculate the matrix that affine_trafo needs for a scaling and a rotation:

If one wants a scaling S first and then a rotation R it holds that T=R*S and therefore T.inv=S.inv*R.inv (note the reversed order).
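A small numeric sketch of that statement, assuming a 30° rotation and an anisotropic scaling: compose the scaling first and the rotation second, and check that the inverse composes in reversed order.

import numpy as np

theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation
S = np.diag([2.0, 0.5])                          # scaling, applied first

T = R.dot(S)  # scaling first, then rotation: T = R*S
assert np.allclose(np.linalg.inv(T),
                   np.linalg.inv(S).dot(np.linalg.inv(R)))  # T.inv = S.inv*R.inv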

I tried to replicate this using a matrix decomposition (splitting my affine transformation via SVD into a rotation, a scaling, and another rotation):

u, s, v = np.linalg.svd(affineTrafo[:2,:2])
uInv = np.linalg.inv(u)
sInv = np.linalg.inv(np.diag((s)))
vInv = np.linalg.inv(v)
affineTrafo[:2, :2] = uInv.dot(sInv).dot(vInv)

Again, without success.

For all of my results, this is not (just) an offset problem: the plots clearly show that the relative positions of the source and target points do not correspond.

I have searched the web and Stack Overflow but have not found an answer to my problem. Please help me! :)

Best answer

Thanks to AlexanderReynolds' hint to use another library, I finally got it working. This is of course a workaround; I could not get it to work with scipy's affine_transform, so I used OpenCV's cv2.warpAffine instead. In case this helps anyone else, here is my code:

import numpy as np
import matplotlib.pyplot as plt
import cv2

# Prepare source image
sourcePoints = [[162., 112.], [130., 112.], [162., 240.]]
targetPoints = [[180., 102.], [101., 101.], [190., 200.]]
image = np.empty((300, 300, 3), dtype='uint8')
image[:] = 255
# Mark border for better visibility
image[0:2, :] = 0
image[-3:-1, :] = 0
image[:, 0:2] = 0
image[:, -3:-1] = 0
# Mark source points in red
for sp in sourcePoints:
    sp = [int(u) for u in sp]
    image[sp[1] - 5:sp[1] + 5, sp[0] - 5:sp[0] + 5, :] = np.array([255, 0, 0])

# Show image
plt.subplot(3, 1, 1)
plt.imshow(image)

# Prepare array in which the image is placed
array = np.empty((400, 300, 3), dtype='uint8')
array[:] = 255
a2 = array.copy()
# Mark target points in blue
for tp in targetPoints:
    tp = [int(u) for u in tp]
    a2[tp[1] - 2:tp[1] + 2, tp[0] - 2:tp[0] + 2] = [0, 0, 255]

# Show array
plt.subplot(3, 1, 2)
plt.imshow(a2)

# Calculate affine matrix and transform image
M = cv2.getAffineTransform(np.float32(sourcePoints), np.float32(targetPoints))
# warpAffine expects dsize as (width, height)
array = cv2.warpAffine(image, M, (array.shape[1], array.shape[0]), borderValue=[255, 255, 255])

# Mark blue target points in array, expected to be above red source points
for tp in targetPoints:
    tp = [int(u) for u in tp]
    array[tp[1] - 2:tp[1] + 2, tp[0] - 2:tp[0] + 2] = [0, 0, 255]

plt.subplot(3, 1, 3)
plt.imshow(array)

plt.show()

Comments:

  • It is interesting how this worked almost immediately after changing the library. After spending more than a day trying to get it to work with scipy, the lesson for me is to switch libraries sooner next time.
  • If anyone wants to find a (least-squares) approximation of an affine transformation based on more than three points, this is how you get a matrix that works with cv2.warpAffine:

Code:

def calcAffineMatrix(sourcePoints, targetPoints):
    # For three or more source and target points, find the affine transformation
    A = []
    b = []
    for sp, trg in zip(sourcePoints, targetPoints):
        A.append([sp[0], 0, sp[1], 0, 1, 0])
        A.append([0, sp[0], 0, sp[1], 0, 1])
        b.append(trg[0])
        b.append(trg[1])
    result, resids, rank, s = np.linalg.lstsq(np.array(A), np.array(b))

    a0, a1, a2, a3, a4, a5 = result
    affineTrafo = np.float32([[a0, a2, a4], [a1, a3, a5]])
    return affineTrafo
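A short usage sketch of this function together with cv2.warpAffine, assuming the sourcePoints, targetPoints, and image arrays from the listing above; note that warpAffine's dsize argument is (width, height), not array.shape[:2]:

import cv2

M = calcAffineMatrix(sourcePoints, targetPoints)  # 2x3 float32 matrix
rows, cols = 400, 300                             # desired output size
warped = cv2.warpAffine(image, M, (cols, rows), borderValue=[255, 255, 255])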

Regarding "python - How to perform an arbitrary affine transformation on a color image with scipy's affine_transform?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44674129/
