
android - Wrong x and y coordinates after scaling the Canvas around a pivot point

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 11:49:04

I'm trying to implement zooming on a Canvas that should focus on a pivot point. The zooming works fine, but afterwards the user should be able to select elements on the Canvas. The problem is that my translation values seem to be incorrect, because they are offset differently than when I don't scale around a pivot point (without pivot-point scaling, dragging works fine). I used some code from this example.

The relevant code is:

class DragView extends View {

    private static float MIN_ZOOM = 0.2f;
    private static float MAX_ZOOM = 2f;

    // These constants specify the mode that we're in
    private static int NONE = 0;
    private int mode = NONE;
    private static int DRAG = 1;
    private static int ZOOM = 2;
    public ArrayList<ProcessElement> elements;

    // Visualization
    private boolean checkDisplay = false;
    private float displayWidth;
    private float displayHeight;
    // These two variables keep track of the X and Y coordinates of the finger when it first
    // touches the screen
    private float startX = 0f;
    private float startY = 0f;
    // These two variables keep track of the amount we need to translate the canvas along the X
    // and the Y coordinate
    // Also the offset from the initial 0,0
    private float translateX = 0f;
    private float translateY = 0f;

    private float lastGestureX = 0;
    private float lastGestureY = 0;

    private float scaleFactor = 1.f;
    private ScaleGestureDetector detector;
    ...

    private void sharedConstructor() {
        elements = new ArrayList<ProcessElement>();
        flowElements = new ArrayList<ProcessFlow>();
        detector = new ScaleGestureDetector(getContext(), new ScaleListener());
    }

    /**
     * checked once to get the measured screen height/width
     * @param hasWindowFocus
     */
    @Override
    public void onWindowFocusChanged(boolean hasWindowFocus) {
        super.onWindowFocusChanged(hasWindowFocus);
        if (!checkDisplay) {
            displayHeight = getMeasuredHeight();
            displayWidth = getMeasuredWidth();
            checkDisplay = true;
        }
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        ProcessBaseElement lastElement = null;

        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                mode = DRAG;

                // Check if an element has been touched.
                // We need the absolute position, which is why we take the offset into consideration.
                touchedElement = isElementTouched(((translateX * -1) + event.getX()) / scaleFactor, (translateY * -1 + event.getY()) / scaleFactor);

                if (touchedElement == null) {
                    // We assign the current X and Y coordinates of the finger to startX and startY
                    // minus the previously translated amount for each coordinate. This works even
                    // the first time we translate, because the initial values of these two
                    // variables are zero.
                    startX = event.getX() - translateX;
                    startY = event.getY() - translateY;
                }
                // If an element has been touched -> no need to take the offset into consideration,
                // because no dragging is possible.
                else {
                    startX = event.getX();
                    startY = event.getY();
                }

                break;

            case MotionEvent.ACTION_MOVE:
                if (mode != ZOOM) {
                    if (touchedElement == null) {
                        translateX = event.getX() - startX;
                        translateY = event.getY() - startY;
                    } else {
                        startX = event.getX();
                        startY = event.getY();
                    }
                }

                if (detector.isInProgress()) {
                    lastGestureX = detector.getFocusX();
                    lastGestureY = detector.getFocusY();
                }

                break;

            case MotionEvent.ACTION_UP:
                mode = NONE;

                break;
            case MotionEvent.ACTION_POINTER_DOWN:
                mode = ZOOM;

                break;
            case MotionEvent.ACTION_POINTER_UP:
                break;
        }

        detector.onTouchEvent(event);
        invalidate();

        return true;
    }

    private ProcessBaseElement isElementTouched(float x, float y) {
        for (int i = elements.size() - 1; i >= 0; i--) {
            if (elements.get(i).isTouched(x, y))
                return elements.get(i);
        }
        return null;
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        canvas.save();

        if (detector.isInProgress()) {
            canvas.scale(scaleFactor, scaleFactor, detector.getFocusX(), detector.getFocusY());
        } else {
            canvas.scale(scaleFactor, scaleFactor, lastGestureX, lastGestureY); // zoom
        }

        // canvas.scale(scaleFactor, scaleFactor);

        // We need to divide by the scale factor here, otherwise we end up with excessive panning
        // based on our zoom level, because the translation amount also gets scaled according to
        // how much we've zoomed into the canvas.
        canvas.translate(translateX / scaleFactor, translateY / scaleFactor);

        drawContent(canvas);

        canvas.restore();
    }

    /**
     * scales the canvas
     */
    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            scaleFactor *= detector.getScaleFactor();
            scaleFactor = Math.max(MIN_ZOOM, Math.min(scaleFactor, MAX_ZOOM));
            return true;
        }
    }
}
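Extracted for clarity, the screen-to-canvas conversion that `ACTION_DOWN` feeds into `isElementTouched` amounts to the following (a standalone sketch with my own class and method names; note that it undoes the pan and the zoom but contains no pivot term, which is exactly the reported problem):

```java
public class HitTestMapping {

    // Same formula as ((translateX * -1) + event.getX()) / scaleFactor above:
    // undo the pan, then undo the zoom. No pivot point is taken into account.
    static float screenToCanvas(float screen, float translate, float scaleFactor) {
        return (-translate + screen) / scaleFactor;
    }

    public static void main(String[] args) {
        // Panned by -50 px and zoomed 2x around the origin:
        // screen x = 150 maps back to canvas x = (50 + 150) / 2 = 100
        System.out.println(screenToCanvas(150f, -50f, 2f)); // 100.0
    }
}
```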

Elements are stored with their absolute position on the Canvas (taking dragging into account). I suspect that I'm not accounting for the new offset from the pivot point in translateX/translateY, but I don't know where and how to do that. Any help would be greatly appreciated.

Best Answer

OK, so you're basically trying to figure out where some screen X/Y coordinate corresponds to after the view has been scaled around some pivot point {Px, Py}.

So, let's try to break this down.

For the sake of the discussion, let's assume Px & Py = 0 and s = 2. This means the view is zoomed by a factor of 2 around the view's top-left corner.

In this case, the screen coordinate {0, 0} corresponds to {0, 0} in the view, since that point is the only one that hasn't changed. Generally speaking, if the screen coordinate equals the pivot point, there is no change.

What happens if the user clicks on some other point, let's say {2, 3}? In that case, the location that used to be {2, 3} has now moved by a factor of 2 away from the pivot point (which is {0, 0}), so the corresponding position is {4, 6}.

This is all easy when the pivot point is {0, 0}, but what happens when it's not?

Well, let's look at another case - the pivot point is now at the view's bottom-right corner (width = w, height = h, i.e. {w, h}). Again, if the user clicks at the same location, then the corresponding position is also {w, h}, but let's say the user clicks somewhere else, for example {w - 2, h - 3}? The same logic applies here: the translated position is {w - 4, h - 6}.

To generalize, what we're trying to do is convert the screen coordinates to the translated coordinates. We need to perform on the received X/Y coordinates the same operation that is performed on each pixel in the zoomed view.

Step 1 - we translate the X/Y position with respect to the pivot point:

X = X - Px
Y = Y - Py

Step 2 - then we scale X and Y:

X = X * s
Y = Y * s

Step 3 - then we translate back:

X = X + Px
Y = Y + Py

If we apply this to the last example I gave (I'll demonstrate for X only):

Original value: X = w - 2, Px = w
Step 1: X <-- X - Px = w - 2 - w = -2
Step 2: X <-- X * s = -2 * 2 = -4
Step 3: X <-- X + Px = -4 + w = w - 4
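The three steps can be collected into a small helper; this is just a restatement of the arithmetic above in plain Java (class and method names are mine):

```java
public class PivotZoom {

    /** Maps a pre-zoom coordinate to its position after scaling by s around a pivot. */
    static float toZoomed(float x, float pivot, float s) {
        float d = x - pivot; // Step 1: translate with respect to the pivot
        d = d * s;           // Step 2: scale
        return d + pivot;    // Step 3: translate back
    }

    public static void main(String[] args) {
        // The worked example, with w = 100: X = w - 2 maps to w - 4
        System.out.println(toZoomed(98f, 100f, 2f)); // 96.0
        // The earlier top-left example: {2, 3} around {0, 0} maps to {4, 6}
        System.out.println(toZoomed(2f, 0f, 2f));    // 4.0
    }
}
```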

Once you apply this to any X/Y you receive that was relevant before the zoom, the point will be translated so that it is relative to the zoomed state.
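For the asker's hit test, the three steps have to be run in reverse: a screen coordinate in the zoomed state is mapped back to its pre-zoom position by undoing step 3, then step 2, then step 1. A sketch of that inverse (my extrapolation from the steps in the answer, not part of the original answer; names are mine):

```java
public class PivotUnzoom {

    /** Inverse of the three steps: maps a zoomed (screen) coordinate back to
        its pre-zoom position, e.g. for hit-testing elements stored unscaled. */
    static float fromZoomed(float x, float pivot, float s) {
        float d = x - pivot; // undo step 3
        d = d / s;           // undo step 2
        return d + pivot;    // undo step 1
    }

    public static void main(String[] args) {
        // Round-trips the worked example (w = 100): w - 4 maps back to w - 2
        System.out.println(fromZoomed(96f, 100f, 2f)); // 98.0
    }
}
```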

Hope this helps.

Regarding "android - Wrong x and y coordinates after scaling the Canvas around a pivot point", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29366418/
