
java - How to perform a drag (based on X,Y mouse coordinates) on Android using AccessibilityService?

Reposted · Author: 行者123 · Updated: 2023-12-02 00:04:27

I'd like to know how to perform a drag on Android based on X,Y mouse coordinates. Consider two simple examples: TeamViewer/QuickSupport drawing a "pattern password" on a remote smartphone, and drawing in Windows Paint, respectively.


What I can do is simulate a touch (using dispatchGesture() and also AccessibilityNodeInfo.ACTION_CLICK).

I found these related links, but I don't know whether they are useful:

Below is my working code that sends mouse coordinates (relative to the PictureBox control) to the remote phone and simulates a touch.

Windows Forms application:

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
        int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
            client.sock.Send(Encoding.UTF8.GetBytes("TOUCH" + xClick + "<|>" + yClick + Environment.NewLine));
    }
}
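The scaling in the handler above is plain integer proportional mapping from the preview control to the remote screen resolution. A minimal plain-Java sketch of the same formula, for off-device checking (the class and method names here are mine, for illustration only):

```java
public class CoordScale {
    // remote = local * remoteSize / previewSize  (same integer math as the C# handler)
    static int scale(int local, int previewSize, int remoteSize) {
        return (local * remoteSize) / previewSize;
    }

    public static void main(String[] args) {
        // Preview 540 px wide, remote screen 1080 px wide: center maps to center.
        System.out.println(scale(270, 540, 1080)); // prints 540
        System.out.println(scale(100, 500, 1920)); // prints 384
    }
}
```

Note that integer division truncates, so the mapped coordinate can be off by up to one remote pixel; for touch simulation that is negligible.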
---

Edit:

My latest attempt was a "swipe screen" using mouse coordinates (C# Windows Forms application) and a custom Android routine (based on the "swipe screen" code linked above), respectively:

private Point mdownPoint = new Point();

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
        {
            int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
            int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

            // Saving start position:
            mdownPoint.X = xClick;
            mdownPoint.Y = yClick;

            client.sock.Send(Encoding.UTF8.GetBytes("TOUCH" + xClick + "<|>" + yClick + Environment.NewLine));
        }
    }
}

private void PictureBox1_MouseMove(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
        {
            int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
            int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

            client.sock.Send(Encoding.UTF8.GetBytes("MOUSESWIPESCREEN" + mdownPoint.X + "<|>" + mdownPoint.Y + "<|>" + xClick + "<|>" + yClick + Environment.NewLine));
        }
    }
}

Android AccessibilityService:

public void Swipe(int x1, int y1, int x2, int y2, int time) {
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.N) {
        System.out.println(" ======= Swipe =======");

        GestureDescription.Builder gestureBuilder = new GestureDescription.Builder();
        Path path = new Path();
        path.moveTo(x1, y1);
        path.lineTo(x2, y2);

        gestureBuilder.addStroke(new GestureDescription.StrokeDescription(path, 100, time));
        dispatchGesture(gestureBuilder.build(), new GestureResultCallback() {
            @Override
            public void onCompleted(GestureDescription gestureDescription) {
                System.out.println("SWIPE Gesture Completed :D");
                super.onCompleted(gestureDescription);
            }
        }, null);
    }
}

This produces the following result (but still can't draw a "pattern password" the way TeamViewer does). As the comment below suggests, I think something similar could be achieved with Continued gestures. Probably. Any suggestions in that direction are welcome.


---

Edit 2:

The solution, of course, is continued gestures, as stated in the previous edit.

Below is a supposedly fixed code that I found here =>

Android AccessibilityService:

// Simulates an L-shaped drag path: 200 pixels right, then 200 pixels down.
Path path = new Path();
path.moveTo(200, 200);
path.lineTo(400, 200);

final GestureDescription.StrokeDescription sd =
        new GestureDescription.StrokeDescription(path, 0, 500, true); // willContinue = true

// The starting point of the second path must match
// the ending point of the first path.
Path path2 = new Path();
path2.moveTo(400, 200);
path2.lineTo(400, 400);

final GestureDescription.StrokeDescription sd2 = sd.continueStroke(path2, 0, 500, false); // 0.5 second

HongBaoService.mService.dispatchGesture(new GestureDescription.Builder().addStroke(sd).build(),
        new AccessibilityService.GestureResultCallback() {

            @Override
            public void onCompleted(GestureDescription gestureDescription) {
                super.onCompleted(gestureDescription);
                HongBaoService.mService.dispatchGesture(new GestureDescription.Builder().addStroke(sd2).build(), null, null);
            }

            @Override
            public void onCancelled(GestureDescription gestureDescription) {
                super.onCancelled(gestureDescription);
            }
        }, null);

My question then is: how do I correctly feed mouse coordinates into the code above, in a way that allows dragging in any direction? Any ideas?
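One way to think about "any direction": continueStroke() only requires that each new path start exactly where the previous one ended, so an arbitrary mouse-point sequence can be split into consecutive segments and chained. A plain-Java sketch of that splitting step (class and method names are mine, not from any answer; points are int[] pairs so it runs off-device):

```java
import java.util.ArrayList;
import java.util.List;

public class StrokeSegments {
    // Splits [p0, p1, p2, ...] into segments {p0->p1, p1->p2, ...}.
    // Each segment starts exactly where the previous one ends, which is
    // the precondition for chaining them with StrokeDescription.continueStroke().
    static List<int[]> toSegments(int[][] points) {
        List<int[]> segments = new ArrayList<>();
        for (int i = 1; i < points.length; i++) {
            segments.add(new int[] {
                points[i - 1][0], points[i - 1][1], // start x, y
                points[i][0],     points[i][1]      // end x, y
            });
        }
        return segments;
    }

    public static void main(String[] args) {
        // The L-shaped example from above: right, then down.
        int[][] pattern = { {200, 200}, {400, 200}, {400, 400} };
        for (int[] s : toSegments(pattern)) {
            System.out.println(s[0] + "," + s[1] + " -> " + s[2] + "," + s[3]);
        }
        // prints:
        // 200,200 -> 400,200
        // 400,200 -> 400,400
    }
}
```

Each segment would then become one Path (moveTo start, lineTo end), with willContinue = true on every stroke except the last.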

---

Edit 3:

I found two routines for performing a drag, but they use UiAutomation + injectInputEvent(). AFAIK, event injection only works in a system app, as stated here and here, and that's not what I want.

These are the routines I found:

So, to reach my goal, I think the second routine is the better fit (following its logic, but without event injection) combined with the code shown in Edit 2: send all the points from pictureBox1_MouseDown and pictureBox1_MouseMove (C# Windows Forms application), dynamically fill a Point[] with them, and on pictureBox1_MouseUp send a command that runs the routine using that filled array. If you have any idea for the first routine, let me know :D.

If, after reading this edit, you have a possible solution, please show it to me in an answer; in the meantime I'll try to test this idea.

Best Answer

Here is an example solution based on Edit 3 of the question.

---

C# Windows Forms application "formMain.cs":

using System.Collections.Generic;
using System.Drawing;
using System.Net.Sockets;
using System.Text;

private List<Point> lstPoints;

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left)
    {
        lstPoints = new List<Point>();
        lstPoints.Add(new Point(e.X, e.Y));
    }
}

private void PictureBox1_MouseMove(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left)
    {
        lstPoints.Add(new Point(e.X, e.Y));
    }
}

private void PictureBox1_MouseUp(object sender, MouseEventArgs e)
{
    lstPoints.Add(new Point(e.X, e.Y));

    StringBuilder sb = new StringBuilder();

    foreach (Point obj in lstPoints)
    {
        sb.Append(Convert.ToString(obj) + ":"); // Each point serializes as "{X=..,Y=..}"
    }

    serverSocket.Send(Encoding.UTF8.GetBytes("MDRAWEVENT" + sb.ToString() + Environment.NewLine));
}
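For reference, Convert.ToString() on a System.Drawing.Point yields the form "{X=10,Y=20}", so the wire message looks like "MDRAWEVENT{X=10,Y=20}:{X=12,Y=25}:". A small plain-Java sketch that builds the same payload, useful for testing the format without the Windows client (class and method names are mine):

```java
public class DrawEventMessage {
    // Builds the same payload the C# MouseUp handler sends:
    // "MDRAWEVENT" followed by one "{X=..,Y=..}:" chunk per recorded point.
    static String build(int[][] points) {
        StringBuilder sb = new StringBuilder("MDRAWEVENT");
        for (int[] p : points) {
            sb.append("{X=").append(p[0]).append(",Y=").append(p[1]).append("}:");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build(new int[][] { {10, 20}, {12, 25} }));
        // prints: MDRAWEVENT{X=10,Y=20}:{X=12,Y=25}:
    }
}
```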

Android service "SocketBackground.java":

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

import android.graphics.Point;

String xline;

BufferedReader xreader = new BufferedReader(
        new InputStreamReader(clientSocket.getInputStream(), StandardCharsets.UTF_8));

while (clientSocket.isConnected()) {

    if (xreader.ready()) {

        while ((xline = xreader.readLine()) != null) {
            xline = xline.trim();

            if (!xline.isEmpty() && xline.contains("MDRAWEVENT")) {

                String coordinates = xline.replace("MDRAWEVENT", "");
                String[] tokens = coordinates.split(Pattern.quote(":"));
                Point[] moviments = new Point[tokens.length];

                for (int i = 0; i < tokens.length; i++) {
                    // Each token looks like "{X=10,Y=20}"
                    String[] xy = tokens[i].replace("{", "").replace("}", "").split(",");

                    int x = Integer.parseInt(xy[0].split("=")[1]);
                    int y = Integer.parseInt(xy[1].split("=")[1]);

                    moviments[i] = new Point(x, y);
                }

                MyAccessibilityService.instance.mouseDraw(moviments, 2000);
            }
        }
    }
}
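The parsing loop above can be pulled out into a pure function, which makes the wire format easy to unit-test off-device. A plain-Java sketch (it returns int[] pairs instead of android.graphics.Point so it runs outside Android; the class and method names are mine):

```java
import java.util.regex.Pattern;

public class DrawEventParser {
    // Parses "MDRAWEVENT{X=10,Y=20}:{X=12,Y=25}:" into {{10,20},{12,25}},
    // mirroring the token handling in SocketBackground.java above.
    // Note: String.split drops the trailing empty string after the last ':'.
    static int[][] parse(String line) {
        String coordinates = line.replace("MDRAWEVENT", "");
        String[] tokens = coordinates.split(Pattern.quote(":"));
        int[][] points = new int[tokens.length][2];
        for (int i = 0; i < tokens.length; i++) {
            String[] xy = tokens[i].replace("{", "").replace("}", "").split(",");
            points[i][0] = Integer.parseInt(xy[0].split("=")[1]);
            points[i][1] = Integer.parseInt(xy[1].split("=")[1]);
        }
        return points;
    }

    public static void main(String[] args) {
        int[][] pts = parse("MDRAWEVENT{X=10,Y=20}:{X=12,Y=25}:");
        System.out.println(pts.length + " points, first = " + pts[0][0] + "," + pts[0][1]);
        // prints: 2 points, first = 10,20
    }
}
```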

Android AccessibilityService "MyAccessibilityService.java":

public void mouseDraw(Point[] segments, int time) {
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {

        Path path = new Path();
        path.moveTo(segments[0].x, segments[0].y);

        for (int i = 1; i < segments.length; i++) {

            path.lineTo(segments[i].x, segments[i].y);

            GestureDescription.StrokeDescription sd = new GestureDescription.StrokeDescription(path, 0, time);

            dispatchGesture(new GestureDescription.Builder().addStroke(sd).build(), new AccessibilityService.GestureResultCallback() {

                @Override
                public void onCompleted(GestureDescription gestureDescription) {
                    super.onCompleted(gestureDescription);
                }

                @Override
                public void onCancelled(GestureDescription gestureDescription) {
                    super.onCancelled(gestureDescription);
                }
            }, null);
        }
    }
}

Regarding "java - How to perform a drag (based on X,Y mouse coordinates) on Android using AccessibilityService?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59278085/
