
c#-4.0 - How to capture motion data in Kinect SDK v1.7 (Avateering XNA)?


I am new to Kinect SDK v1.7.

I would like to know how to capture motion data from the sample ( http://msdn.microsoft.com/en-us/library/jj131041.aspx ).

So, how can I make a program that captures the skeleton data to a file? (recording)

And then reads the file back into the sample program and animates the model with it? (playback)

My idea is to record the skeleton data to a file, then fetch the skeleton data from the file and have the avatar play it back.

I can already do what I want in another sample program ( http://msdn.microsoft.com/en-us/library/hh855381 ), because that sample only draws lines and skeleton points.

For example,

00001 00:00:00.0110006@353,349,354,332,358,249,353,202,310,278,286,349,269,407,266,430,401,279,425,349,445,408,453,433,332,369,301,460,276,539,269,565,372,370,379,466,387,548,389,575,

00002 00:00:00.0150008@352,349,353,332,356,249,352,202,309,278,284,349,266,406,263,430,398,279,424,349,445,408,453,433,331,369,301,461,277,541,271,566,371,371,379,466,387,548,390,575,

[frame number] [timestamp]@[skeleton joint coordinates]

In this example, I assume the joint positions are in joint-ID order.
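Assuming that layout (and that each line holds x,y pairs per joint, as the 40 values per 20-joint frame above suggest), a minimal, self-contained parsing sketch might look like this; the sample line is shortened from the recording above:

```csharp
using System;
using System.Globalization;

class FrameLineDemo
{
    static void Main()
    {
        // One recorded line: [frame number] [timestamp]@[coordinates]
        string line = "00001 00:00:00.0110006@353,349,354,332,358,249,353,202,";

        string[] parts = line.Split('@');
        string[] header = parts[0].Split(' ');
        int frameNumber = int.Parse(header[0]);
        TimeSpan timestamp = TimeSpan.Parse(header[1], CultureInfo.InvariantCulture);

        // The trailing comma produces an empty last entry, so strip empties.
        string[] coords = parts[1].Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        // With x,y pairs, every two values describe one joint.
        int jointCount = coords.Length / 2;
        Console.WriteLine("frame {0}, t={1}ms, {2} joints",
            frameNumber, timestamp.TotalMilliseconds, jointCount);
    }
}
```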

Thank you (and please excuse my poor English).

Best Answer

You can use a StreamWriter, initialize it with a path of your choice, and then for each frame: increment a frame counter, write it to the file, write the timestamp, and then loop through the joints and write them as well. I would do it like this:

using System;
using System.IO;
using System.Linq;
using Microsoft.Kinect;

StreamWriter writer = new StreamWriter(path);
Skeleton[] skeletons = new Skeleton[6]; // the sensor tracks up to six skeletons
int frames = 0;

...

void AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    frames++;
    using (SkeletonFrame sFrame = e.OpenSkeletonFrame())
    {
        if (sFrame == null)
            return;

        sFrame.CopySkeletonDataTo(skeletons);

        Skeleton skeleton = (from s in skeletons
                             where s.TrackingState == SkeletonTrackingState.Tracked
                             select s).FirstOrDefault();
        if (skeleton == null)
            return;

        // SkeletonFrame.Timestamp is in milliseconds; format it however you like
        writer.Write("{0} {1}@", frames, sFrame.Timestamp);
        foreach (Joint joint in skeleton.Joints)
        {
            writer.Write(joint.Position.X + "," + joint.Position.Y + "," + joint.Position.Z + ",");
        }
        writer.Write(Environment.NewLine);
    }
}

Then, to read back from the file:

int frame = 0;
JointCollection joints; // take this from a real Skeleton once; it is overwritten every frame

...

string[] lines = File.ReadAllLines(path);

...

void AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    canvas.Children.Clear();
    string[] coords = lines[frame].Split('@')[1]
        .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    int jointIndex = 0;
    for (int i = 0; i < coords.Length; i += 3)
    {
        // Joint is a struct, so modify a copy and write it back into the collection
        Joint joint = joints[(JointType)jointIndex];
        joint.Position = new SkeletonPoint
        {
            X = float.Parse(coords[i]),
            Y = float.Parse(coords[i + 1]),
            Z = float.Parse(coords[i + 2])
        };
        joints[(JointType)jointIndex] = joint;
        jointIndex++;
    }

    DepthImageFrame depthFrame = e.OpenDepthImageFrame();
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.Spine, JointType.ShoulderCenter, JointType.Head }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderLeft, JointType.ElbowLeft, JointType.WristLeft, JointType.HandLeft }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight, JointType.HandRight }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipLeft, JointType.KneeLeft, JointType.AnkleLeft, JointType.FootLeft }, depthFrame, canvas));
    canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipRight, JointType.KneeRight, JointType.AnkleRight, JointType.FootRight }, depthFrame, canvas));
    depthFrame.Dispose();

    frame++;
}

Point GetDisplayPosition(Joint joint, DepthImageFrame depthFrame, Canvas skeleton)
{
    KinectSensor sensor = KinectSensor.KinectSensors[0];
    DepthImageFormat depthImageFormat = sensor.DepthStream.Format;
    // depthPoint is already in depth-image pixel coordinates
    DepthImagePoint depthPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(joint.Position, depthImageFormat);

    ColorImagePoint colorPoint = sensor.CoordinateMapper.MapDepthPointToColorPoint(depthImageFormat, depthPoint, ColorImageFormat.RgbResolution640x480Fps30);
    int colorX = colorPoint.X;
    int colorY = colorPoint.Y;

    return new System.Windows.Point((int)(skeleton.Width * colorX / 640.0), (int)(skeleton.Height * colorY / 480.0));
}

Polyline GetBodySegment(JointCollection joints, Brush brush, JointType[] ids, DepthImageFrame depthFrame, Canvas canvas)
{
    PointCollection points = new PointCollection(ids.Length);
    for (int i = 0; i < ids.Length; ++i)
    {
        points.Add(GetDisplayPosition(joints[ids[i]], depthFrame, canvas));
    }

    Polyline polyline = new Polyline();
    polyline.Points = points;
    polyline.Stroke = brush;
    polyline.StrokeThickness = 5;
    return polyline;
}

Of course, this only works in WPF. You would just need to change the code that uses:

DepthImageFrame depthFrame = e.OpenDepthImageFrame();
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.Spine, JointType.ShoulderCenter, JointType.Head }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderLeft, JointType.ElbowLeft, JointType.WristLeft, JointType.HandLeft }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.ShoulderCenter, JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight, JointType.HandRight }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipLeft, JointType.KneeLeft, JointType.AnkleLeft, JointType.FootLeft }, depthFrame, canvas));
canvas.Children.Add(GetBodySegment(joints, brush, new JointType[] { JointType.HipCenter, JointType.HipRight, JointType.KneeRight, JointType.AnkleRight, JointType.FootRight }, depthFrame, canvas));
depthFrame.Dispose();

As for how the Avateering sample animates the model: you could even create a new Skeleton, copy your joints into Skeleton.Joints, and then simply pass that skeleton in as the "detected" skeleton. Note that you will need to change any other variables the sample's functions require. I am not familiar with the sample, so I cannot give exact method names, but you could replace the global Skeleton with the one you create at startup and update it every frame. So I would recommend this:

// in the game class (AvateeringXNA.cs)
int frame = 0;
Skeleton recorded = new Skeleton();

...

string[] lines = File.ReadAllLines(path);

...

void Update(...)
{
    string[] coords = lines[frame].Split('@')[1]
        .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    int jointIndex = 0;
    for (int i = 0; i < coords.Length; i += 3)
    {
        // Joint is a struct, so modify a copy and write it back
        Joint joint = recorded.Joints[(JointType)jointIndex];
        joint.Position = new SkeletonPoint
        {
            X = float.Parse(coords[i]),
            Y = float.Parse(coords[i + 1]),
            Z = float.Parse(coords[i + 2])
        };
        recorded.Joints[(JointType)jointIndex] = joint;
        jointIndex++;
    }

    ...

    // perform the necessary methods, but with the recorded skeleton instead of the detected one; I think it is:
    this.animator.CopySkeleton(recorded);
    this.animator.FloorClipPlane = skeletonFrame.FloorClipPlane;

    // Reset the filters if the skeleton was not seen before now
    if (this.skeletonDetected == false)
    {
        this.animator.Reset();
    }

    this.skeletonDetected = true;
    this.animator.SkeletonVisible = true;

    ...

    frame++;
}

Edit

When you read the initial floor clip plane as clipPlanes[0], you are grabbing the entire frame info up to the first space. See below for how the line splits and how I would read it:

var newFloorClipPlane = Tuple.Create(Single.Parse(clipPlanes[2]), Single.Parse(clipPlanes[3]), Single.Parse(clipPlanes[4]), Single.Parse(clipPlanes[5]));

Here is how a frame is laid out:

frame# timestamp@joint1PosX,joint1PosY,joint1PosZ,...jointNPosX,jointNPosY,jointNPosZ floorX floorY floorZ floorW

And here is the array that `.Split(' ')` produces:

["frame#", "timestamp@joint1PosX,joint1PosY,joint1PosZ,...jointNPosX,jointNPosY,jointNPosZ", "floorX", "floorY", "floorZ", "floorW"]

So, with the sample input:

00000002 10112@10,10,10... 11 12 13 14

your code would give you:

[2, 10112101010..., 11, 12]

whereas the corrected indices in my code give:

[11, 12, 13, 14]
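The split-and-index logic above can be sketched as a self-contained snippet (the frame line and values are the hypothetical ones from the layout described above):

```csharp
using System;
using System.Globalization;

class FloorClipDemo
{
    static void Main()
    {
        // Hypothetical frame line: frame# timestamp@joints floorX floorY floorZ floorW
        string line = "00000002 10112@10,10,10 11 12 13 14";

        string[] clipPlanes = line.Split(' ');
        // clipPlanes[0] = "00000002", clipPlanes[1] = "10112@10,10,10",
        // clipPlanes[2..5] = the four floor clip plane components.
        var newFloorClipPlane = Tuple.Create(
            Single.Parse(clipPlanes[2], CultureInfo.InvariantCulture),
            Single.Parse(clipPlanes[3], CultureInfo.InvariantCulture),
            Single.Parse(clipPlanes[4], CultureInfo.InvariantCulture),
            Single.Parse(clipPlanes[5], CultureInfo.InvariantCulture));

        Console.WriteLine(newFloorClipPlane); // prints "(11, 12, 13, 14)"
    }
}
```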

Drop this line into a console application real quick and look at its output:

Console.WriteLine(Convert.ToSingle("10,10"));

The output is 1010, which creates the wrong floor clip plane for what you are trying to achieve. You need to use the proper indices for what you are trying to do.

Note: I changed Convert.ToSingle to Single.Parse because it is better practice, and in the stack trace they both perform the same function.

Regarding "c#-4.0 - How to capture motion data in Kinect SDK v1.7 (Avateering XNA)?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/21796703/
