
GeoTrellis: get the points that fall in a polygon grid RDD

Reposted · Author: 行者123 · Updated: 2023-12-04 19:29:02

I need to compute the average of the point values that fall within each cell of a polygon grid.

Essentially a one-to-many join on the condition Polygon.contains(point).

// In:
val pointValueRdd: RDD[Feature[Point, Double]]
val squareGridRdd: RDD[Polygon]

// Needed out, either as a running (sum, count) accumulator:
val gridSumCountRdd: RDD[Feature[Polygon, (Double, Int)]]
// or as the mean directly:
val gridAverageRdd: RDD[Feature[Polygon, Double]]

Could some kind of quadtree index be used here?

I've read about ClipToGrid, but I don't know whether it's the right tool:

http://geotrellis.readthedocs.io/en/latest/guide/vectors.html#cliptogrid

The figure below shows the grid (in blue) and the points:

(figure: a blue polygon grid with scattered points)

Any suggestions are welcome.

Accepted answer

If your polygon grid can be represented by a LayoutDefinition, there is a straightforward way to do this in GeoTrellis. A LayoutDefinition defines the tile grid that GeoTrellis layers use to represent large collections of raster tiles. It can also be used to convert between keys in grid space (SpatialKeys) and bounding boxes in map space (Extents).
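To make that grid-space/map-space conversion concrete, here is a minimal, self-contained sketch of the arithmetic a layout performs for a uniform grid. The names (SimpleLayout, pointToKey, keyToExtent) are hypothetical stand-ins for illustration, not the GeoTrellis API; GeoTrellis exposes this through LayoutDefinition's mapTransform.

```scala
// A simplified stand-in for the conversion LayoutDefinition.mapTransform performs.
// Assumes a uniform grid anchored at the top-left corner (xmin, ymax);
// rows count downward from the top, as in GeoTrellis tile layouts.
case class SimpleLayout(xmin: Double, ymax: Double, cellWidth: Double, cellHeight: Double) {
  // Map a point in map space to its (col, row) key in grid space.
  def pointToKey(x: Double, y: Double): (Int, Int) =
    (math.floor((x - xmin) / cellWidth).toInt,
     math.floor((ymax - y) / cellHeight).toInt)

  // Map a (col, row) key back to its bounding box (xmin, ymin, xmax, ymax) in map space.
  def keyToExtent(col: Int, row: Int): (Double, Double, Double, Double) =
    (xmin + col * cellWidth,       ymax - (row + 1) * cellHeight,
     xmin + (col + 1) * cellWidth, ymax - row * cellHeight)
}

val layout = SimpleLayout(xmin = 0.0, ymax = 100.0, cellWidth = 10.0, cellHeight = 10.0)
println(layout.pointToKey(25.0, 95.0)) // third column, top row
println(layout.keyToExtent(2, 0))
```

The key point is that both directions are O(1) arithmetic, which is why keying points by grid cell avoids any pairwise point-in-polygon scan.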

I won't assume you can represent your grid via a LayoutDefinition; instead I'll show an example that solves the more general case. If you can represent the polygon grid with a LayoutDefinition, the approach becomes simpler. Here is a code snippet for the more general approach. It compiles but is untested, so please let me know if you find problems with it. It is included in our doc-examples project by this PR: https://github.com/locationtech/geotrellis/pull/2489

import geotrellis.raster._
import geotrellis.spark._
import geotrellis.spark.tiling._
import geotrellis.vector._

import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

import java.util.UUID

// see https://stackoverflow.com/questions/47359243/geotrellis-get-the-points-that-fall-in-a-polygon-grid-rdd

val pointValueRdd : RDD[Feature[Point,Double]] = ???
val squareGridRdd : RDD[Polygon] = ???

// Since we'll be grouping by key and then joining, it's worth defining a
// partitioner up front.
val partitioner = new HashPartitioner(100)

// First, we'll determine the bounds of the polygon grid
// and the average cell height and width, to create a GridExtent.
val (extent, totalHeight, totalWidth, totalCount) =
  squareGridRdd
    .map { poly =>
      val e = poly.envelope
      (e, e.height, e.width, 1)
    }
    .reduce { case ((extent1, height1, width1, count1), (extent2, height2, width2, count2)) =>
      (extent1.combine(extent2), height1 + height2, width1 + width2, count1 + count2)
    }

// Note: GridExtent takes the cell width before the cell height.
val gridExtent = GridExtent(extent, totalWidth / totalCount, totalHeight / totalCount)

// We then use this to construct a LayoutDefinition that represents "tiles"
// that are 1x1.
val layoutDefinition = LayoutDefinition(gridExtent, tileCols = 1, tileRows = 1)

// Now that we have a layout, we can cut the polygons up per SpatialKey, as well as
// assign points to a SpatialKey, using ClipToGrid

// In order to keep track of which polygons go where, we'll assign a UUID to each
// polygon, so that they can be reconstructed. If we were dealing with PolygonFeatures,
// we could store the feature data as well. If those features already had IDs, we could
// also just use those IDs instead of UUIDs.
// We'll also store the original polygon, as they are not too big and it makes
// the reconstruction process (which might be prone to floating point errors) a little
// easier. For more complex polygons this might not be the most performant strategy.
// We then group by key to produce a set of polygons that intersect each key.
val cutPolygons: RDD[(SpatialKey, Iterable[Feature[Geometry, (Polygon, UUID)]])] =
  squareGridRdd
    .map { poly => Feature(poly, (poly, UUID.randomUUID)) }
    .clipToGrid(layoutDefinition)
    .groupByKey(partitioner)

// While we could also use clipToGrid for the points, we can
// simply use mapTransform on the layout to determine which SpatialKey each
// point should be assigned to.
// We also group this by key to produce the set of points that intersect the key.
val pointsToKeys: RDD[(SpatialKey, Iterable[PointFeature[Double]])] =
  pointValueRdd
    .map { pointFeature =>
      (layoutDefinition.mapTransform.pointToKey(pointFeature.geom), pointFeature)
    }
    .groupByKey(partitioner)

// Now we can join those two RDDs and perform our point in polygon tests.
// Use a left outer join so that polygons with no points can be recorded.
// Once we have the point information, we key the RDD by the UUID and
// reduce the results.
val result: RDD[Feature[Polygon, Double]] =
  cutPolygons
    .leftOuterJoin(pointsToKeys)
    .flatMap { case (_, (polyFeatures, pointsOption)) =>
      pointsOption match {
        case Some(points) =>
          for(
            Feature(geom, (poly, uuid)) <- polyFeatures;
            Feature(point, value) <- points if geom.intersects(point)
          ) yield {
            (uuid, Feature(poly, (value, 1)))
          }
        case None =>
          for(Feature(geom, (poly, uuid)) <- polyFeatures) yield {
            (uuid, Feature(poly, (0.0, 0)))
          }
      }
    }
    .reduceByKey { case (Feature(poly1, (accum1, count1)), Feature(poly2, (accum2, count2))) =>
      Feature(poly1, (accum1 + accum2, count1 + count2))
    }
    .map { case (_, feature) =>
      // We no longer need the UUID; also compute the mean.
      // Note: polygons with no points yield Double.NaN here (0.0 / 0).
      feature.mapData { case (acc, c) => acc / c }
    }

This is a similar question found on Stack Overflow: https://stackoverflow.com/questions/47359243/
