
python - Arcpy script loop - How to loop through tables in a folder and run an arcpy join on each table?


I'm trying to clean up my script so that I don't have to change variables every time I use it.

I have ArcGIS tables for every U.S. state (plus Washington, D.C. and Puerto Rico). I want to iterate over these tables in a folder, join each one to a shapefile in turn, copy the joined features to a new feature class in a different geodatabase, name that feature class after the corresponding state, then remove the join and move on to the next state.

When it comes to Python, I consider myself a novice. I have been trying to teach myself for years, but I have never had a good opportunity to take an in-depth class, and the people I work with don't know it very well either. I know there are more efficient ways to write scripts, such as loops, functions, and conditional statements, but I don't know how to set them up properly.

So I created a script that does what I needed to get done today, but I would like to make it more dynamic so that I don't have to change every table name or new feature class name. I tried to look up how to create a custom function for part of the code while also having a loop so that it iterates over every table in the folder, but I'm not sure whether I need the loop first and then the function, or the loop inside the function. I'm also not sure how to get the right name for the output feature class; I know there is a way to reference dynamic variables with %s, but I'm not sure how to incorporate it here.
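
As a small illustration of that %s idea (all names here are made up, not taken from the original post), a dynamic output name can be built with ordinary string formatting:

import os

state = "Arizona"                                          # would be derived from the table name
out_name = "%s_PCT" % state                                # -> "Arizona_PCT"
out_path = os.path.join(r"C:\temp\output.gdb", out_name)   # hypothetical output geodatabase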

import arcpy

layer = arcpy.GetParameterAsText(0)
inField = "GEOID"
jTable = r'k:\geospatial\data\census\national\census_fact_finder_data\census_tract_year_built\aa_by_state\xls_pcts\tables'
jField = "GEOID"
outFC = r'K:\GEOSPATIAL\DATA\Census\National\Census_Fact_Finder_Data\Shapefiles\CFF_Census_Tracts\PCTs\FCC_CT_YB_PCT.gdb'

arcpy.AddMessage("Processing Arizona...")
#join table to census tract layer
arcpy.AddMessage("Joining Arizona table to Census Tracts...")
tract_join = arcpy.AddJoin_management(layer, inField,jTable + "\\az_pcts", jField, "KEEP_COMMON")

#Copy joined features to new feature class in geodatabase
arcpy.AddMessage("Exporting joined features to FCC_CT_YB_PCT geodatabase...")
arcpy.CopyFeatures_management(tract_join, outFC + "\\Arizona_PCT")

#remove all joins
arcpy.AddMessage("Removing joins to process next table...")
arcpy.RemoveJoin_management(layer)
arcpy.AddMessage("Arizona Complete")

So in the example above, it joins the Arizona table (az_pcts) to the census tract layer (layer = arcpy.GetParameterAsText(0)), copies the joined features to a new feature class in the new geodatabase, names it Arizona_PCT, then removes the join and continues to the next table. I repeated this same block for every state table and changed the ends of all the paths to what I wanted. If anyone has any suggestions, or even snippets, they would be greatly appreciated.
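
For reference, one possible shape of such a loop is sketched below. It is only a sketch: it assumes the tables in the jTable folder are stand-alone tables that arcpy.ListTables can enumerate (for example .dbf files named like az_pcts), and the stateNames lookup is made up for illustration rather than taken from the original post.

import os
import arcpy

layer = arcpy.GetParameterAsText(0)  # census tract layer
inField = "GEOID"
jField = "GEOID"
jTable = r'k:\geospatial\data\census\national\census_fact_finder_data\census_tract_year_built\aa_by_state\xls_pcts\tables'
outFC = r'K:\GEOSPATIAL\DATA\Census\National\Census_Fact_Finder_Data\Shapefiles\CFF_Census_Tracts\PCTs\FCC_CT_YB_PCT.gdb'

# illustrative abbreviation-to-state lookup -- extend to cover all 52 tables
stateNames = {"az": "Arizona", "ak": "Alaska", "dc": "District_of_Columbia"}

arcpy.env.workspace = jTable
for table in arcpy.ListTables("*pcts*"):               # e.g. "az_pcts.dbf"
    prefix = os.path.splitext(table)[0].split("_")[0]  # "az"
    state = stateNames.get(prefix, prefix)              # fall back to the abbreviation
    arcpy.AddMessage("Processing %s..." % state)

    # join the state table to the census tract layer
    joined = arcpy.AddJoin_management(layer, inField, os.path.join(jTable, table), jField, "KEEP_COMMON")

    # copy the joined features to a feature class named after the state
    arcpy.CopyFeatures_management(joined, os.path.join(outFC, "%s_PCT" % state))

    # remove the join before moving on to the next table
    arcpy.RemoveJoin_management(layer)
    arcpy.AddMessage("%s complete" % state)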

Best Answer

First of all, welcome to Stack Overflow. You can do all of this with ModelBuilder. See: What is model builder and quick tutorial

But I don't understand what kind of tables you have in a folder. Normally you would point at a file geodatabase, a personal geodatabase, or RDBMS tables, not a folder. I'll set that question aside, though, and give you sample code below.
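
(As an aside, stand-alone tables such as .dbf files in a plain folder can still be listed by pointing arcpy.env.workspace at that folder; a small sketch under that assumption, with placeholder paths:)

import arcpy

# folder workspace: ListTables returns stand-alone tables (e.g. dBASE .dbf files)
arcpy.env.workspace = r"c:\users\someplace\tables_folder"
print(arcpy.ListTables("*"))  # e.g. ['az_pcts.dbf', 'ak_pcts.dbf', ...]

# file geodatabase workspace: ListTables returns the geodatabase's tables instead
arcpy.env.workspace = r"c:\users\someplace\somegdb.gdb"
print(arcpy.ListTables("*"))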

Anyway, I've prepared some code for you. You can follow it:

import arcpy
# iterate over all tables in a workspace and join each of them to a shapefile

# these are constant variables
shapefilepath = r"c:\users\someplace\someshape.shp"
commoncolumn = "SAMECOLUMN" # this column must exist in the tables as well
# If the common column is named differently in each table, you need to make a list like this
commoncolumns_ordered = ['SAMECOLUMN1', 'SAMECOLUMN2', ] # .. and so on
mainfolder = r"c:\users\someplace"
tablegdb = r"c:\users\someplace\somegdb.gdb" # we'll search for our tables here
out_gdb = r"c:\users\someplace\outputs.gdb" # the copied join results will be written here

arcpy.env.workspace = tablegdb # we will work in here
mytables = arcpy.ListTables("*") # you can filter your tables, e.g. by a starting or ending letter

for table in mytables:
    # you need to make a view from each table
    name = arcpy.Describe(table).name # my table name
    table_view = arcpy.MakeTableView_management(table, name)

    # ok, so we have our view. Otherwise, we would not be able to use this as an input for the Add Join tool
    """
    There are a couple of differences between the Add Join and Join Field tools. Read about them:
    Add Join help : https://pro.arcgis.com/en/pro-app/tool-reference/data-management/add-join.htm
    Join Field help : https://pro.arcgis.com/en/pro-app/tool-reference/data-management/join-field.htm

    * We don't have to make a table view if we use Join Field
    """

    # I assume that both common columns (fields) are the same.
    out_join = arcpy.AddJoin_management(table_view, commoncolumn, shapefilepath, commoncolumn)

    # extracting the joined result is not that useful, but I'll write it down anyway:
    arcpy.Copy_management(out_join, out_data="%s\\%s" % (out_gdb, name))

# some notes:
# If the common column is named differently in each table,
# you need to iterate over them like this instead:
for table, column in zip(mytables, commoncolumns_ordered):
    print (table)
    print (column)
    # do the rest
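
As the note in the code above says, the Join Field tool is the alternative that does not need a table view; it writes the joined columns permanently into the target dataset. A rough sketch of that variant, using the same placeholder paths (the table path is hypothetical):

import arcpy

shapefilepath = r"c:\users\someplace\someshape.shp"      # placeholder, as above
sometable = r"c:\users\someplace\somegdb.gdb\sometable"  # hypothetical stand-alone table
commoncolumn = "SAMECOLUMN"

# Join Field appends the table's fields directly onto the shapefile (an in-place edit),
# so no table view and no separate copy step are needed afterwards
arcpy.JoinField_management(shapefilepath, commoncolumn, sometable, commoncolumn)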

Regarding python - Arcpy script loop - How to loop through tables in a folder and run an arcpy join on each table?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55504614/
