
apache-spark - Cannot resolve 'columnname' given input columns: Spark-SQL


I run this code every day and save the output to a chosen location:

  // Assumes a SparkSession named `spark` is in scope.
  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.functions.{lit, max, min}
  import spark.implicits._

  def aggConnections()(input: DataFrame): DataFrame = {
    input
      .groupBy(
        $"spid"(0).as("domain_userid"),
        $"cxid"(0).as("key_id"))
      .agg(
        min($"Time").as("first_seen"),
        max($"Time").as("last_seen"))
      .select($"domain_userid", $"key_id",
        lit("cxense_id").as("key_source"),
        lit(1).as("times_seen"),
        $"first_seen", $"last_seen")
      .filter($"domain_userid".isNotNull && $"key_id".isNotNull)
  }

val requests = spark.read.parquet("day1").transform(aggConnections())

The spid column is an array; its schema looks like this:

  spid: array
      element: string

That is why I have to access it as $"spid"(0).as("domain_userid").
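For reference, a quick sketch (not from the original post) of equivalent ways to read the first element of an array column; element_at and getItem are standard Spark SQL column operations, and the column name mirrors the one above:

  import org.apache.spark.sql.functions.{col, element_at}

  val byApply   = $"spid"(0)                  // apply syntax used in the code above
  val byGetItem = col("spid").getItem(0)      // getItem(index) on a Column, 0-based
  val byElemAt  = element_at(col("spid"), 1)  // element_at is 1-based (Spark 2.4+)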
Unfortunately, on some days the job fails with the following error when it runs on AWS EMR with Spark 3.0.1:

diagnostics: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve '`spid`' given input columns: [AdUnitId, AudienceSegmentIds, BandWidth, BandwidthGroupId, BandwidthId, Browser, BrowserId, City, CityId, CmsMetadata, Country, CountryId, DeviceCategory, Domain, GfpContentId, IsCompanion, IsFilledRequest, IsInterstitial, IsVideoFallbackRequest, KeyPart, Metro, MetroId, MobileAppId, MobileCapability, MobileCarrier, MobileDevice, OS, OSId, OSVersion, PodPosition, PostalCode, PostalCodeId, PublisherProvidedID, RefererURL, Region, RegionId, RequestLanguage, RequestedAdUnitSizes, Time, TimeUsec2, UserId, VideoPosition, addefend, ads_enabled, app_name, app_version, bereidingstijd, brand, careerlevel, cat, category, categorycode, categoryname, channel_id, channel_title, cid, cmssource, companyname, compid, content_url, contentid, contentsubstype, crt_cpm, crt_size, cxid, cxsg, dc_yt, deviceid, dos, dossier, educationlevel, floor, fsr, gang, gdpr_applies, gdpr_consentstring, gelegenheid, hb_pb, hb_size, hour, ifa, industry, ingredient, iom, itemid, ix_apnx_om, ix_cdb_om, ix_imdi_cpm, jobtitle, k21, kage, kar, kauth, kembed, keuken, kgender, klg, ko, kpid, kvlg, kvz, kw, lat, long, mc_asset_type, mediation, model, moeilijkheid, origin, pag, pagetype, path, pay, pos, positie, production_id, productname, retailercode, retailername, rpfl_12108, screen, sector, shopid, show_id, show_title, soort, src, stad, starttype, station, subforum, tag, theme, top, video_duration, video_label, yt_vrallowed, ytdevice];;
'Aggregate ['spid[0], cxid#136[0]], ['spid[0] AS domain_userid#276, cxid#136[0] AS key_id#277, min(Time#0) AS first_seen#417, max(Time#0) AS last_seen#419]


I thought the problem was that the spid column did not exist, but when I checked the schema it was there in the data. So I decided to test this more thoroughly on Databricks. Strangely, when I run the same code on Databricks with the same Spark 3.0.1, it works on every day's data, including the days that fail on EMR. I cannot explain what is going on.
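One way to check this kind of suspicion is to inspect what the reader actually resolved for a given path before aggregating; a minimal sketch, assuming the same spark session and input path as above:

  val day = spark.read.parquet("day1")
  day.printSchema()                          // does spid show up at all?
  val hasSpid = day.columns.contains("spid") // false when the resolved schema lacks the column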

Best Answer

I found out that my daily partitions are also stored as hourly partitions, and the schemas of those hourly partitions can differ slightly, because some hours are missing the spid column. That is why my production job sometimes fails with the error reported above.
The bug was hard to spot because my test instance has option("mergeSchema", "true") set as a default Spark option, so it merged the schemas and never hit the error.
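For completeness, a sketch of how schema merging can be enabled explicitly, either per read or globally (path and session names as above); merging has a cost on wide datasets, which is why Spark leaves it off by default:

  // Per read:
  val merged = spark.read.option("mergeSchema", "true").parquet("day1")

  // Or globally, so every Parquet read merges part-file schemas:
  spark.conf.set("spark.sql.parquet.mergeSchema", "true")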

For apache-spark - Cannot resolve 'columnname' given input columns: Spark-SQL, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65505941/
