
SQL query error in PySpark when using temporary tables

Reposted · Author: 行者123 · Updated: 2023-12-02 04:27:38

I have a SQL query that I need to run from PySpark (Databricks). Because the query is complex, PySpark cannot read it as-is. Could someone look at my query and help me write it as a single "SELECT" statement instead of using a "WITH" clause?

Stage 1:
promotions="""
(WITH VCTE_Promotions as (SELECT v.Shortname, v.Employee_ID_ALT, v.Job_Level,
v.Management_Level, CAST(sysdatetime() AS date) AS PIT_Date, v.Employee_Status_Alt as Employee_Status,
v.Work_Location_Region, v.Work_Location_Country_Desc, v.HML,
[DM_GlobalStaff].[dbo].[V_Worker_PIT].Is_Manager
FROM [DM_GlobalStaff].[dbo].[V_Worker_CUR] as v
LEFT OUTER JOIN
[DM_GlobalStaff].[dbo].[V_Worker_PIT] ON v.Management_Level = [DM_GlobalStaff].[dbo].[V_Worker_PIT].Management_Level),

VCTE_Promotion_v2_Eval as (
SELECT Employee_ID_ALT,
( SELECT max([pit_date]) AS prior_data
FROM [DM_GlobalStaff].[dbo].[V_Worker_PIT] AS t
WHERE (employee_id_alt = a.Employee_ID_ALT) AND (PIT_Date < a.PIT_Date) AND (Is_Manager <> a.Is_Manager) OR
(employee_id_alt = a.Employee_ID_ALT) AND (PIT_Date < a.PIT_Date) AND (Job_Level <> a.Job_Level)) AS prev_job_change_date, Is_Manager
FROM VCTE_Promotions AS a)

SELECT VCTE_Promotion_v2_Eval.Employee_ID_ALT, COALESCE (v_cur.Employee_Status_ALT, N'') AS Curr_Emp_Status,
COALESCE (v_cur.Employee_Type, N'') AS Curr_Employee_Type, v_cur.Hire_Date_Alt AS Curr_Hire_Date,
v_cur.Termination_Date_ALT AS Curr_Termination_Date, COALESCE (v_cur.Termination_Action_ALT, N'')
AS Curr_Termination_Action, cast (v_cur.Job_Level as int) AS Curr_Job_Level,
COALESCE (v_cur.Management_Level, N'') AS Curr_Management_Level,
COALESCE (VCTE_Promotion_v2_Eval.Is_Manager, N'') AS Curr_Ismanager,
CASE WHEN v_m.Job_Level < v_cur.Job_Level OR
(VCTE_Promotion_v2_Eval.Is_Manager = 1 AND v_m.Is_Manager = 0 AND v_m.Job_Level <= v_cur.Job_Level)
THEN 'Promotion' WHEN v_m.Job_Level <> v_cur.Job_Level OR
VCTE_Promotion_v2_Eval.Is_Manager <> v_m.Is_Manager THEN 'Other' ELSE '' END AS Promotion, v_cur.Tenure,
v_cur.Review_Rating_Current
FROM VCTE_Promotion_v2_Eval INNER JOIN
[DM_GlobalStaff].[dbo].[V_Worker_CUR] as v_cur ON VCTE_Promotion_v2_Eval.Employee_ID_ALT = v_cur.Employee_ID_ALT LEFT OUTER JOIN
[DM_GlobalStaff].[dbo].[V_Worker_PIT] as v_m ON VCTE_Promotion_v2_Eval.prev_job_change_date = v_m.PIT_Date AND
VCTE_Promotion_v2_Eval.Employee_ID_ALT = v_m.employee_id_alt
) as pr """

Stage 2:
promotions = spark.read.jdbc(url=jdbcUrl, table=promotions, properties=connectionProperties)

Stage 3:
promotions.count()
promotions.show()

The Stage 2 read fails with the following error:
com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'WITH'.

---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-2532359884208251> in <module>()
----> 1 promotions = spark.read.jdbc(url=jdbcUrl, table=promotions, properties=connectionProperties)

/databricks/spark/python/pyspark/sql/readwriter.py in jdbc(self, url, table, column, lowerBound, upperBound, numPartitions, predicates, properties)
533 jpredicates = utils.toJArray(gateway, gateway.jvm.java.lang.String, predicates)
534 return self._df(self._jreader.jdbc(url, table, jpredicates, jprop))
--> 535 return self._df(self._jreader.jdbc(url, table, jprop))
536
537

There is nothing wrong with my query itself; it runs perfectly from my SQL prompt. But as soon as I use the same query in PySpark (Databricks), I get a syntax error. Could you help me with the PySpark syntax?

Your prompt help would be much appreciated.

Best Answer

I have no way to test this, but please try it and compare the results to see whether everything matches.

Also, I used CROSS APPLY instead of the correlated subquery: since there is no simple join and a correlated subquery is not efficient,
CROSS APPLY should do the job.
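For context on why the original fails: when you pass a string as the `table` argument of `spark.read.jdbc`, Spark splices it into its own outer query as a derived table (for example, the schema probe is roughly `SELECT * FROM <table> WHERE 1=0`). SQL Server does not allow a WITH block in that position, hence "Incorrect syntax near the keyword 'WITH'". A simplified illustration (an assumption-laden sketch, not Spark's actual source):

```python
def wrap_as_jdbc_select(table_expr: str) -> str:
    """Roughly what Spark sends to the server when probing the schema of
    the expression passed as `table`/`dbtable` (simplified sketch)."""
    return f"SELECT * FROM {table_expr} WHERE 1=0"

# A parenthesized SELECT with an alias is a valid derived table:
ok = wrap_as_jdbc_select("(SELECT 1 AS x) AS t")
print(ok)  # SELECT * FROM (SELECT 1 AS x) AS t WHERE 1=0

# A CTE is not valid inside FROM (...), which is the reported error:
bad = wrap_as_jdbc_select("(WITH c AS (SELECT 1 AS x) SELECT * FROM c) AS t")
print(bad)
```

Newer Spark releases also document a `prepareQuery` JDBC option for SQL Server that can carry a CTE prefix, but rewriting to a single parenthesized SELECT, as below, works on any version.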

(
SELECT
    VCTE_Promotion_v2_Eval.Employee_ID_ALT
    ,COALESCE(v_cur.Employee_Type, N'') AS Curr_Employee_Type
    ,v_cur.Review_Rating_Current
FROM
    (
    SELECT
        a.Employee_ID_ALT,
        pr.prev_job_change_date,
        a.IsManager
    FROM
        ( SELECT
            v.Shortname
            ,v.Employee_ID_ALT
            ,v.Job_Level
            ,v.Management_Level
            ,CAST(SYSDATETIME() AS DATE) AS PIT_Date
            ,v.Employee_Status_Alt AS Employee_Status
            ,v.Work_Location_Region
            ,v.Work_Location_Country_Desc
            ,v.HML
            ,dbo.T_Mngmt_Level_IsManager_Mapping.IsManager
        FROM [DM_GlobalStaff].[dbo].[V_Worker_CUR] AS v
        LEFT OUTER JOIN dbo.T_Mngmt_Level_IsManager_Mapping
            ON v.Management_Level = dbo.T_Mngmt_Level_IsManager_Mapping.Management_Level
        ) AS a
    CROSS APPLY (
        SELECT
            MAX(PIT_Date) AS prev_job_change_date
        FROM dbo.V_Worker_PIT_with_IsManager AS t
        WHERE (employee_id_alt = a.Employee_ID_ALT)
            AND (PIT_Date < a.PIT_Date)
            AND (IsManager <> a.IsManager)
            OR (employee_id_alt = a.Employee_ID_ALT)
            AND (PIT_Date < a.PIT_Date)
            AND (Job_Level <> a.Job_Level)
    ) AS pr
    ) AS VCTE_Promotion_v2_Eval
INNER JOIN [DM_GlobalStaff].[dbo].[V_Worker_CUR] AS v_cur
    ON VCTE_Promotion_v2_Eval.Employee_ID_ALT = v_cur.Employee_ID_ALT
LEFT OUTER JOIN dbo.V_Worker_PIT_with_IsManager AS v_m
    ON VCTE_Promotion_v2_Eval.prev_job_change_date = v_m.PIT_Date
    AND VCTE_Promotion_v2_Eval.Employee_ID_ALT = v_m.employee_id_alt
) AS promotions

Regarding this SQL query error in PySpark when using temporary tables, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52389945/
