
amazon-web-services - Merging with the Redshift COPY command


I have a process that iterates over some input and spits the data out to AWS Firehose, which I have configured to upload into a Redshift table I created. One problem is that rows can sometimes be duplicated, because the process has to re-evaluate the data. Something like this:

Event_date, event_id, event_cost
2015-06-25, 123, 3
2015-06-25, 123, 4

http://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html

Looking at that, I want to replace the old rows with the new values, so something like:

insert into event_table_staging  
select event_date,event_id, event_cost from <s3 location>;

delete from event_table
using event_table_staging
where event_table.event_id = event_table_staging.event_id;

insert into event_table
select * from event_table_staging;

delete from event_table_staging;

Is it possible to do something like this instead:

Redshift columns: event_date,event_id,cost
copy event_table from <s3>
(update event_table
select c_source.event_date,c_source.event_id,c_source.cost from <s3 source> as c_source join event_table on c_source.event_id = event_table.event_id)
CSV


copy event_table from <s3>
(insert into event_table
select c_source.event_date,c_source.event_id,c_source.cost from <s3 source> as c_source left outer join event_table on c_source.event_id = event_table.event_id where event_table.event_id is NULL)
CSV

Best Answer

You can't do a merge directly from a COPY.

However, your initial approach can be wrapped in a transaction and use a temporary table to stage the loaded data, which gives the best performance.

BEGIN
;
CREATE TEMP TABLE event_table_staging (
event_date TIMESTAMP NULL
,event_id BIGINT NULL
,event_cost INTEGER NULL )
DISTSTYLE KEY
DISTKEY (event_id)
SORTKEY (event_id)
;
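-- Load the raw S3 data into the staging table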
COPY event_table_staging
FROM <s3 location>
COMPUPDATE ON
;
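-- Update target rows whose values have changed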
UPDATE event_table
SET event_date = new.event_date
,event_cost = new.event_cost
FROM event_table_staging AS new
WHERE event_table.event_id = new.event_id
AND ( COALESCE(event_table.event_date,'1970-01-01') <> COALESCE(new.event_date,'1970-01-01')
OR COALESCE(event_table.event_cost,0) <> COALESCE(new.event_cost,0) )
;
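-- Insert rows that do not yet exist in the target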
INSERT INTO event_table
SELECT new.event_date
,new.event_id
,new.event_cost
FROM event_table_staging AS new
LEFT JOIN event_table AS trg
ON trg.event_id = new.event_id
WHERE trg.event_id IS NULL
;
COMMIT
;
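
One caveat for the Firehose scenario in the question: if the same event_id appears more than once within a single load, the INSERT above can still put duplicates into the target. A minimal sketch of de-duplicating the staging table first, assuming the latest event_date should win (the event_table_staging_dedup name is purely illustrative):

-- Keep exactly one row per event_id; change the ORDER BY if a different "winner" rule applies
CREATE TEMP TABLE event_table_staging_dedup AS
SELECT event_date
,event_id
,event_cost
FROM (
SELECT *
,ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY event_date DESC) AS rn
FROM event_table_staging
) AS ranked
WHERE rn = 1
;

The UPDATE and INSERT steps would then read from event_table_staging_dedup instead of event_table_staging.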

This approach actually performs very well as long as you use a transaction and the total volume of updates is relatively low (single-digit percentages). The only caveat is that your target table needs to be VACUUMed periodically; once a month was enough for us.
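
For reference, that periodic maintenance is just the standard Redshift commands run against the target table (ANALYZE is added here as a common companion step, not something the answer requires):

VACUUM event_table;
ANALYZE event_table;

VACUUM reclaims the space left behind by the updates (Redshift marks the old row versions as deleted) and restores the sort order; ANALYZE refreshes the planner statistics after the merge.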

We do this hourly on several tables in the hundreds-of-millions-of-rows range, i.e. merging into tables of hundreds of millions of rows, and user queries against the merged tables still perform well.

For more on amazon-web-services - Merging with the Redshift COPY command, see the similar question on Stack Overflow: https://stackoverflow.com/questions/35663783/
