
Teradata DELETE ALL vs. DROP+CREATE

Reposted. Author: 行者123. Updated: 2023-12-01 07:56:56

I was recently assigned to a project that uses Teradata. I was told to strictly use DROP+CREATE instead of DELETE ALL, because the latter "leaves some space allocated somehow". This is counterintuitive to me, and I think it is probably wrong. I searched the web for a comparison of the two approaches but found nothing, which only strengthened my belief that DELETE ALL does not suffer from the alleged problem. However, if that is the case, I need to be able to prove it, either in practice or by pointing to documentation.

So, my question is: is there any difference in space allocation between the two approaches? If not, is there any official document (user guide, technical specification, etc.) that proves it?

Thanks!

Best Answer

There is a discussion of this very topic here: http://teradataforum.com/teradata/20120403_105705.htm (although it does not really address the "leaves some space allocated somehow" part). They actually recommend DELETE ALL, but for other (performance) reasons:

I will quote it in case the link goes dead:

"Delete all" will be quicker, although being practical there often isn't a lot of difference in the performance of them.

However, especially for a process that is run regularly (say a daily batch process) then I recommend the "delete all" approach. This will do less work as it only removes the data and leaves the definition in place. Remember that if you remove the definition then this requires accessing multiple dictionary tables, and of course you then have to access those same tables (typically) when you re-create the object.

Apart from the performance aspect, the downside of the drop/create approach is that every time you create an object Teradata inserts "default rows" into the AccessRights table, even if subsequent access to the object is controlled via Role security and/or database level security. As you may well know the AccessRights table can easily get large and very skewed. In my experience many sites have a process which cleans this table on a regular basis, removing redundant rows. If your (typically batch) processes regularly drop/create objects, then you're simply adding rows into the table which have previously been removed by a clean process, and which will be removed in the future by the same process. This all sounds like a complete waste of time to me.
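To make the trade-off described above concrete, here is a minimal sketch of the two patterns in Teradata SQL. The table name `daily_stage` and its column definitions are hypothetical, used only for illustration:

```sql
-- Approach 1: DELETE ALL.
-- Keeps the table definition in place and only removes the rows;
-- no dictionary-table maintenance is involved.
DELETE FROM daily_stage ALL;

-- Approach 2: DROP + CREATE.
-- Removes and then re-registers the table definition, which touches
-- multiple dictionary tables and, as the quoted answer notes, inserts
-- fresh default rows into the DBC.AccessRights table on every CREATE.
DROP TABLE daily_stage;

CREATE TABLE daily_stage
(
  event_dt  DATE,
  event_cnt INTEGER
)
PRIMARY INDEX (event_dt);
```

For a regularly scheduled batch process, the first form does strictly less dictionary work per run, which is the basis of the quoted recommendation.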

Regarding Teradata DELETE ALL vs. DROP+CREATE, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/27818798/
