I am writing a script that should query a large amount of data based on certain conditions and move the matching records into several archive tables. I have more than 50 million records to scan, selecting the matching ones for INSERT operations against six archive tables. The script below works fine for around 500,000 records, but when run against millions of records it throws the following exception:
Error report:
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
Rather than just increasing PGA_AGGREGATE_LIMIT, I would like to improve my script so that it avoids loading all the records into memory and instead inserts the values into the tables in chunks. At the moment I don't know how to do that. Could someone suggest how to make the script run in batches so that it avoids running out of memory?
Below is part of my script (it shows the inserts into four of the tables).
CREATE OR REPLACE TYPE R1_ID_TYPE IS TABLE OF NUMBER;
/
CREATE OR REPLACE TYPE R5_ID_TYPE IS TABLE OF NUMBER;
/
DECLARE
R01_IDS R1_ID_TYPE;
R05_IDS R5_ID_TYPE;
BEGIN
--add the R05_IDs which are older than five years from R5_TABLE and R6_TABLE to R5_ID_TYPE nested table
SELECT R5.R05_ID AS R05_ID
BULK COLLECT INTO R05_IDS
FROM R6_TABLE R6 , R5_TABLE R5
WHERE R5.R05_ID = R6.R06_R05_ID_FK
AND R5.R05_DATE_TIME_CAPTURED <= TRUNC(SYSDATE) - 1825
AND R5.R05_STATUS = 'D'
AND R6.R06_STATUS = 'D';
-- Inserts all the deregistered records which are older than five years from R5_TABLE and R6_TABLE tables to the relevant archive tables
INSERT ALL
INTO R5_TABLE_archived(
R05_ID,
R05_R01_ID_FK,
R05_NUMBER,
R05_NUMBER_TYPE,
R05_STATUS,
R05_GSM_SUBSCRIBER_TYPE,
R05_DATE_TIME_CAPTURED)
values (
R5_R05_ID,
R5_R05_R01_ID_FK,
R5_NUMBER,
R5_NUMBER_TYPE,
R5_R05_STATUS,
R5_R05_GSM_SUBSCRIBER_TYPE,
R5_R05_DATE_TIME_CAPTURED)
INTO R6_TABLE_archived(
R06_ID,
R06_R05_ID_FK,
R06_R08_ID_FK,
R06_STATUS,
R06_REFERENCE_NUMBER,
R06_DATE_TIME_CAPTURED,
R06_DATE_EXPIRED)
values (
R6_R06_ID,
R6_R06_R05_ID_FK,
R6_R06_R08_ID_FK,
R6_R06_STATUS,
R6_R06_REFERENCE_NUMBER,
R6_R06_DATE_TIME_CAPTURED,
R6_R06_DATE_EXPIRED)
SELECT R5_R05_ID,
R5_R05_R01_ID_FK,
R5_NUMBER,
R5_NUMBER_TYPE,
R5_R05_STATUS,
R5_R05_GSM_SUBSCRIBER_TYPE,
R5_R05_DATE_TIME_CAPTURED,
R6_R06_ID,
R6_R06_R05_ID_FK,
R6_R06_R08_ID_FK,
R6_R06_STATUS,
R6_R06_REFERENCE_NUMBER,
R6_R06_DATE_TIME_CAPTURED,
R6_R06_DATE_EXPIRED
FROM
(
SELECT R5.R05_ID R5_R05_ID,
R5.R05_R01_ID_FK R5_R05_R01_ID_FK,
R5.R05_NUMBER R5_NUMBER,
R5.R05_NUMBER_TYPE R5_NUMBER_TYPE,
R5.R05_STATUS R5_R05_STATUS,
R5.R05_GSM_SUBSCRIBER_TYPE R5_R05_GSM_SUBSCRIBER_TYPE,
R5.R05_DATE_TIME_CAPTURED R5_R05_DATE_TIME_CAPTURED,
R6.R06_ID R6_R06_ID,
R6.R06_R05_ID_FK R6_R06_R05_ID_FK,
R6.R06_R08_ID_FK R6_R06_R08_ID_FK,
R6.R06_STATUS R6_R06_STATUS,
R6.R06_REFERENCE_NUMBER R6_R06_REFERENCE_NUMBER,
R6.R06_DATE_TIME_CAPTURED R6_R06_DATE_TIME_CAPTURED,
R6.R06_DATE_EXPIRED R6_R06_DATE_EXPIRED
FROM R6_TABLE R6 , R5_TABLE R5
WHERE R5.R05_ID = R6.R06_R05_ID_FK
AND R5.R05_DATE_TIME_CAPTURED <= TRUNC(SYSDATE) - 1825
AND R5.R05_STATUS = 'D'
AND R6.R06_STATUS = 'D');
--selects all the R01 IDs which matches with the above criteria and copy values to respective archive tables
SELECT UNIQUE R1.R01_ID AS R01_ID
BULK COLLECT INTO R01_IDS
FROM R1_TABLE R1, R5_TABLE R5
WHERE R5.R05_ID IN (Select column_value from table(R05_IDS))
AND R1.R01_ID NOT IN (
SELECT R01.R01_ID
FROM R1_TABLE R01,
R5_TABLE R05
WHERE R05.R05_STATUS != 'D'
AND R01.R01_ID = R05.R05_R01_ID_FK)
AND R1.R01_ID = R5.R05_R01_ID_FK;
--insert R1_TABLE tables values which matches with the above criteria into the R1_TABLE_ARCHIVED table
INSERT ALL
INTO R1_TABLE_ARCHIVED(R01_ID,R01_ID_TYPE,R01_IDENTITY_NUMBER,R01_PASSPORT_COUNTRY,R01_DATE_TIME_CAPTURED)
VALUES (RA1_R01_ID,RA1_R01_ID_TYPE,RA1_R01_IDENTITY_NUMBER,RA1_R01_PASSPORT_COUNTRY,RA1_R01_DATE_TIME_CAPTURED)
SELECT RA1_R01_ID,RA1_R01_ID_TYPE,RA1_R01_IDENTITY_NUMBER,RA1_R01_PASSPORT_COUNTRY,RA1_R01_DATE_TIME_CAPTURED
FROM (
SELECT
r1.R01_ID RA1_R01_ID,
r1.R01_ID_TYPE RA1_R01_ID_TYPE,
r1.R01_IDENTITY_NUMBER RA1_R01_IDENTITY_NUMBER,
r1.R01_PASSPORT_COUNTRY RA1_R01_PASSPORT_COUNTRY,
r1.R01_DATE_TIME_CAPTURED RA1_R01_DATE_TIME_CAPTURED
FROM
R1_TABLE r1
WHERE
r1.R01_ID IN (Select column_value from table(R01_IDS))
);
--insert R2_TABLE tables values which matches with the above criteria into the R2_TABLE_ARCHIVED table
INSERT ALL
INTO R2_TABLE_ARCHIVED(R02_ID,R02_R01_ID_FK,R02_fname,R02_SURNAME,R02_CONTACT_NUMBER,R02_DATE_TIME_CAPTURED)
VALUES(RA2_R02_ID,RA2_R02_R01_ID_FK,RA2_R02_fname,RA2_R02_SURNAME,RA2_R02_CONTACT_NUMBER,RA2_R02_DATE_TIME_CAPTURED)
SELECT RA2_R02_ID,RA2_R02_R01_ID_FK,RA2_R02_fname,RA2_R02_SURNAME,RA2_R02_CONTACT_NUMBER,RA2_R02_DATE_TIME_CAPTURED
FROM (
SELECT
r2.R02_ID RA2_R02_ID,
r2.R02_R01_ID_FK RA2_R02_R01_ID_FK,
r2.R02_fname RA2_R02_fname,
r2.R02_SURNAME RA2_R02_SURNAME,
r2.R02_CONTACT_NUMBER RA2_R02_CONTACT_NUMBER,
r2.R02_DATE_TIME_CAPTURED RA2_R02_DATE_TIME_CAPTURED
FROM
R2_TABLE r2
WHERE
r2.R02_R01_ID_FK IN (Select column_value from table(R01_IDS)));
--All the delete queries to remove the above copied values from the parent tables respectively
DELETE FROM R1_TABLE WHERE R01_ID IN (Select column_value from table(R01_IDS));
DELETE FROM R2_TABLE WHERE R02_R01_ID_FK IN (Select column_value from table(R01_IDS));
DELETE FROM R5_TABLE WHERE R05_R01_ID_FK IN (Select column_value from table(R05_IDS));
DELETE FROM R6_TABLE WHERE R06_R05_ID_FK IN (Select column_value from table(R05_IDS));
COMMIT;
END;
/
COMMIT;
Best Answer
Collections (and other PL/SQL constructs) are stored in session memory (unlike query data, which is stored in global memory). Because session memory is allocated per user, there has to be a limit on it, since RAM is still a relatively expensive resource.
So you got this error...
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
...because your session has used up all of the memory allocated to the PGA (the memory pool available to sessions).
The problem is that you are trying to fill a collection with millions of rows. Even though each row is narrow, that is still too many. Fortunately, PL/SQL has a solution for this: the LIMIT clause.
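A minimal sketch of the pattern, stripped of everything table-specific (the type, cursor, table, and column names below are placeholders, not objects from the original script):

```sql
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;              -- hypothetical collection type
  l_ids t_ids;
  CURSOR c IS
    SELECT some_id FROM some_table;           -- placeholder source query
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 10000;  -- fetch at most 10,000 rows
    EXIT WHEN l_ids.COUNT = 0;
    -- process the current chunk held in l_ids here
  END LOOP;
  CLOSE c;
END;
/
```

Each pass through the loop holds at most LIMIT rows in session memory, so the PGA footprint stays bounded no matter how many rows the query returns.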
With LIMIT we can fill the collection with one chunk of the result set, process it, and then fetch the next chunk. There is not much to change:
DECLARE
R01_IDS R1_ID_TYPE;
R05_IDS R5_ID_TYPE;
cursor r5_cur is
SELECT R5.R05_ID
FROM R6_TABLE R6 , R5_TABLE R5
WHERE R5.R05_ID = R6.R06_R05_ID_FK
AND R5.R05_DATE_TIME_CAPTURED <= TRUNC(SYSDATE) - 1825
AND R5.R05_STATUS = 'D'
AND R6.R06_STATUS = 'D';
BEGIN
-- this is new
open r5_cur;
loop
fetch r5_cur
BULK COLLECT INTO R05_IDS limit 100000;
exit when R05_IDS.count = 0;
-- this is all your code
-- Inserts all the deregistered records which are older than five years from R5_TABLE and R6_TABLE tables to the relevant archive tables
INSERT ALL
INTO R5_TABLE_archived(
R05_ID,
R05_R01_ID_FK,
R05_NUMBER,
R05_NUMBER_TYPE,
R05_STATUS,
R05_GSM_SUBSCRIBER_TYPE,
R05_DATE_TIME_CAPTURED)
values (
R5_R05_ID,
R5_R05_R01_ID_FK,
R5_NUMBER,
R5_NUMBER_TYPE,
R5_R05_STATUS,
R5_R05_GSM_SUBSCRIBER_TYPE,
R5_R05_DATE_TIME_CAPTURED)
INTO R6_TABLE_archived(
R06_ID,
R06_R05_ID_FK,
R06_R08_ID_FK,
R06_STATUS,
R06_REFERENCE_NUMBER,
R06_DATE_TIME_CAPTURED,
R06_DATE_EXPIRED)
values (
R6_R06_ID,
R6_R06_R05_ID_FK,
R6_R06_R08_ID_FK,
R6_R06_STATUS,
R6_R06_REFERENCE_NUMBER,
R6_R06_DATE_TIME_CAPTURED,
R6_R06_DATE_EXPIRED)
SELECT R5_R05_ID,
R5_R05_R01_ID_FK,
R5_NUMBER,
R5_NUMBER_TYPE,
R5_R05_STATUS,
R5_R05_GSM_SUBSCRIBER_TYPE,
R5_R05_DATE_TIME_CAPTURED,
R6_R06_ID,
R6_R06_R05_ID_FK,
R6_R06_R08_ID_FK,
R6_R06_STATUS,
R6_R06_REFERENCE_NUMBER,
R6_R06_DATE_TIME_CAPTURED,
R6_R06_DATE_EXPIRED
FROM
(
SELECT R5.R05_ID R5_R05_ID,
R5.R05_R01_ID_FK R5_R05_R01_ID_FK,
R5.R05_NUMBER R5_NUMBER,
R5.R05_NUMBER_TYPE R5_NUMBER_TYPE,
R5.R05_STATUS R5_R05_STATUS,
R5.R05_GSM_SUBSCRIBER_TYPE R5_R05_GSM_SUBSCRIBER_TYPE,
R5.R05_DATE_TIME_CAPTURED R5_R05_DATE_TIME_CAPTURED,
R6.R06_ID R6_R06_ID,
R6.R06_R05_ID_FK R6_R06_R05_ID_FK,
R6.R06_R08_ID_FK R6_R06_R08_ID_FK,
R6.R06_STATUS R6_R06_STATUS,
R6.R06_REFERENCE_NUMBER R6_R06_REFERENCE_NUMBER,
R6.R06_DATE_TIME_CAPTURED R6_R06_DATE_TIME_CAPTURED,
R6.R06_DATE_EXPIRED R6_R06_DATE_EXPIRED
FROM R6_TABLE R6 , R5_TABLE R5
WHERE R5.R05_ID = R6.R06_R05_ID_FK
AND R5.R05_DATE_TIME_CAPTURED <= TRUNC(SYSDATE) - 1825
AND R5.R05_STATUS = 'D'
AND R6.R06_STATUS = 'D'
-- restrict the insert to the chunk fetched in this iteration
AND R5.R05_ID IN (SELECT column_value FROM table(R05_IDS)));
--selects all the R01 IDs which matches with the above criteria and copy values to respective archive tables
SELECT UNIQUE R1.R01_ID AS R01_ID
BULK COLLECT INTO R01_IDS
FROM R1_TABLE R1, R5_TABLE R5
WHERE R5.R05_ID IN (Select column_value from table(R05_IDS))
AND R1.R01_ID NOT IN (
SELECT R01.R01_ID
FROM R1_TABLE R01,
R5_TABLE R05
WHERE R05.R05_STATUS != 'D'
AND R01.R01_ID = R05.R05_R01_ID_FK)
AND R1.R01_ID = R5.R05_R01_ID_FK;
--insert R1_TABLE tables values which matches with the above criteria into the R1_TABLE_ARCHIVED table
INSERT ALL
INTO R1_TABLE_ARCHIVED(R01_ID,R01_ID_TYPE,R01_IDENTITY_NUMBER,R01_PASSPORT_COUNTRY,R01_DATE_TIME_CAPTURED)
VALUES (RA1_R01_ID,RA1_R01_ID_TYPE,RA1_R01_IDENTITY_NUMBER,RA1_R01_PASSPORT_COUNTRY,RA1_R01_DATE_TIME_CAPTURED)
SELECT RA1_R01_ID,RA1_R01_ID_TYPE,RA1_R01_IDENTITY_NUMBER,RA1_R01_PASSPORT_COUNTRY,RA1_R01_DATE_TIME_CAPTURED
FROM (
SELECT
r1.R01_ID RA1_R01_ID,
r1.R01_ID_TYPE RA1_R01_ID_TYPE,
r1.R01_IDENTITY_NUMBER RA1_R01_IDENTITY_NUMBER,
r1.R01_PASSPORT_COUNTRY RA1_R01_PASSPORT_COUNTRY,
r1.R01_DATE_TIME_CAPTURED RA1_R01_DATE_TIME_CAPTURED
FROM
R1_TABLE r1
WHERE
r1.R01_ID IN (Select column_value from table(R01_IDS))
);
--insert R2_TABLE tables values which matches with the above criteria into the R2_TABLE_ARCHIVED table
INSERT ALL
INTO R2_TABLE_ARCHIVED(R02_ID,R02_R01_ID_FK,R02_fname,R02_SURNAME,R02_CONTACT_NUMBER,R02_DATE_TIME_CAPTURED)
VALUES(RA2_R02_ID,RA2_R02_R01_ID_FK,RA2_R02_fname,RA2_R02_SURNAME,RA2_R02_CONTACT_NUMBER,RA2_R02_DATE_TIME_CAPTURED)
SELECT RA2_R02_ID,RA2_R02_R01_ID_FK,RA2_R02_fname,RA2_R02_SURNAME,RA2_R02_CONTACT_NUMBER,RA2_R02_DATE_TIME_CAPTURED
FROM (
SELECT
r2.R02_ID RA2_R02_ID,
r2.R02_R01_ID_FK RA2_R02_R01_ID_FK,
r2.R02_fname RA2_R02_fname,
r2.R02_SURNAME RA2_R02_SURNAME,
r2.R02_CONTACT_NUMBER RA2_R02_CONTACT_NUMBER,
r2.R02_DATE_TIME_CAPTURED RA2_R02_DATE_TIME_CAPTURED
FROM
R2_TABLE r2
WHERE
r2.R02_R01_ID_FK IN (Select column_value from table(R01_IDS)));
--All the delete queries to remove the above copied values from the parent tables respectively
DELETE FROM R1_TABLE WHERE R01_ID IN (Select column_value from table(R01_IDS));
DELETE FROM R2_TABLE WHERE R02_R01_ID_FK IN (Select column_value from table(R01_IDS));
DELETE FROM R5_TABLE WHERE R05_R01_ID_FK IN (Select column_value from table(R05_IDS));
DELETE FROM R6_TABLE WHERE R06_R05_ID_FK IN (Select column_value from table(R05_IDS));
end loop;
close r5_cur;
COMMIT;
END;
/
Don't forget to scroll all the way down to the END LOOP and the CLOSE of the cursor!
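As a side note, once the IDs are in a collection, the chunk-by-chunk DELETE statements can also be written with FORALL, which hands the whole chunk to the SQL engine in a single round trip. A sketch under hypothetical names (l_ids, some_table, and some_id are placeholders, not objects from the script above):

```sql
-- Assumes l_ids (a nested table of NUMBER) holds the current chunk of keys.
FORALL i IN 1 .. l_ids.COUNT
  DELETE FROM some_table            -- placeholder table name
  WHERE some_id = l_ids(i);
```

Both forms delete the same rows; FORALL simply binds the collection directly instead of going through an `IN (SELECT column_value FROM table(...))` subquery on every chunk.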
On the topic of sql - moving large amounts of data in chunks, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47807777/