
sql-server - Azure Data Factory - Bulk import from Blob to Azure SQL


I have a simple file, FD_GROUP.TXT, whose contents are:

~0100~^~Dairy and Egg Products~
~0200~^~Spices and Herbs~
~0300~^~Baby Foods~
~0400~^~Fats and Oils~
~0500~^~Poultry Products~

I am trying to bulk import these files (some have 700,000 rows) into a SQL database using Azure Data Factory.

The strategy is to first split the columns on ^, then replace the tildes (~) with empty strings so they are stripped out, and finally do the insert.
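On a single value, that cleanup step is just an ordinary T-SQL REPLACE; for example:

SELECT REPLACE('~0100~', '~', '') AS FoodGroupCode; -- returns 0100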

1. SQL solution:

DECLARE @CsvFilePath NVARCHAR(1000) = 'D:\CodePurehope\Dev\NutrientData\FD_GROUP.txt';

-- Staging table for the raw file, split on ^
CREATE TABLE #TempTable
(
    [FoodGroupCode] VARCHAR(666) NOT NULL,
    [FoodGroupDescription] VARCHAR(60) NOT NULL
)

-- BULK INSERT does not accept a variable as the file path, so build it dynamically
DECLARE @sql NVARCHAR(4000) = 'BULK INSERT #TempTable FROM ''' + @CsvFilePath + ''' WITH ( FIELDTERMINATOR =''^'', ROWTERMINATOR =''\n'' )';
EXEC(@sql);

-- Strip the wrapping tildes from both columns
UPDATE #TempTable
SET [FoodGroupCode] = REPLACE([FoodGroupCode], '~', ''),
    [FoodGroupDescription] = REPLACE([FoodGroupDescription], '~', '')
GO

-- Copy the cleaned rows into the target table
INSERT INTO [dbo].[FoodGroupDescriptions]
(
    [FoodGroupCode],
    [FoodGroupDescription]
)
SELECT
    [FoodGroupCode],
    [FoodGroupDescription]
FROM
    #TempTable
GO

DROP TABLE #TempTable

2. SSIS ETL package solution:

The Flat File Source is delimited on ^, followed by a Derived Column Transformation that replaces the unnecessary tildes (~), along the lines shown below.
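In the SSIS expression language, the derived columns would look something like this (column names taken from the file layout above):

REPLACE(FoodGroupCode, "~", "")
REPLACE(FoodGroupDescription, "~", "")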

How can I do this with Microsoft Azure Data Factory?
I have uploaded FD_GROUP.TXT to an Azure Storage blob as the input, and the table is ready on Azure SQL for the output.
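The DDL for the output table is not shown in the question; a minimal sketch, assuming the Int32/String columns declared in the datasets below, would be:

-- Assumed schema; the actual column definitions are not shown in the question
CREATE TABLE [dbo].[FoodGroupDescriptions]
(
    [FoodGroupCode] INT NOT NULL,
    [FoodGroupDescription] VARCHAR(60) NOT NULL
);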

I have:
- 2 linked services: AzureStorage and AzureSQL (sketched below).
- 2 datasets: the blob as input and the SQL table as output.
- 1 pipeline.
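The linked service definitions themselves are not shown in the question; in Data Factory (v1) JSON they would look roughly like this, with placeholder account and connection details:

{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        }
    }
}

{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>@<server>;Password=<password>;Encrypt=True"
        }
    }
}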


FoodGroupDescriptionsAzureBlob settings

{
    "name": "FoodGroupDescriptionsAzureBlob",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "FD_GROUP.txt",
            "folderPath": "nutrition-data/NutrientData/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "^"
            }
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}

FoodGroupDescriptionsSQLAzure settings

{
    "name": "FoodGroupDescriptionsSQLAzure",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureSqlTable",
        "linkedServiceName": "AzureSqlLinkedService",
        "typeProperties": {
            "tableName": "FoodGroupDescriptions"
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}

FoodGroupDescriptionsPipeline settings

{
    "name": "FoodGroupDescriptionsPipeline",
    "properties": {
        "description": "Copy data from a blob to Azure SQL table",
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "BlobSource"
                    },
                    "sink": {
                        "type": "SqlSink",
                        "writeBatchSize": 10000,
                        "writeBatchTimeout": "60.00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "FoodGroupDescriptionsAzureBlob"
                    }
                ],
                "outputs": [
                    {
                        "name": "FoodGroupDescriptionsSQLAzure"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst"
                },
                "scheduler": {
                    "frequency": "Minute",
                    "interval": 15
                },
                "name": "CopyFromBlobToSQL",
                "description": "Bulk Import FoodGroupDescriptions"
            }
        ],
        "start": "2015-07-13T00:00:00Z",
        "end": "2015-07-14T00:00:00Z",
        "isPaused": false,
        "hubName": "gymappdatafactory_hub",
        "pipelineMode": "Scheduled"
    }
}

This does not work in Azure Data Factory, and I don't know how to do the replace in this scenario. Any help is appreciated.

Best Answer

I took your code and was able to get it working by doing the following:

In the FoodGroupDescriptionsAzureBlob JSON definition, you need to add "external": true inside the properties node. The blob input file was created by an external source rather than by an Azure Data Factory pipeline; setting this to true lets Azure Data Factory know that the input should be ready for consumption.

Also add "quoteChar": "~" to the "format" node in the blob input definition. Since the data appears to be wrapped in "~", this strips those characters from the data, so the Int32 column you defined will insert correctly into your SQL table.

Full blob definition:

{
    "name": "FoodGroupDescriptionsAzureBlob",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "FD_GROUP.txt",
            "folderPath": "nutrition-data/NutrientData/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "^",
                "quoteChar": "~"
            }
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        },
        "external": true,
        "policy": {}
    }
}

Because you set a 15-minute interval, and the pipeline's start and end dates span a full day, a slice will be created every 15 minutes for the entire duration of the pipeline run. Since you only want this to run once, change the start and end to something like this:

  "start": "2015-07-13T00:00:00Z",
"end": "2015-07-13T00:15:00Z",

This will create exactly one slice.

Hope this helps.

The original question, sql-server - Azure Data Factory - Bulk import from Blob to Azure SQL, can be found on Stack Overflow: https://stackoverflow.com/questions/35965183/
