
ruby - Speeding up a CSV import


I want to import a lot of CSV data (not directly into AR, but after some fetching first), and my code is very slow.

def csv_import
  require 'csv'
  file = File.open("/#{Rails.public_path}/uploads/shate.csv")
  csv = CSV.open(file, "r:ISO-8859-15:UTF-8", :col_sep => ";", :row_sep => :auto, :headers => :first_row)

  csv.each do |row|
    # the first column holds "name_supplier"; split it into its two parts
    ename, esupp = row[0].to_s.split(/_/)
    eprice = row[6]
    eqnt   = row[1]

    if ename.present? && ename.size > 3
      search_condition = "*" + ename.upcase + "*"

      if esupp.present?
        supplier = Supplier.where("SUP_BRAND like ?", "%#{esupp}%").first
        logger.warn("!!! *** supp !!!")
      end

      if supplier.present?
        @search = ArtLookup.find(:all, :conditions => ['MATCH (ARL_SEARCH_NUMBER) AGAINST(? IN BOOLEAN MODE)', search_condition.gsub(/[^0-9A-Za-z]/, '')])
        @articles = Article.find(:all, :conditions => { :ART_ID => @search.map(&:ARL_ART_ID) })
        @art_concret = @articles.find_all { |item| item.ART_ARTICLE_NR.gsub(/[^0-9A-Za-z]/, '').include?(ename.gsub(/[^0-9A-Za-z]/, '')) }

        @aa = @art_concret.find { |item| item['ART_SUP_ID'] == supplier.SUP_ID }
        # reset @art on every row so a stale match from a previous row is not reused
        @art = @aa.present? ? Article.find_by_ART_ID(@aa) : nil

        if @art.present?
          @art.PRICEM = eprice
          @art.QUANTITYM = eqnt
          @art.datetime_of_update = DateTime.now
          @art.save
        end
      end
      logger.warn("------------------------------")
    end
  end
end

Even if I strip everything out and leave just this, it is still slow.

def csv_import
  require 'csv'
  file = File.open("/#{Rails.public_path}/uploads/shate.csv")
  csv = CSV.open(file, "r:ISO-8859-15:UTF-8", :col_sep => ";", :row_sep => :auto, :headers => :first_row)

  csv.each do |row|
  end
end

Can anyone help me get more speed using fastercsv?

Best answer

I don't think it will get much faster.

That said, some testing shows that a significant part of the time is spent on transcoding (about 15% in my test case). So if you can skip that (e.g. by creating the CSV in UTF-8 in the first place), you will see some improvement.
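If you want to check what transcoding costs on your own data, here is a minimal sketch using the standard Benchmark library; it compares a transcoding parse against one that leaves the bytes in ISO-8859-15 (the ~15% above is my measurement, so results will vary):

require 'benchmark'
require 'csv'

path = "/#{Rails.public_path}/uploads/shate.csv"

Benchmark.bm(22) do |x|
  # parse with on-the-fly transcoding, as in the question
  x.report('ISO-8859-15 -> UTF-8:') do
    CSV.foreach(path, :encoding => 'ISO-8859-15:UTF-8', :col_sep => ';', :headers => :first_row) { |row| }
  end
  # parse the same bytes without any transcoding
  x.report('ISO-8859-15 only:') do
    CSV.foreach(path, :encoding => 'ISO-8859-15', :col_sep => ';', :headers => :first_row) { |row| }
  end
end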

Besides, according to ruby-doc.org the "primary" interface for reading a CSV is foreach, so this should be preferred:

def csv_import
  require 'csv'
  CSV.foreach("/#{Rails.public_path}/uploads/shate.csv", :encoding => 'ISO-8859-15:UTF-8', :col_sep => ';', :row_sep => :auto, :headers => :first_row) do |row|
    # use row here...
  end
end
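As a side benefit, CSV.foreach streams the file and yields one row at a time instead of reading everything into memory first, so memory usage stays flat no matter how large the upload is.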

Update

You can also try splitting the parsing into several threads. I measured some performance gains experimenting with this piece of code (handling of the headers is left out):

N = 10000

def csv_import
  all_lines = File.read("/#{Rails.public_path}/uploads/shate.csv").lines
  # parts will contain the parsed CSV data of the different chunks/slices
  # threads will contain the threads
  parts, threads = [], []
  # iterate over chunks/slices of N lines of the CSV file
  all_lines.each_slice(N) do |plines|
    # add an array object for the current chunk to parts
    parts << result = []
    # create a thread for parsing the current chunk, hand it over the chunk
    # and the current parts sub-array
    threads << Thread.new(plines.join, result) do |tsrc, tresult|
      # parse the chunk
      parsed = CSV.parse(tsrc, :encoding => 'ISO-8859-15:UTF-8', :col_sep => ";", :row_sep => :auto)
      # add the parsed data to the parts sub-array
      tresult.replace(parsed.to_a)
    end
  end
  # wait for all threads to finish
  threads.each(&:join)
  # merge all the parts sub-arrays into one big array and iterate over it
  parts.flatten(1).each do |row|
    # use row (Array)
  end
end

This splits the input into chunks of 10,000 lines and creates one parsing thread per chunk. Each thread gets handed a sub-array in the array parts to store its result. When all threads are done (after threads.each(&:join)), the results of all chunks in parts are joined together, and that's it.
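One caveat worth knowing: on MRI the global VM lock keeps Ruby threads from parsing truly in parallel, so how much this approach buys you depends on the interpreter (JRuby, for instance, can spread the chunks across cores). The chunk size N is also worth tuning against your typical file size.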

Regarding ruby - speeding up a CSV import, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12166389/
