elasticsearch - Concurrent file parsing and insertion into Elasticsearch

I've recently been playing with Go and wrote a small script that parses log files and inserts them into Elasticsearch. For each file I spawn a goroutine like this:

wg := sync.WaitGroup{}
wg.Add(len(files))
for _, file := range files {
	go func(f os.FileInfo) {
		defer wg.Done()
		ProcessFile(f.Name(), config.OriginFilePath, config.WorkingFilePath, config.ArchiveFilePath,
			fmt.Sprintf("http://%v:%v", config.ElasticSearch.Host, config.ElasticSearch.Port),
			config.ProviderIndex, config.NetworkData)
	}(file)
}
wg.Wait()

In my processFile I have the function that sends to Elasticsearch:

func BulkInsert(lines []string, ES *elastic.Client) (*elastic.Response, error) {
	r, err := ES.PerformRequest("POST", "/_bulk", url.Values{}, strings.Join(lines, "\n")+"\n")
	if err != nil {
		return nil, err
	}
	return r, nil
}

The problem is that I don't fully understand how goroutines work. My understanding is that the send to Elasticsearch blocks one of my goroutines from executing, so I tried spawning another goroutine for the Elasticsearch bulk insert using the same approach: a WaitGroup, go func(){ defer wg.Done(); BulkInsert(elems, ES) }(), and wg.Wait() before my function returns. However, I found that in the end not all of my events made it into Elasticsearch. I think this is because the goroutines return without the bulk request having been sent or waited on to completion.
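Spelled out, that pattern presumably looks roughly like the sketch below; elems and ES come from the surrounding (omitted) function, and this is a reconstruction from the description rather than the actual code:

// hypothetical reconstruction of the bulk-insert goroutine described above
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
	defer wg.Done()
	// the error from BulkInsert is discarded here, so a failed bulk
	// request would go unnoticed
	BulkInsert(elems, ES)
}()
// without this Wait, the enclosing function could return before the
// bulk request has been sent and acknowledged
wg.Wait()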

My question is: is my approach to this problem correct? And can I get better performance?

Best Answer

Can I achieve better performance?

Hard to say; it depends on the capabilities of both the sender and the receiver.

My question is, is my approach to this problem correct?

This might help you better understand goroutines:

package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

func main() {

	addr := "127.0.0.1:2074"

	srv := http.Server{
		Addr: addr,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			log.Println("hit ", r.URL.String())
			<-time.After(time.Second)
			log.Println("done ", r.URL.String())
		}),
	}
	fail(unblock(srv.ListenAndServe))

	jobs := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

	// case 1
	// creates 10 goroutines,
	// which trigger 10 concurrent GET requests
	{
		wg := sync.WaitGroup{}
		wg.Add(len(jobs))
		log.Printf("starting %v jobs\n", len(jobs))
		for _, job := range jobs {
			go func(job int) {
				defer wg.Done()
				http.Get(fmt.Sprintf("http://%v/job/%v", addr, job))
			}(job)
		}
		wg.Wait()
		log.Printf("done %v jobs\n", len(jobs))
	}

	log.Println()
	log.Println("=================")
	log.Println()

	// case 2
	// limits concurrency to 3 goroutines,
	// so at most 3 GET requests run at a time
	{
		wg := sync.WaitGroup{}
		wg.Add(len(jobs))
		in := make(chan string)
		limit := make(chan bool, 3)
		log.Printf("starting %v jobs\n", len(jobs))
		go func() {
			for url := range in {
				limit <- true
				go func(url string) {
					defer wg.Done()
					http.Get(url)
					<-limit
				}(url)
			}
		}()
		for _, job := range jobs {
			in <- fmt.Sprintf("http://%v/job/%v", addr, job)
		}
		wg.Wait()
		log.Printf("done %v jobs\n", len(jobs))
	}

	log.Println()
	log.Println("=================")
	log.Println()

	// case 2: rewrite
	// the same pattern factored into a helper,
	// limited to 6 concurrent GET requests
	{
		wait, add := parallel(6)
		log.Printf("starting %v jobs\n", len(jobs))
		for _, job := range jobs {
			url := fmt.Sprintf("http://%v/job/%v", addr, job)
			add(func() {
				http.Get(url)
			})
		}
		wait()
		log.Printf("done %v jobs\n", len(jobs))
	}
}

func parallel(c int) (func(), func(block func())) {
	wg := sync.WaitGroup{}
	in := make(chan func())
	limit := make(chan bool, c)
	go func() {
		for block := range in {
			limit <- true
			go func(block func()) {
				defer wg.Done()
				block()
				<-limit
			}(block)
		}
	}()
	return wg.Wait, func(block func()) {
		wg.Add(1)
		in <- block
	}
}

func unblock(block func() error) error {
	w := make(chan error)
	go func() { w <- block() }()
	select {
	case err := <-w:
		return err
	case <-time.After(time.Millisecond):
	}
	return nil
}

func fail(err error) {
	if err != nil {
		panic(err)
	}
}

Output

$ go run main.go 
2017/09/14 01:30:50 starting 10 jobs
2017/09/14 01:30:50 hit /job/0
2017/09/14 01:30:50 hit /job/4
2017/09/14 01:30:50 hit /job/5
2017/09/14 01:30:50 hit /job/2
2017/09/14 01:30:50 hit /job/9
2017/09/14 01:30:50 hit /job/1
2017/09/14 01:30:50 hit /job/3
2017/09/14 01:30:50 hit /job/7
2017/09/14 01:30:50 hit /job/8
2017/09/14 01:30:50 hit /job/6
2017/09/14 01:30:51 done /job/5
2017/09/14 01:30:51 done /job/4
2017/09/14 01:30:51 done /job/2
2017/09/14 01:30:51 done /job/0
2017/09/14 01:30:51 done /job/6
2017/09/14 01:30:51 done /job/9
2017/09/14 01:30:51 done /job/1
2017/09/14 01:30:51 done /job/3
2017/09/14 01:30:51 done /job/7
2017/09/14 01:30:51 done /job/8
2017/09/14 01:30:51 done 10 jobs
2017/09/14 01:30:51
2017/09/14 01:30:51 =================
2017/09/14 01:30:51
2017/09/14 01:30:51 starting 10 jobs
2017/09/14 01:30:51 hit /job/0
2017/09/14 01:30:51 hit /job/2
2017/09/14 01:30:51 hit /job/1
2017/09/14 01:30:52 done /job/2
2017/09/14 01:30:52 done /job/0
2017/09/14 01:30:52 done /job/1
2017/09/14 01:30:52 hit /job/3
2017/09/14 01:30:52 hit /job/4
2017/09/14 01:30:52 hit /job/5
2017/09/14 01:30:53 done /job/3
2017/09/14 01:30:53 done /job/4
2017/09/14 01:30:53 done /job/5
2017/09/14 01:30:53 hit /job/6
2017/09/14 01:30:53 hit /job/7
2017/09/14 01:30:53 hit /job/8
2017/09/14 01:30:54 done /job/6
2017/09/14 01:30:54 done /job/7
2017/09/14 01:30:54 done /job/8
2017/09/14 01:30:54 hit /job/9
2017/09/14 01:30:55 done /job/9
2017/09/14 01:30:55 done 10 jobs
2017/09/14 01:30:55
2017/09/14 01:30:55 =================
2017/09/14 01:30:55
2017/09/14 01:30:55 starting 10 jobs
2017/09/14 01:30:55 hit /job/0
2017/09/14 01:30:55 hit /job/1
2017/09/14 01:30:55 hit /job/4
2017/09/14 01:30:55 hit /job/2
2017/09/14 01:30:55 hit /job/3
2017/09/14 01:30:55 hit /job/5
2017/09/14 01:30:56 done /job/0
2017/09/14 01:30:56 hit /job/6
2017/09/14 01:30:56 done /job/1
2017/09/14 01:30:56 done /job/2
2017/09/14 01:30:56 done /job/4
2017/09/14 01:30:56 hit /job/7
2017/09/14 01:30:56 done /job/3
2017/09/14 01:30:56 hit /job/9
2017/09/14 01:30:56 hit /job/8
2017/09/14 01:30:56 done /job/5
2017/09/14 01:30:57 done /job/6
2017/09/14 01:30:57 done /job/7
2017/09/14 01:30:57 done /job/9
2017/09/14 01:30:57 done /job/8
2017/09/14 01:30:57 done 10 jobs
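
Not part of the original answer, but as a rough sketch, the same bounded-concurrency idea could be applied to the file-processing loop from the question. This assumes ProcessFile, files, and the config fields exist exactly as shown there; the limit of 4 is an arbitrary choice:

// reuse the parallel() helper from the example above to cap the number
// of files (and therefore bulk requests) processed at the same time
wait, add := parallel(4)
for _, file := range files {
	f := file // capture the loop variable for the closure
	add(func() {
		ProcessFile(f.Name(), config.OriginFilePath, config.WorkingFilePath, config.ArchiveFilePath,
			fmt.Sprintf("http://%v:%v", config.ElasticSearch.Host, config.ElasticSearch.Port),
			config.ProviderIndex, config.NetworkData)
	})
}
wait()

The buffered limit channel inside parallel acts as a semaphore: each job acquires a slot before starting and releases it when it finishes, so at most c jobs are in flight at any moment.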

Regarding "elasticsearch - Concurrent file parsing and insertion into Elasticsearch", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46204606/
