This follows the previous post, "Understanding C# async/await through an example: non-parallel async, parallel async, and concurrency control for parallel async".
A few days ago I published two posts on C# async/await. The first was widely read, liked, and commented on; it was fairly simple, so I assume everyone followed it. The second had fewer readers, fewer likes, and no comments at all.
That puzzled me, because the second post was the one that mattered. Even the top names in the .NET world rarely write code like that, and the reason comes down to the C# async/await syntactic sugar. You can write the same logic without the sugar; Java 8 has no such sugar, and people still write very capable code with it. But with async/await, an ordinary business programmer, even a mediocre one, can write high-throughput, high-performance code, and that is the whole point. Top experts do not need this style: they have the skill, the focus, and plenty of other techniques. For ordinary programmers, though, writing the equivalent by hand is tedious and bug-prone. I chose the word "exploring" for the title deliberately: is there a better practice, one that lets even beginners write parallel async code?
The practical use case is querying Elasticsearch (ES). I have recently found ES performance to be excellent. A console snapshot from the service, deployed on a real server rather than my own machine: 379 ES queries completed in just 0.185 seconds. The time fluctuates, of course; a few tenths of a second is normal, and occasionally it exceeds one second. What ES handles worst are slow queries: complex conditions and range queries. My strategy is to replace them with many precise queries, which plays to ES's very high throughput.
Since the query count is high, single-threaded (that is, synchronous) code will not do; the queries must run in parallel. Can parallel code be written in Python or Java? Of course. Yet a former colleague of mine wrote his multi-query ES code in Python synchronously. Why not in parallel? Because parallel code is harder to write, so people avoid it when they can. You may have the skill and the focus to manage it, but what about everyone else? The real goal is not just parallel code, but parallel code that is simple to write and does not disturb the original structure of the logic.
The baseline everyone can write is plain async/await. Here is one of my methods; the main work is issuing many async requests inside a double loop (skim it and move on):
/// <summary>
/// xxx query
/// </summary>
public async Task<List<AccompanyInfo>> Query2(string strStartTime, string strEndTime, int kpCountThreshold, int countThreshold, int distanceThreshold, int timeThreshold, List<PeopleCluster> peopleClusterList)
{
List<AccompanyInfo> resultList = new List<AccompanyInfo>();
Stopwatch sw = Stopwatch.StartNew();
//build the clusterId -> PeopleCluster lookup
Dictionary<string, PeopleCluster> clusterIdPeopleDict = new Dictionary<string, PeopleCluster>();
foreach (PeopleCluster peopleCluster in peopleClusterList)
{
foreach (string clusterId in peopleCluster.ClusterIds)
{
if (!clusterIdPeopleDict.ContainsKey(clusterId))
{
clusterIdPeopleDict.Add(clusterId, peopleCluster);
}
}
}
int queryCount = 0;
Dictionary<string, AccompanyInfo> dict = new Dictionary<string, AccompanyInfo>();
foreach (PeopleCluster people1 in peopleClusterList)
{
List<PeopleFeatureInfo> peopleFeatureList = await ServiceFactory.Get<PeopleFeatureQueryService>().Query(strStartTime, strEndTime, people1);
queryCount++;
foreach (PeopleFeatureInfo peopleFeatureInfo1 in peopleFeatureList)
{
DateTime capturedTime = DateTime.ParseExact(peopleFeatureInfo1.captured_time, "yyyyMMddHHmmss", CultureInfo.InvariantCulture);
string strStartTime2 = capturedTime.AddSeconds(-timeThreshold).ToString("yyyyMMddHHmmss");
string strEndTime2 = capturedTime.AddSeconds(timeThreshold).ToString("yyyyMMddHHmmss");
List<PeopleFeatureInfo> peopleFeatureList2 = await ServiceFactory.Get<PeopleFeatureQueryService>().QueryExcludeSelf(strStartTime2, strEndTime2, people1);
queryCount++;
if (peopleFeatureList2.Count > 0)
{
foreach (PeopleFeatureInfo peopleFeatureInfo2 in peopleFeatureList2)
{
string key = null;
PeopleCluster people2 = null;
string people2ClusterId = null;
if (clusterIdPeopleDict.ContainsKey(peopleFeatureInfo2.cluster_id.ToString()))
{
people2 = clusterIdPeopleDict[peopleFeatureInfo2.cluster_id.ToString()];
key = $"{string.Join(",", people1.ClusterIds)}_{string.Join(",", people2.ClusterIds)}";
}
else
{
people2ClusterId = peopleFeatureInfo2.cluster_id.ToString();
key = $"{string.Join(",", people1.ClusterIds)}_{string.Join(",", people2ClusterId)}";
}
double distance = LngLatUtil.CalcDistance(peopleFeatureInfo1.Longitude, peopleFeatureInfo1.Latitude, peopleFeatureInfo2.Longitude, peopleFeatureInfo2.Latitude);
if (distance > distanceThreshold) continue;
AccompanyInfo accompanyInfo;
if (dict.ContainsKey(key))
{
accompanyInfo = dict[key];
}
else
{
accompanyInfo = new AccompanyInfo();
dict.Add(key, accompanyInfo);
}
accompanyInfo.People1 = people1;
if (people2 != null)
{
accompanyInfo.People2 = people2;
}
else
{
accompanyInfo.ClusterId2 = people2ClusterId;
}
AccompanyItem accompanyItem = new AccompanyItem();
accompanyItem.Info1 = peopleFeatureInfo1;
accompanyItem.Info2 = peopleFeatureInfo2;
accompanyInfo.List.Add(accompanyItem);
accompanyInfo.Count++;
resultList.Add(accompanyInfo);
}
}
}
}
resultList = resultList.FindAll(a => (a.People2 != null && a.Count >= kpCountThreshold) || a.Count >= countThreshold);
//deduplicate
int beforeDistinctCount = resultList.Count;
resultList = resultList.DistinctBy(a =>
{
string str1 = string.Join(",", a.People1.ClusterIds);
string str2 = a.People2 != null ? string.Join(",", a.People2.ClusterIds) : string.Empty;
string str3 = a.ClusterId2 ?? string.Empty;
StringBuilder sb = new StringBuilder();
foreach (AccompanyItem item in a.List)
{
var info2 = item.Info2;
sb.Append($"{info2.camera_id},{info2.captured_time},{info2.cluster_id}");
}
return $"{str1}_{str2}_{str3}_{sb}";
}).ToList();
sw.Stop();
string msg = $"xxx query, elapsed: {sw.Elapsed.TotalSeconds:0.000} s, queries: {queryCount}, dedup: {beforeDistinctCount}-->{resultList.Count}";
Console.WriteLine(msg);
LogUtil.Info(msg);
return resultList;
}
The code above works, and yet it has a flaw: although the requests inside the double loop use async/await, they do not run in parallel. Each await completes before the next request is even issued, so the total time is the sum of every round trip. How do we optimize it? See the following code:
/// <summary>
/// xxx query
/// </summary>
public async Task<List<AccompanyInfo>> Query(string strStartTime, string strEndTime, int kpCountThreshold, int countThreshold, int distanceThreshold, int timeThreshold, List<PeopleCluster> peopleClusterList)
{
List<AccompanyInfo> resultList = new List<AccompanyInfo>();
Stopwatch sw = Stopwatch.StartNew();
//build the clusterId -> PeopleCluster lookup
Dictionary<string, PeopleCluster> clusterIdPeopleDict = new Dictionary<string, PeopleCluster>();
foreach (PeopleCluster peopleCluster in peopleClusterList)
{
foreach (string clusterId in peopleCluster.ClusterIds)
{
if (!clusterIdPeopleDict.ContainsKey(clusterId))
{
clusterIdPeopleDict.Add(clusterId, peopleCluster);
}
}
}
//start the first-level tasks (outer loop) without awaiting
Dictionary<PeopleCluster, Task<List<PeopleFeatureInfo>>> tasks1 = new Dictionary<PeopleCluster, Task<List<PeopleFeatureInfo>>>();
foreach (PeopleCluster people1 in peopleClusterList)
{
var task1 = ServiceFactory.Get<PeopleFeatureQueryService>().Query(strStartTime, strEndTime, people1);
tasks1.Add(people1, task1);
}
//await the first-level tasks, cache their results, and start the second-level tasks
Dictionary<string, Task<List<PeopleFeatureInfo>>> tasks2 = new Dictionary<string, Task<List<PeopleFeatureInfo>>>();
Dictionary<PeopleCluster, List<PeopleFeatureInfo>> cache1 = new Dictionary<PeopleCluster, List<PeopleFeatureInfo>>();
foreach (PeopleCluster people1 in peopleClusterList)
{
List<PeopleFeatureInfo> peopleFeatureList = await tasks1[people1];
cache1.Add(people1, peopleFeatureList);
foreach (PeopleFeatureInfo peopleFeatureInfo1 in peopleFeatureList)
{
DateTime capturedTime = DateTime.ParseExact(peopleFeatureInfo1.captured_time, "yyyyMMddHHmmss", CultureInfo.InvariantCulture);
string strStartTime2 = capturedTime.AddSeconds(-timeThreshold).ToString("yyyyMMddHHmmss");
string strEndTime2 = capturedTime.AddSeconds(timeThreshold).ToString("yyyyMMddHHmmss");
var task2 = ServiceFactory.Get<PeopleFeatureQueryService>().QueryExcludeSelf(strStartTime2, strEndTime2, people1);
string task2Key = $"{strStartTime2}_{strEndTime2}_{string.Join(",", people1.ClusterIds)}";
tasks2.TryAdd(task2Key, task2);
}
}
//read the cached first-level results and await the second-level tasks
Dictionary<string, AccompanyInfo> dict = new Dictionary<string, AccompanyInfo>();
foreach (PeopleCluster people1 in peopleClusterList)
{
List<PeopleFeatureInfo> peopleFeatureList = cache1[people1];
foreach (PeopleFeatureInfo peopleFeatureInfo1 in peopleFeatureList)
{
DateTime capturedTime = DateTime.ParseExact(peopleFeatureInfo1.captured_time, "yyyyMMddHHmmss", CultureInfo.InvariantCulture);
string strStartTime2 = capturedTime.AddSeconds(-timeThreshold).ToString("yyyyMMddHHmmss");
string strEndTime2 = capturedTime.AddSeconds(timeThreshold).ToString("yyyyMMddHHmmss");
string task2Key = $"{strStartTime2}_{strEndTime2}_{string.Join(",", people1.ClusterIds)}";
List<PeopleFeatureInfo> peopleFeatureList2 = await tasks2[task2Key];
if (peopleFeatureList2.Count > 0)
{
foreach (PeopleFeatureInfo peopleFeatureInfo2 in peopleFeatureList2)
{
string key = null;
PeopleCluster people2 = null;
string people2ClusterId = null;
if (clusterIdPeopleDict.ContainsKey(peopleFeatureInfo2.cluster_id.ToString()))
{
people2 = clusterIdPeopleDict[peopleFeatureInfo2.cluster_id.ToString()];
key = $"{string.Join(",", people1.ClusterIds)}_{string.Join(",", people2.ClusterIds)}";
}
else
{
people2ClusterId = peopleFeatureInfo2.cluster_id.ToString();
key = $"{string.Join(",", people1.ClusterIds)}_{string.Join(",", people2ClusterId)}";
}
double distance = LngLatUtil.CalcDistance(peopleFeatureInfo1.Longitude, peopleFeatureInfo1.Latitude, peopleFeatureInfo2.Longitude, peopleFeatureInfo2.Latitude);
if (distance > distanceThreshold) continue;
AccompanyInfo accompanyInfo;
if (dict.ContainsKey(key))
{
accompanyInfo = dict[key];
}
else
{
accompanyInfo = new AccompanyInfo();
dict.Add(key, accompanyInfo);
}
accompanyInfo.People1 = people1;
if (people2 != null)
{
accompanyInfo.People2 = people2;
}
else
{
accompanyInfo.ClusterId2 = people2ClusterId;
}
AccompanyItem accompanyItem = new AccompanyItem();
accompanyItem.Info1 = peopleFeatureInfo1;
accompanyItem.Info2 = peopleFeatureInfo2;
accompanyInfo.List.Add(accompanyItem);
accompanyInfo.Count++;
resultList.Add(accompanyInfo);
}
}
}
}
resultList = resultList.FindAll(a => (a.People2 != null && a.Count >= kpCountThreshold) || a.Count >= countThreshold);
//deduplicate
int beforeDistinctCount = resultList.Count;
resultList = resultList.DistinctBy(a =>
{
string str1 = string.Join(",", a.People1.ClusterIds);
string str2 = a.People2 != null ? string.Join(",", a.People2.ClusterIds) : string.Empty;
string str3 = a.ClusterId2 ?? string.Empty;
StringBuilder sb = new StringBuilder();
foreach (AccompanyItem item in a.List)
{
var info2 = item.Info2;
sb.Append($"{info2.camera_id},{info2.captured_time},{info2.cluster_id}");
}
return $"{str1}_{str2}_{str3}_{sb}";
}).ToList();
//sort
foreach (AccompanyInfo item in resultList)
{
item.List.Sort((a, b) => -string.Compare(a.Info1.captured_time, b.Info1.captured_time));
}
sw.Stop();
string msg = $"xxx query, elapsed: {sw.Elapsed.TotalSeconds:0.000} s, queries: {tasks1.Count + tasks2.Count}, dedup: {beforeDistinctCount}-->{resultList.Count}";
Console.WriteLine(msg);
LogUtil.Info(msg);
return resultList;
}
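The rewrite above boils down to one pattern: start every task first without awaiting, then loop again and await the cached tasks, which by then are already completed or in flight. Here is a minimal, self-contained sketch of that pattern; the `FakeQuery` method simulates an ES call with `Task.Delay` and is purely illustrative, not the service code from this article:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

class FanOutDemo
{
    // Simulated ES query: ~100 ms of I/O latency, echoes its id back.
    static async Task<string> FakeQuery(int id)
    {
        await Task.Delay(100);
        return $"result-{id}";
    }

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Phase 1: start ALL the tasks first -- no await here,
        // so the 20 "requests" are in flight concurrently.
        var tasks = new Dictionary<int, Task<string>>();
        for (int i = 0; i < 20; i++)
        {
            tasks.Add(i, FakeQuery(i));
        }

        // Phase 2: loop again and await each cached task.
        // Completed tasks return immediately, so the total time
        // is roughly one query's latency, not twenty.
        var results = new List<string>();
        for (int i = 0; i < 20; i++)
        {
            results.Add(await tasks[i]);
        }

        sw.Stop();
        Console.WriteLine($"{results.Count} queries in {sw.Elapsed.TotalSeconds:0.000} s");
    }
}
```

Run sequentially, twenty 100 ms calls would take about two seconds; the fan-out version finishes in roughly a tenth of that, and the loop structure of the original code is untouched.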
Some readers will think of Task.WhenAll:
List<PeopleFeatureInfo>[] listArray = await Task.WhenAll(tasks2.Values);
But then how do you get each individual result back inside the double loop? It can certainly be done, but now you have to stop and think about how. With my approach, a result can be taken directly inside the double loop:
List<PeopleFeatureInfo> list = await tasks2[task2Key];
For comparison, here is my former colleague's Python take on the same kind of multi-query job:
def get_es_multiprocess(index_list, people_list, core_percent, rev_clusterid_idcard_dict):
'''
Read ES data with a pool of processes, merge into one big data frame, and sort by time
:return: a fairly large data frame
'''
col_list = ["cluster_id", "camera_id", "captured_time"]
pool = Pool(processes=int(mp.cpu_count() * core_percent))
input_list = [(i, people_list, col_list) for i in index_list]
res = pool.map(get_es, input_list)
if not res:
return None
pool.close()
pool.join()
df_all = pd.DataFrame(columns=col_list+['longitude', 'latitude'])
for df in res:
df_all = pd.concat([df_all, df])
# force the key to a string here!
df_all['cluster_id_'] = df_all['cluster_id'].apply(lambda x: rev_clusterid_idcard_dict[str(x)])
del df_all['cluster_id']
df_all.rename(columns={'cluster_id_': 'cluster_id'}, inplace=True)
df_all.sort_values(by='captured_time', inplace=True)
print('=' * 100)
print('full data (before clustering):')
print(df_all.info())
cluster_id_list = [(i, df) for i, df in df_all.groupby(['cluster_id'])]
cluster_id_list_split = [j for j in func(cluster_id_list, 1000000)]
# TODO: shrink the dataset; for debugging only!
data_all = df_all.iloc[:, :]
return data_all, cluster_id_list_split
The two lines to focus on are:
res = pool.map(get_es, input_list)
pool.join()
... (rest omitted)
A few observations:
1. get_es is the method that queries ES. He probably did not write it as an async method, but that is not the point.
2. res holds the query results: everything is fetched in one parallel pass into res, and the results are then unpacked.
3. Note that this is only a single loop; think about how you would write a double loop this way.
4. pool.join() blocks the current thread, which is not great.
5. His comment says "multiprocess". Is that a mistake? Is it really multithreading, or actually multiple processes? (A multiprocessing.Pool does spawn separate processes.)
6. Python does have async/await, and it should be no worse than C#'s; my colleague simply did not use it, perhaps because his Python version was too old.
7. The Python code is full of strings, and strings are the hardest thing to maintain. In the C# version the equivalents are strongly typed.
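On the fourth point: in C# the same fan-out can stay fully non-blocking while still capping how many queries are in flight, using SemaphoreSlim. A sketch under the same simulated-query assumption as before (`FakeQuery` stands in for a real ES call and is not code from this article):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThrottledFanOut
{
    // Simulated ES query with ~50 ms of latency.
    static async Task<int> FakeQuery(int id)
    {
        await Task.Delay(50);
        return id * 2;
    }

    static async Task Main()
    {
        // At most 8 queries in flight at any moment.
        using var gate = new SemaphoreSlim(8);

        // ToList() forces the async lambdas to start; each one
        // waits its turn at the gate without blocking any thread.
        var tasks = Enumerable.Range(0, 40).Select(async id =>
        {
            await gate.WaitAsync();          // asynchronous wait, not a blocking join
            try { return await FakeQuery(id); }
            finally { gate.Release(); }
        }).ToList();

        int[] results = await Task.WhenAll(tasks);
        Console.WriteLine($"sum = {results.Sum()}");  // prints "sum = 1560"
    }
}
```

Unlike pool.join(), nothing here parks a thread: while the 40 simulated queries trickle through the 8-slot gate, the thread pool is free to do other work.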
Following the same recipe, turning brain work into routine work, I wrote another method (the business logic is unimportant; look at how the parallel async is used):
/// <summary>
/// xxx query
/// </summary>
public async Task<List<SameVehicleInfo>> Query(string strStartTime, string strEndTime, int kpCountThreshold, int timeThreshold, List<PeopleCluster> peopleClusterList)
{
List<SameVehicleInfo> resultList = new List<SameVehicleInfo>();
Stopwatch sw = Stopwatch.StartNew();
//start the first-level tasks (outer loop): query xxx
Dictionary<PeopleCluster, Task<List<PeopleFeatureInfo>>> tasks1 = new Dictionary<PeopleCluster, Task<List<PeopleFeatureInfo>>>();
foreach (PeopleCluster people1 in peopleClusterList)
{
var task1 = ServiceFactory.Get<PeopleFeatureQueryService>().Query(strStartTime, strEndTime, people1);
tasks1.Add(people1, task1);
}
//await the first-level tasks, cache their results, and start the second-level tasks: exact search for xxx
Dictionary<string, Task<List<MotorVehicleInfo>>> tasks2 = new Dictionary<string, Task<List<MotorVehicleInfo>>>();
Dictionary<PeopleCluster, List<PeopleFeatureInfo>> cache1 = new Dictionary<PeopleCluster, List<PeopleFeatureInfo>>();
foreach (PeopleCluster people1 in peopleClusterList)
{
List<PeopleFeatureInfo> peopleFeatureList = await tasks1[people1];
cache1.Add(people1, peopleFeatureList);
foreach (PeopleFeatureInfo peopleFeatureInfo1 in peopleFeatureList)
{
string task2Key = $"{peopleFeatureInfo1.camera_id}_{peopleFeatureInfo1.captured_time}";
var task2 = ServiceFactory.Get<MotorVehicleQueryService>().QueryExact(peopleFeatureInfo1.camera_id, peopleFeatureInfo1.captured_time);
tasks2.TryAdd(task2Key, task2);
}
}
//read the cached first-level results and await the second-level tasks
Dictionary<PersonVehicleKey, PersonVehicleInfo> dictPersonVehicle = new Dictionary<PersonVehicleKey, PersonVehicleInfo>();
foreach (PeopleCluster people1 in peopleClusterList)
{
List<PeopleFeatureInfo> peopleFeatureList = cache1[people1];
foreach (PeopleFeatureInfo peopleFeatureInfo1 in peopleFeatureList)
{
string task2Key = $"{peopleFeatureInfo1.camera_id}_{peopleFeatureInfo1.captured_time}";
List<MotorVehicleInfo> motorVehicleList = await tasks2[task2Key];
motorVehicleList = motorVehicleList.DistinctBy(a => a.plate_no).ToList();
foreach (MotorVehicleInfo motorVehicleInfo in motorVehicleList)
{
PersonVehicleKey key = new PersonVehicleKey(people1, motorVehicleInfo.plate_no);
PersonVehicleInfo personVehicleInfo;
if (dictPersonVehicle.ContainsKey(key))
{
personVehicleInfo = dictPersonVehicle[key];
}
else
{
personVehicleInfo = new PersonVehicleInfo()
{
People = people1,
PlateNo = motorVehicleInfo.plate_no,
List = new List<PeopleFeatureInfo>()
};
dictPersonVehicle.Add(key, personVehicleInfo);
}
personVehicleInfo.List.Add(peopleFeatureInfo1);
}
}
}
//compare xxx
List<PersonVehicleKey> keys = dictPersonVehicle.Keys.ToList();
Dictionary<string, SameVehicleInfo> dict = new Dictionary<string, SameVehicleInfo>();
for (int i = 0; i < keys.Count - 1; i++)
{
for (int j = i + 1; j < keys.Count; j++)
{
var key1 = keys[i];
var key2 = keys[j];
var personVehicle1 = dictPersonVehicle[key1];
var personVehicle2 = dictPersonVehicle[key2];
if (key1.PlateNo == key2.PlateNo)
{
foreach (PeopleFeatureInfo peopleFeature1 in personVehicle1.List)
{
double minTimeDiff = double.MaxValue;
int minIndex = -1;
for (int k = 0; k < personVehicle2.List.Count; k++)
{
PeopleFeatureInfo peopleFeature2 = personVehicle2.List[k];
DateTime capturedTime1 = DateTime.ParseExact(peopleFeature1.captured_time, "yyyyMMddHHmmss", CultureInfo.InvariantCulture);
DateTime capturedTime2 = DateTime.ParseExact(peopleFeature2.captured_time, "yyyyMMddHHmmss", CultureInfo.InvariantCulture);
var timeDiff = Math.Abs(capturedTime2.Subtract(capturedTime1).TotalSeconds);
if (timeDiff < minTimeDiff)
{
minTimeDiff = timeDiff;
minIndex = k;
}
}
if (minIndex >= 0 && minTimeDiff <= timeThreshold * 60)
{
PeopleCluster people1 = key1.People;
PeopleCluster people2 = key2.People;
PeopleFeatureInfo peopleFeatureInfo2 = personVehicle2.List[minIndex];
string key = $"{string.Join(",", people1.ClusterIds)}_{string.Join(",", people2.ClusterIds)}";
SameVehicleInfo accompanyInfo;
if (dict.ContainsKey(key))
{
accompanyInfo = dict[key];
}
else
{
accompanyInfo = new SameVehicleInfo();
dict.Add(key, accompanyInfo);
}
accompanyInfo.People1 = people1;
accompanyInfo.People2 = people2;
SameVehicleItem accompanyItem = new SameVehicleItem();
accompanyItem.Info1 = peopleFeature1;
accompanyItem.Info2 = peopleFeatureInfo2;
accompanyInfo.List.Add(accompanyItem);
accompanyInfo.Count++;
resultList.Add(accompanyInfo);
}
}
}
}
}
resultList = resultList.FindAll(a => a.Count >= kpCountThreshold);
//filter: exclude xxx
resultList = resultList.FindAll(a =>
{
if (string.Join(",", a.People1.ClusterIds) == string.Join(",", a.People2.ClusterIds))
{
return false;
}
return true;
});
//deduplicate
int beforeDistinctCount = resultList.Count;
resultList = resultList.DistinctBy(a =>
{
string str1 = string.Join(",", a.People1.ClusterIds);
string str2 = string.Join(",", a.People2.ClusterIds);
StringBuilder sb = new StringBuilder();
foreach (SameVehicleItem item in a.List)
{
var info2 = item.Info2;
sb.Append($"{info2.camera_id},{info2.captured_time},{info2.cluster_id}");
}
return $"{str1}_{str2}_{sb}";
}).ToList();
//sort
foreach (SameVehicleInfo item in resultList)
{
item.List.Sort((a, b) => -string.Compare(a.Info1.captured_time, b.Info1.captured_time));
}
sw.Stop();
string msg = $"xxx query, elapsed: {sw.Elapsed.TotalSeconds:0.000} s, queries: {tasks1.Count + tasks2.Count}, dedup: {beforeDistinctCount}-->{resultList.Count}";
Console.WriteLine(msg);
LogUtil.Info(msg);
return resultList;
}
That is all for "Exploring: elegantly parallelizing async methods". I hope it helps, and thanks for supporting the blog.