Usage of the org.apache.helix.controller.rebalancer.constraint.dataprovider.ZkBasedPartitionWeightProvider class, with code examples

Reposted. Author: 知者. Updated: 2024-03-13 09:47:11

This article collects Java code examples for the org.apache.helix.controller.rebalancer.constraint.dataprovider.ZkBasedPartitionWeightProvider class and shows how it is used. The examples are drawn from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the ZkBasedPartitionWeightProvider class:
Package: org.apache.helix.controller.rebalancer.constraint.dataprovider
Class name: ZkBasedPartitionWeightProvider

About ZkBasedPartitionWeightProvider

A resource weight provider backed by a ZK node. The class supports persistence through the Helix Property Store.
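The provider resolves a partition's weight by falling back through three levels: a partition-specific weight, then the resource's default weight, then a global default. A minimal in-memory sketch of that lookup order, with no ZK dependency (the class and method names below are illustrative, not Helix APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory analogue of the lookup order:
// partition-specific weight -> resource default -> global default.
public class InMemoryWeightProvider {
  private final Map<String, Map<String, Integer>> partitionWeights = new HashMap<>();
  private final Map<String, Integer> resourceDefaults = new HashMap<>();
  private final int defaultWeight;

  public InMemoryWeightProvider(int defaultWeight) {
    this.defaultWeight = defaultWeight;
  }

  public void setPartitionWeight(String resource, String partition, int weight) {
    partitionWeights.computeIfAbsent(resource, r -> new HashMap<>()).put(partition, weight);
  }

  public void setResourceDefault(String resource, int weight) {
    resourceDefaults.put(resource, weight);
  }

  public int getPartitionWeight(String resource, String partition) {
    Map<String, Integer> perPartition = partitionWeights.get(resource);
    if (perPartition != null && perPartition.containsKey(partition)) {
      return perPartition.get(partition); // partition-specific override
    }
    return resourceDefaults.getOrDefault(resource, defaultWeight);
  }

  public static void main(String[] args) {
    InMemoryWeightProvider provider = new InMemoryWeightProvider(1);
    provider.setResourceDefault("db", 5);
    provider.setPartitionWeight("db", "db_0", 9);
    System.out.println(provider.getPartitionWeight("db", "db_0"));   // partition override: 9
    System.out.println(provider.getPartitionWeight("db", "db_1"));   // resource default: 5
    System.out.println(provider.getPartitionWeight("cache", "c_0")); // global default: 1
  }
}
```

The real class layers ZK-backed persistence on top of this lookup; the examples below show that side.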

Code examples

Code example source: org.apache.helix/helix-core

ZkBasedPartitionWeightProvider qpsWeightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDRESS, CLUSTER_NAME, "QPS");
qpsWeightProvider.updateWeights(Collections.EMPTY_MAP, Collections.EMPTY_MAP, resourceWeight);
ZkBasedCapacityProvider qpsCapacityProvider =
    new ZkBasedCapacityProvider(ZK_ADDRESS, CLUSTER_NAME, "QPS");
ZkBasedPartitionWeightProvider memoryWeightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDRESS, CLUSTER_NAME, "MEM");
memoryWeightProvider.updateWeights(Collections.EMPTY_MAP, Collections.EMPTY_MAP, resourceWeight);
ZkBasedCapacityProvider memoryCapacityProvider =
    new ZkBasedCapacityProvider(ZK_ADDRESS, CLUSTER_NAME, "MEM");
// Persist weight and capacity information to ZK.
qpsWeightProvider.persistWeights();
memoryCapacityProvider.persistCapacity();
memoryWeightProvider.persistWeights();
// Re-create the providers; the persisted values are loaded back from ZK.
qpsWeightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDRESS, CLUSTER_NAME, "QPS");
qpsCapacityProvider =
    new ZkBasedCapacityProvider(ZK_ADDRESS, CLUSTER_NAME, "QPS");
memoryWeightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDRESS, CLUSTER_NAME, "MEM");
memoryCapacityProvider =
    new ZkBasedCapacityProvider(ZK_ADDRESS, CLUSTER_NAME, "MEM");

Code example source: apache/helix

ZkBasedPartitionWeightProvider weightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDR, CLUSTER_NAME, "Test");
weightProvider.updateWeights(resourceDefaultWeightMap, partitionWeightMap, resourceWeight);
weightProvider.persistWeights();
// Reload from ZK and verify the persisted weights.
weightProvider = new ZkBasedPartitionWeightProvider(ZK_ADDR, CLUSTER_NAME, "Test");
validateWeight(weightProvider);
// A provider for an unknown dimension falls back to the default weight.
weightProvider = new ZkBasedPartitionWeightProvider(ZK_ADDR, CLUSTER_NAME, "Fack");
for (String resource : resourceNames) {
  for (String partition : partitions) {
    Assert.assertEquals(weightProvider.getPartitionWeight(resource, partition),
        DEFAULT_WEIGHT_VALUE);
  }
}
// Persisting an invalid (negative) weight should throw a HelixException.
weightProvider.updateWeights(Collections.EMPTY_MAP, Collections.EMPTY_MAP, -1);
try {
  weightProvider.persistWeights();
  Assert.fail("Should fail to persist invalid weight information.");
} catch (HelixException hex) {
  // expected
}

Code example source: apache/helix

@Test
public void testRebalanceUsingZkDataProvider() {
 // capacity / weight
 Map<String, Integer> capacity = new HashMap<>();
 for (String instance : instanceNames) {
  capacity.put(instance, defaultCapacity);
 }
 ZkBasedPartitionWeightProvider weightProvider =
   new ZkBasedPartitionWeightProvider(ZK_ADDR, CLUSTER_NAME, "QPS");
 weightProvider.updateWeights(Collections.EMPTY_MAP, Collections.EMPTY_MAP, resourceWeight);
 ZkBasedCapacityProvider capacityProvider =
   new ZkBasedCapacityProvider(ZK_ADDR, CLUSTER_NAME, "QPS");
 capacityProvider.updateCapacity(capacity, Collections.EMPTY_MAP, 0);
 TotalCapacityConstraint capacityConstraint =
   new TotalCapacityConstraint(weightProvider, capacityProvider);
 PartitionWeightAwareEvennessConstraint evenConstraint =
   new PartitionWeightAwareEvennessConstraint(weightProvider, capacityProvider);
 WeightAwareRebalanceUtil util = new WeightAwareRebalanceUtil(clusterConfig, instanceConfigs);
 ResourcesStateMap assignment = util.buildIncrementalRebalanceAssignment(resourceConfigs, null,
   Collections.<AbstractRebalanceHardConstraint>singletonList(capacityConstraint),
   Collections.<AbstractRebalanceSoftConstraint>singletonList(evenConstraint));
 Map<String, Integer> weightCount = checkPartitionUsage(assignment, weightProvider);
 int max = Collections.max(weightCount.values());
 int min = Collections.min(weightCount.values());
 // Since the accuracy of Default evenness constraint is 0.01, diff should be 1/100 of participant capacity in max.
 Assert.assertTrue((max - min) <= defaultCapacity / 100);
}
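The assertion at the end of the test above reduces to comparing the maximum and minimum assigned weight per instance: with the default evenness accuracy of 0.01, the spread should stay within 1% of per-instance capacity. A standalone sketch of that check, using hypothetical numbers and no Helix dependency:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class EvennessCheck {
  public static void main(String[] args) {
    int defaultCapacity = 1000; // hypothetical per-instance capacity
    // Hypothetical total assigned weight per instance after a rebalance.
    Map<String, Integer> weightCount = new HashMap<>();
    weightCount.put("instance0", 498);
    weightCount.put("instance1", 502);
    weightCount.put("instance2", 500);
    int max = Collections.max(weightCount.values());
    int min = Collections.min(weightCount.values());
    // Default evenness accuracy is 0.01, so max - min should be
    // at most 1/100 of the per-instance capacity.
    boolean even = (max - min) <= defaultCapacity / 100;
    System.out.println("diff=" + (max - min) + " even=" + even);
  }
}
```

Here the spread is 4 against an allowed budget of 10, so the assignment passes the evenness check.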

Code example source: apache/helix

ZkBasedPartitionWeightProvider weightProvider =
    new ZkBasedPartitionWeightProvider(ZK_ADDR, CLUSTER_NAME, "QPS");
weightProvider.updateWeights(Collections.EMPTY_MAP, Collections.EMPTY_MAP, resourceWeight);


