I'm trying to set up a Redis cluster to use with Node.JS (ioredis/cluster), but it doesn't seem to work.
I'm on GKE, v1.11.8-gke.6.
I'm doing exactly what the redis-ha chart documentation says:
~ helm install --set replicas=3 --name redis-test stable/redis-ha
NAME: redis-test
LAST DEPLOYED: Fri Apr 26 00:13:31 2019
NAMESPACE: yt
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
redis-test-redis-ha-configmap 3 0s
redis-test-redis-ha-probes 2 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
redis-test-redis-ha-server-0 0/2 Init:0/1 0 0s
==> v1/Role
NAME AGE
redis-test-redis-ha 0s
==> v1/RoleBinding
NAME AGE
redis-test-redis-ha 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-test-redis-ha ClusterIP None <none> 6379/TCP,26379/TCP 0s
redis-test-redis-ha-announce-0 ClusterIP 10.7.244.34 <none> 6379/TCP,26379/TCP 0s
redis-test-redis-ha-announce-1 ClusterIP 10.7.251.35 <none> 6379/TCP,26379/TCP 0s
redis-test-redis-ha-announce-2 ClusterIP 10.7.252.94 <none> 6379/TCP,26379/TCP 0s
==> v1/ServiceAccount
NAME SECRETS AGE
redis-test-redis-ha 1 0s
==> v1/StatefulSet
NAME READY AGE
redis-test-redis-ha-server 0/3 0s
NOTES:
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
redis-test-redis-ha.yt.svc.cluster.local
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
kubectl exec -it redis-test-redis-ha-server-0 sh -n yt
2. Connect using the Redis CLI:
redis-cli -h redis-test-redis-ha.yt.svc.cluster.local
~ k get pods | grep redis-test
redis-test-redis-ha-server-0 2/2 Running 0 1m
redis-test-redis-ha-server-1 2/2 Running 0 1m
redis-test-redis-ha-server-2 2/2 Running 0 54s
~ kubectl exec -it redis-test-redis-ha-server-0 sh -n yt
Defaulting container name to redis.
Use 'kubectl describe pod/redis-test-redis-ha-server-0 -n yt' to see all of the containers in this pod.
/data $ redis-cli -h redis-test-redis-ha.yt.svc.cluster.local
redis-test-redis-ha.yt.svc.cluster.local:6379> set test key
(error) READONLY You can't write against a read only replica.
But in the end only one random pod is writable when I connect. I pulled the logs on several containers and everything there looks fine. I tried running cluster info in redis-cli, but on every instance I get ERR This instance has cluster support disabled.
Logs:
~ k logs pod/redis-test-redis-ha-server-0 redis
1:C 25 Apr 2019 20:13:43.604 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 25 Apr 2019 20:13:43.604 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 25 Apr 2019 20:13:43.604 # Configuration loaded
1:M 25 Apr 2019 20:13:43.606 * Running mode=standalone, port=6379.
1:M 25 Apr 2019 20:13:43.606 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 25 Apr 2019 20:13:43.606 # Server initialized
1:M 25 Apr 2019 20:13:43.606 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 25 Apr 2019 20:13:43.627 * DB loaded from disk: 0.021 seconds
1:M 25 Apr 2019 20:13:43.627 * Ready to accept connections
1:M 25 Apr 2019 20:14:11.801 * Replica 10.7.251.35:6379 asks for synchronization
1:M 25 Apr 2019 20:14:11.801 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for 'c2827ffe011d774db005a44165bac67a7e7f7d85', my replication IDs are '8311a1ca896e97d5487c07f2adfd7d4ef924f36b' and '0000000000000000000000000000000000000000')
1:M 25 Apr 2019 20:14:11.802 * Delay next BGSAVE for diskless SYNC
1:M 25 Apr 2019 20:14:17.825 * Starting BGSAVE for SYNC with target: replicas sockets
1:M 25 Apr 2019 20:14:17.825 * Background RDB transfer started by pid 55
55:C 25 Apr 2019 20:14:17.826 * RDB: 0 MB of memory used by copy-on-write
1:M 25 Apr 2019 20:14:17.926 * Background RDB transfer terminated with success
1:M 25 Apr 2019 20:14:17.926 # Slave 10.7.251.35:6379 correctly received the streamed RDB file.
1:M 25 Apr 2019 20:14:17.926 * Streamed RDB transfer with replica 10.7.251.35:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming
1:M 25 Apr 2019 20:14:18.828 * Synchronization with replica 10.7.251.35:6379 succeeded
1:M 25 Apr 2019 20:14:42.711 * Replica 10.7.252.94:6379 asks for synchronization
1:M 25 Apr 2019 20:14:42.711 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for 'c2827ffe011d774db005a44165bac67a7e7f7d85', my replication IDs are 'af453adde824b2280ba66adb40cc765bf390e237' and '0000000000000000000000000000000000000000')
1:M 25 Apr 2019 20:14:42.711 * Delay next BGSAVE for diskless SYNC
1:M 25 Apr 2019 20:14:48.976 * Starting BGSAVE for SYNC with target: replicas sockets
1:M 25 Apr 2019 20:14:48.977 * Background RDB transfer started by pid 125
125:C 25 Apr 2019 20:14:48.978 * RDB: 0 MB of memory used by copy-on-write
1:M 25 Apr 2019 20:14:49.077 * Background RDB transfer terminated with success
1:M 25 Apr 2019 20:14:49.077 # Slave 10.7.252.94:6379 correctly received the streamed RDB file.
1:M 25 Apr 2019 20:14:49.077 * Streamed RDB transfer with replica 10.7.252.94:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming
1:M 25 Apr 2019 20:14:49.761 * Synchronization with replica 10.7.252.94:6379 succeeded
~ k logs pod/redis-test-redis-ha-server-1 redis
1:C 25 Apr 2019 20:14:11.780 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 25 Apr 2019 20:14:11.781 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 25 Apr 2019 20:14:11.781 # Configuration loaded
1:S 25 Apr 2019 20:14:11.786 * Running mode=standalone, port=6379.
1:S 25 Apr 2019 20:14:11.791 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:S 25 Apr 2019 20:14:11.791 # Server initialized
1:S 25 Apr 2019 20:14:11.791 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:S 25 Apr 2019 20:14:11.792 * DB loaded from disk: 0.001 seconds
1:S 25 Apr 2019 20:14:11.792 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 25 Apr 2019 20:14:11.792 * Ready to accept connections
1:S 25 Apr 2019 20:14:11.792 * Connecting to MASTER 10.7.244.34:6379
1:S 25 Apr 2019 20:14:11.792 * MASTER <-> REPLICA sync started
1:S 25 Apr 2019 20:14:11.792 * Non blocking connect for SYNC fired the event.
1:S 25 Apr 2019 20:14:11.793 * Master replied to PING, replication can continue...
1:S 25 Apr 2019 20:14:11.799 * Trying a partial resynchronization (request c2827ffe011d774db005a44165bac67a7e7f7d85:6006176).
1:S 25 Apr 2019 20:14:17.824 * Full resync from master: af453adde824b2280ba66adb40cc765bf390e237:722
1:S 25 Apr 2019 20:14:17.824 * Discarding previously cached master state.
1:S 25 Apr 2019 20:14:17.852 * MASTER <-> REPLICA sync: receiving streamed RDB from master
1:S 25 Apr 2019 20:14:17.853 * MASTER <-> REPLICA sync: Flushing old data
1:S 25 Apr 2019 20:14:17.853 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 25 Apr 2019 20:14:17.853 * MASTER <-> REPLICA sync: Finished with success
What am I missing, or is there a better way to do clustering?
Best answer
This isn't the best solution, but I figured I could just use Sentinel instead of looking for another approach (there may not be one). Sentinel is supported by clients in most languages, so it shouldn't be hard (except with redis-cli; I don't know how to query a Sentinel server from there).
Here is how I did it with ioredis (Node.js; sorry if you're not familiar with ES6 syntax):
import * as IORedis from 'ioredis';
import Redis from 'ioredis';
import { redisHost, redisPassword, redisPort } from './config';

export function getRedisConfig(): IORedis.RedisOptions {
  // I'm not sure how to set this properly
  // ioredis/cluster automatically resolves all pods by hostname, but not this.
  // So I have to explicitly specify all pods.
  // Or resolve them all by hostname
  return {
    sentinels: process.env.REDIS_CLUSTER.split(',').map(d => {
      const [host, port = 26379] = d.split(':');
      return { host, port: Number(port) };
    }),
    name: process.env.REDIS_MASTER_NAME || 'mymaster',
    ...(redisPassword ? { password: redisPassword } : {}),
  };
}

export async function initializeRedis() {
  if (process.env.REDIS_CLUSTER) {
    const cluster = new Redis(getRedisConfig());
    return cluster;
  }
  // For dev environment
  const client = new Redis(redisPort, redisHost);
  if (redisPassword) {
    await client.auth(redisPassword);
  }
  return client;
}
And in the environment:
env:
  - name: REDIS_CLUSTER
    value: redis-redis-ha-server-1.redis-redis-ha.yt.svc.cluster.local:26379,redis-redis-ha-server-0.redis-redis-ha.yt.svc.cluster.local:26379,redis-redis-ha-server-2.redis-redis-ha.yt.svc.cluster.local:26379
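As an aside, the comments in getRedisConfig above mention resolving the pods by hostname instead of listing each one in REDIS_CLUSTER. Below is a minimal sketch of that idea, assuming the chart's headless service (redis-redis-ha.yt.svc.cluster.local for this release name) resolves to the pod IPs and that each pod serves Sentinel on the default port 26379, as the service output above suggests; the helper name and default arguments are hypothetical:

import { promises as dns } from 'dns';

// Hypothetical helper: resolve the chart's headless service to pod IPs
// and treat each one as a Sentinel endpoint on the default port 26379.
export async function resolveSentinels(
  serviceHost = 'redis-redis-ha.yt.svc.cluster.local',
  sentinelPort = 26379,
): Promise<Array<{ host: string; port: number }>> {
  const ips = await dns.resolve4(serviceHost); // one A record per ready pod
  return ips.map(ip => ({ host: ip, port: sentinelPort }));
}

The resolved list could then be passed as sentinels instead of splitting REDIS_CLUSTER, at the cost of a DNS lookup at startup.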
You may also want to protect it with a password.
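For completeness, a rough sketch of how the client returned by initializeRedis might be used. Nothing here is specific to the chart; it relies on the documented ioredis behavior that, given sentinels and name, the client asks Sentinel for the current master and sends commands there. The './redis' module path is hypothetical:

import { initializeRedis } from './redis'; // hypothetical path to the module above

async function main() {
  const redis = await initializeRedis();

  // Writes go to whichever pod Sentinel currently reports as master,
  // so the READONLY error from connecting to a replica goes away.
  await redis.set('test', 'key');
  console.log(await redis.get('test')); // "key"

  redis.disconnect();
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});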
Regarding "deployment - Why doesn't clustering on k8s via redis-ha work?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55857202/