I'm following this tutorial (https://github.com/drginm/docker-boilerplates/tree/master/mongodb-replicaset) to get a three-instance MongoDB replica set working in docker-compose.
Here are the steps I've taken so far:
1) I copied the setup and mongo-rs0-1 folders into my root directory.
2) I added the three mongo instances and the setup instance to my docker-compose file, which now looks like this:
version: '3'
services:
  mongo-rs0-1:
    image: "mongo-start"
    build: ./mongo-rs0-1
    ports:
      - "27017:27017"
    volumes:
      - ./mongo-rs0-1/data:/data/db
    networks:
      - app-network
    depends_on:
      - "mongo-rs0-2"
      - "mongo-rs0-3"
  mongo-rs0-2:
    image: "mongo"
    command: --replSet rs0 --smallfiles --oplogSize 128
    networks:
      - app-network
    ports:
      - "27018:27017"
    volumes:
      - ./mongo-rs0-2/data:/data/db
  mongo-rs0-3:
    image: "mongo"
    command: --replSet rs0 --smallfiles --oplogSize 128
    networks:
      - app-network
    ports:
      - "27019:27017"
    volumes:
      - ./mongo-rs0-3/data:/data/db
  setup-rs:
    image: "setup-rs"
    build: ./setup
    networks:
      - app-network
    depends_on:
      - "mongo-rs0-1"
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
    networks:
      - app-network
    depends_on:
      - setup-rs
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
    networks:
      - app-network
    depends_on:
      - nodejs
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./picFolder:/picFolder
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
      - nextjs
      - setup-rs
    networks:
      - app-network

volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind

networks:
  app-network:
    driver: bridge
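For context, the setup-rs container is built from the boilerplate's setup folder, and its job is to initiate the replica set once the three mongo containers are reachable. I haven't reproduced the actual script here, but conceptually it boils down to running something like the following in a mongo shell against mongo-rs0-1 (a sketch only; the hostnames are assumed to match the compose service names above):

// A minimal sketch of the initiation the setup step is expected to perform.
// Run in a mongo shell, e.g.: mongo --host mongo-rs0-1 init-rs.js
// (init-rs.js is a hypothetical filename; the real script ships with the repo.)
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-rs0-1:27017" },
    { _id: 1, host: "mongo-rs0-2:27017" },
    { _id: 2, host: "mongo-rs0-3:27017" }
  ]
});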
3) I also have an nginx.conf file, which I've included here for completeness:
server {
  listen 80;
  listen [::]:80;

  add_header 'Access-Control-Allow-Origin' '*';
  add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;

  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;

  server_name example.com www.example.com localhost;

  location /socket.io {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_pass http://nodejs:8000/socket.io/;
  }

  location /back {
    proxy_connect_timeout 75s;
    proxy_read_timeout 75s;
    proxy_send_timeout 75s;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_pass http://nodejs:8000/back/;
  }

  location /staticBack {
    alias /picFolder;
    expires 1y;
    access_log off;
    add_header Cache-Control "public";
  }

  location / {
    proxy_pass http://nextjs:3000;
  }

  location ~ /.well-known/acme-challenge {
    allow all;
    root /var/www/html;
  }
}
4) In my Node service I connect with 'mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test', as shown in the boilerplate (https://github.com/drginm/docker-boilerplates/blob/master/mongodb-replicaset/web-site/database.js):

mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test");

This fails with:
MongoDB connection error: { MongoError: no mongos proxy available
at Timeout.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/mongos.js:757:28)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }
The mongo.conf file (https://github.com/drginm/docker-boilerplates/blob/master/mongodb-replicaset/mongo-rs0-1/mongo.conf) seems to indicate that the replica set name is rs0, so I then connected with:

mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test?replicaSet=rs0");

This fails with:
MongoDB connection error: { MongoError: no primary found in replicaset or invalid replica set name
at /var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:636:11
at Server.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:357:9)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Server.emit (events.js:211:7)
at /var/www/back/node_modules/mongodb-core/lib/topologies/server.js:508:16
at /var/www/back/node_modules/mongodb-core/lib/connection/pool.js:532:18
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }
Next, I tried passing some options to the connection:

var options = {
  native_parser: true,
  auto_reconnect: false,
  poolSize: 10,
  connectWithNoPrimary: true,
  sslValidate: false,
  socketOptions: {
    keepAlive: 1000,
    connectTimeoutMS: 30000
  }
};
mongoose.connect("mongodb://mongo-rs0-1:27017,mongo-rs0-2:27017,mongo-rs0-3:27017/test?replicaSet=rs0", options);
Here connectWithNoPrimary: true seemed particularly important, because nodejs races the mongo services as they start up from Docker, and they may not have elected a primary by the time it connects. However, I now get:
MongoDB connection error: { MongoError: no secondary found in replicaset or invalid replica set name
at /var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:649:11
at Server.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:357:9)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Server.emit (events.js:211:7)
at /var/www/back/node_modules/mongodb-core/lib/topologies/server.js:508:16
at /var/www/back/node_modules/mongodb-core/lib/connection/pool.js:532:18
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }
Adding connectWithNoSecondary does nothing and produces the same error - I don't believe it's a valid option. I'm still stuck, and any help would be appreciated.
Since the error is now MongoError: no secondary found in replicaset or invalid replica set name, I no longer think the problem is a race condition on connect - at least that doesn't appear to be the current error. Here is my current connection code:
var options = {
  native_parser: true,
  auto_reconnect: false,
  poolSize: 10,
  connectWithNoPrimary: true,
  sslValidate: false
};

// mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/?replicaSet=rs0", { useNewUrlParser: true, connectWithNoPrimary: true });
const connectFunc = () => {
  mongoose.connect("mongodb://mongo-rs0-1:27017,mongo-rs0-2:27017,mongo-rs0-3:27017/test?replicaSet=rs0", options);
  mongoose.Promise = global.Promise;
  var db = mongoose.connection;
  db.on('error', (error) => {
    console.log('MongoDB connection error:', error);
    console.log('now calling connectFunc() again');
    connectFunc();
  });
  db.once('open', function() {
    // we're connected!
    console.log('connected to mongoose db');
  });
}

connectFunc()
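Note that each failed attempt above calls connectFunc() again immediately, so the retry can spin in a tight loop while the containers are still starting. A variation with a fixed back-off delay (my own sketch, not from the boilerplate; the five-second delay is arbitrary) would look like:

// Same retry loop, but wait between attempts so a cold replica set has
// time to finish its election before the next connect.
const RETRY_DELAY_MS = 5000;

const connectWithDelay = () => {
  mongoose.connect("mongodb://mongo-rs0-1:27017,mongo-rs0-2:27017,mongo-rs0-3:27017/test?replicaSet=rs0", options);
  const db = mongoose.connection;
  db.once('error', (error) => {
    console.log('MongoDB connection error:', error);
    setTimeout(connectWithDelay, RETRY_DELAY_MS);
  });
  db.once('open', () => {
    console.log('connected to mongoose db');
  });
};

connectWithDelay();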
For reference, here is the output of docker ps:
patientplatypus:~/Documents/patientplatypus.com/forum/back:19:47:11$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d46cfb5e1927 nginx:mainline-alpine "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp webserver
6798fe1f6093 back_nextjs "npm start" 3 minutes ago Up 3 minutes 0.0.0.0:3000->3000/tcp nextjs
ab6888f703c7 back_nodejs "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:8000->8000/tcp nodejs
48131a82b34e mongo-start "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:27017->27017/tcp mongo1
312772b1b0f1 mongo "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:27019->27017/tcp mongo3
9fe9a16eb20e mongo "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:27018->27017/tcp mongo2
patientplatypus:~/Documents/patientplatypus.com/forum/back:19:48:55$docker logs 9fe9a16eb20e
2019-04-12T00:45:29.689+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-04-12T00:45:29.727+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=9fe9a16eb20e
2019-04-12T00:45:29.728+0000 I CONTROL [initandlisten] db version v4.0.8
2019-04-12T00:45:29.728+0000 I CONTROL [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
2019-04-12T00:45:29.728+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2019-04-12T00:45:29.728+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-04-12T00:45:29.728+0000 I CONTROL [initandlisten] modules: none
2019-04-12T00:45:29.729+0000 I CONTROL [initandlisten] build environment:
2019-04-12T00:45:29.729+0000 I CONTROL [initandlisten] distmod: ubuntu1604
2019-04-12T00:45:29.729+0000 I CONTROL [initandlisten] distarch: x86_64
2019-04-12T00:45:29.729+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-04-12T00:45:29.729+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs" }, storage: { mmapv1: { smallFiles: true } } }
2019-04-12T00:45:29.734+0000 W STORAGE [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
2019-04-12T00:45:29.738+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-04-12T00:45:29.741+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2019-04-12T00:45:29.742+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1461M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-04-12T00:45:43.165+0000 I STORAGE [initandlisten] WiredTiger message [1555029943:165420][1:0x7f7051ca9a40], txn-recover: Main recovery loop: starting at 7/4608 to 8/256
2019-04-12T00:45:43.214+0000 I STORAGE [initandlisten] WiredTiger message [1555029943:214706][1:0x7f7051ca9a40], txn-recover: Recovering log 7 through 8
2019-04-12T00:45:43.787+0000 I STORAGE [initandlisten] WiredTiger message [1555029943:787329][1:0x7f7051ca9a40], txn-recover: Recovering log 8 through 8
2019-04-12T00:45:43.849+0000 I STORAGE [initandlisten] WiredTiger message [1555029943:849811][1:0x7f7051ca9a40], txn-recover: Set global recovery timestamp: 0
2019-04-12T00:45:43.892+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-04-12T00:45:43.972+0000 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-04-12T00:45:43.972+0000 I CONTROL [initandlisten]
2019-04-12T00:45:43.972+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-04-12T00:45:43.972+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-04-12T00:45:43.973+0000 I CONTROL [initandlisten]
2019-04-12T00:45:44.035+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-04-12T00:45:44.054+0000 I REPL [initandlisten] Did not find local voted for document at startup.
2019-04-12T00:45:44.064+0000 I REPL [initandlisten] Rollback ID is 1
2019-04-12T00:45:44.064+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2019-04-12T00:45:44.065+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2019-04-12T00:45:44.065+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2019-04-12T00:45:44.069+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2019-04-12T00:45:45.080+0000 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
This line from the logs stands out:
Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
If I run docker exec -it mongo1 mongo and then rs.status(), I get the following output:
{
  "operationTime" : Timestamp(0, 0),
  "ok" : 0,
  "errmsg" : "no replset config has been received",
  "code" : 94,
  "codeName" : "NotYetInitialized",
  "$clusterTime" : {
    "clusterTime" : Timestamp(0, 0),
    "signature" : {
      "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId" : NumberLong(0)
    }
  }
}
This looks very similar to the error from the startup logs (2019-04-12T00:45:44.064+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset). Does anyone know what it thinks is missing?
Best Answer
You need to initialise the replica set before you can access it; otherwise the application will not be able to connect to the local instances.
Ideally, you should also allow some time between the replica set configuration (the setup-rs step) and the steps that depend on it, because configuring the replica set can take longer than the application takes to start.
If the setup script itself is faulty, fix the script.
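To put the "allow some time" advice into practice, the application can gate its own startup on the replica set actually having elected a primary, rather than sleeping for a fixed interval. A rough sketch with the Node MongoDB driver (the waitForPrimary helper and the hostname are my own, not part of the boilerplate):

// wait-for-rs.js - block until one member reports an elected primary.
// Connects to a single member directly (no replicaSet option) so the
// check succeeds even while the set is still forming.
const { MongoClient } = require('mongodb');

async function waitForPrimary(uri, retries = 30, delayMs = 2000) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const client = await MongoClient.connect(uri, { useNewUrlParser: true });
      // isMaster reports the primary once initiation and election finish.
      const status = await client.db('admin').command({ isMaster: 1 });
      await client.close();
      if (status.ismaster || status.primary) {
        return;
      }
    } catch (err) {
      // Member not reachable yet; fall through to the delay below.
    }
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error('replica set never elected a primary');
}

waitForPrimary('mongodb://mongo-rs0-1:27017/admin')
  .then(() => console.log('primary elected; safe to start the app'))
  .catch(err => {
    console.error(err);
    process.exit(1);
  });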
For more on node.js - "MongoError: no mongos proxy available" when connecting to a replica set, see the similar question on Stack Overflow: https://stackoverflow.com/questions/55637808/