Replica sets:
They differ from master-slave replication in two ways: 1. there is no fixed primary database; 2. if the current primary goes down, the set elects one of the secondaries to take over as primary.
This gives the cluster automatic failover.
The environment for building the replica set is as follows.
Existing MongoDB cluster:
mongos 192.168.12.107:27021
config 192.168.12.107:27018
shard0000 192.168.12.104:27017
shard0001 192.168.12.104:27019
shard0002 192.168.12.104:27023
The usr collection in the testdb database is sharded on the name key.
The goal is to give shard0000 (192.168.12.104:27017) a secondary at 192.168.12.104:27030 and an arbiter at 192.168.12.104:27033, and then add the resulting replica set back into the sharded cluster.
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("541a47a9124d847f09e99204")
}
shards:
{ "_id" : "shard0000", "host" : "192.168.12.104:27017" }
{ "_id" : "shard0001", "host" : "192.168.12.104:27019" }
{ "_id" : "shard0002", "host" : "192.168.12.104:27023" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "testdb", "partitioned" : true, "primary" : "shard0000" }
testdb.usr
shard key: { "name" : 1 }
chunks:
shard0000 5
shard0001 4
shard0002 5
{ "name" : { "$minKey" : 1 } } -->> { "name" : "ava0" } on : shard0000 Timestamp(6, 0)
{ "name" : "ava0" } -->> { "name" : "bobo0" } on : shard0001 Timestamp(3, 1)
{ "name" : "bobo0" } -->> { "name" : "mengtian57493" } on : shard0001 Timestamp(4, 0)
{ "name" : "mengtian57493" } -->> { "name" : "师傅10885" } on : shard0001 Timestamp(7, 0)
{ "name" : "师傅10885" } -->> { "name" : "师傅18160" } on : shard0000 Timestamp(8, 0)
{ "name" : "师傅18160" } -->> { "name" : "师傅23331" } on : shard0001 Timestamp(9, 0)
{ "name" : "师傅23331" } -->> { "name" : "师傅337444" } on : shard0000 Timestamp(10, 0)
{ "name" : "师傅337444" } -->> { "name" : "师傅47982" } on : shard0002 Timestamp(10, 1)
{ "name" : "师傅47982" } -->> { "name" : "师傅583953" } on : shard0002 Timestamp(8, 2)
{ "name" : "师傅583953" } -->> { "name" : "师傅688087" } on : shard0002 Timestamp(9, 2)
{ "name" : "师傅688087" } -->> { "name" : "师傅792944" } on : shard0002 Timestamp(9, 4)
{ "name" : "师傅792944" } -->> { "name" : "张荣达16836" } on : shard0002 Timestamp(9, 5)
{ "name" : "张荣达16836" } -->> { "name" : "张荣达9999" } on : shard0000 Timestamp(5, 1)
{ "name" : "张荣达9999" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(3, 3)
1. Create the data directories and start the two mongod processes that will form the replica set
[root@testvip1 ~]# ps -ef|grep mongo
root 3399 18112 0 15:11 pts/0 00:00:02 mongod --dbpath=/mongo_zc --port 27033 --replSet fubentina/192.168.12.104:27017 --the arbiter mongod
root 4547 1 0 Sep24 ? 00:03:49 mongod --dbpath=/mongo3 --port 27023 --fork --logpath=/mongo3/log3/log3.txt --rest --the --rest option enables the HTTP monitoring interface and is optional
root 4846 1 0 Sep24 ? 00:03:34 mongod --dbpath=/mongo2 --port 27019 --fork --logpath=/mongo2/log2/log2.txt
root 5259 1 0 15:16 ? 00:00:01 mongod --dbpath=/mongo1 --port 27017 --fork --logpath=/mongo1/log1/log1.txt --replSet fubentina/192.168.12.104:27030 --27017 restarted with the new --replSet option
root 5489 3464 0 15:21 pts/2 00:00:00 grep mongo
root 18446 1 0 Sep24 ? 00:09:04 mongod --dbpath=/mongo1_fuben --port 27030 --fork --logpath=/mongo1_fuben/logfuben.txt --replSet fubentina/192.168.12.104:27017 --the new secondary started on 27030
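A quick way to confirm that each mongod really picked up its --replSet option is to ask the running process for its parsed command line (an extra check, not in the original session; db.serverCmdLineOpts() wraps the getCmdLineOpts command):
> db.serverCmdLineOpts().parsed --the result should include the replSet value, e.g. "fubentina/192.168.12.104:27030"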
2. Initiate the replica set configuration (connect to the admin database of any member of the set)
[root@testvip1 mongo3]# mongo 192.168.12.104:27017/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27017/admin
> db.runCommand({"replSetInitiate":{"_id":"fubentina","members":[{"_id":1,"host":"192.168.12.104:27017"},{"_id":2,"host":"192.168.12.104:27030"}]}})
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
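For reference, the rs.initiate() shell helper wraps the same replSetInitiate command, so an equivalent way to run this step would be (a sketch with the same member list as above):
> rs.initiate({"_id":"fubentina","members":[{"_id":1,"host":"192.168.12.104:27017"},{"_id":2,"host":"192.168.12.104:27030"}]})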
3. Start the arbiter server
Point its --replSet option at any existing member of fubentina:
[root@testvip1 /]# mongod --dbpath=/mongo_zc --port 27033 --replSet fubentina/192.168.12.104:27017
4. Connecting to 27017 again now shows that the node has become the primary of the set:
[root@testvip1 mongo3]# mongo 192.168.12.104:27017/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27017/admin
fubentina:PRIMARY>
5. Add the arbiter node
fubentina:PRIMARY> rs.addArb("192.168.12.104:27033")
{ "ok" : 1 }
6. Use rs.status() to check the state of each member of the set.
fubentina:PRIMARY> rs.status()
{
"set" : "fubentina",
"date" : ISODate("2014-09-24T09:22:38Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "192.168.12.104:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", --主
"uptime" : 413,
"optime" : Timestamp(1411550517, 1),
"optimeDate" : ISODate("2014-09-24T09:21:57Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.12.104:27030",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", --从
"uptime" : 373,
"optime" : Timestamp(1411550517, 1),
"optimeDate" : ISODate("2014-09-24T09:21:57Z"),
"lastHeartbeat" : ISODate("2014-09-24T09:22:37Z"),
"lastHeartbeatRecv" : ISODate("2014-09-24T09:22:37Z"),
"pingMs" : 0,
"syncingTo" : "192.168.12.104:27017"
},
{
"_id" : 3,
"name" : "192.168.12.104:27033",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", --仲裁
"uptime" : 41,
"lastHeartbeat" : ISODate("2014-09-24T09:22:37Z"),
"lastHeartbeatRecv" : ISODate("2014-09-24T09:22:37Z"),
"pingMs" : 0
}
],
"ok" : 1
}
If a member has crashed or failed to start, its entry will instead show "stateStr" : "(not reachable/healthy)".
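To spot such a member without reading the whole rs.status() document, a small loop over the members array is enough (a minimal sketch, run from any reachable member):
fubentina:PRIMARY> rs.status().members.forEach(function (m) { if (m.health !== 1) print(m.name + " is unhealthy: " + m.stateStr); })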
7. The replica set is up; the next task is the sharding side. First remove shard0000 from the cluster so that its data migrates to the other shards.
mongos> db.runCommand({"removeshard":"192.168.12.104:27017"})
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(0), ---显示的是数据移动的剩余量,为0表示数据已经全部移走了
"dbs" : NumberLong(2) --这里需要注意,要将两个数据库的primary也移一下
},
"note" : "you need to drop or movePrimary these databases", --看这里的提示
"dbsToMove" : [
"test",
"testdb"
],
"ok" : 1
}
mongos> use admin
switched to db admin
mongos> db.runCommand({moveprimary:"testdb",to:"shard0001"}) --将主移动到另一个分片上
{ "primary " : "shard0001:192.168.12.104:27019", "ok" : 1 }
mongos> db.runCommand({moveprimary:"test",to:"shard0001"})
{ "primary " : "shard0001:192.168.12.104:27019", "ok" : 1 }
mongos> db.runCommand({removeshard:"192.168.12.104:27017"}) --now the shard can be removed successfully
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0000",
"ok" : 1
}
Do not run the movePrimary until you have finished draining the shard
In other words, wait until all of the shard's chunk data has finished draining before running movePrimary!
Also note that, unlike removeShard, movePrimary is not asynchronous: it only returns once all of the unsharded data has been moved to the other server, so it can take a while depending on how much unsharded data lives on that shard.
After movePrimary finishes, run db.runCommand({removeshard:"shardx"}) one more time, until you see a result like { msg: "remove shard completed successfully", stage: "completed", host: "mongodb0", ok: 1 }.
Only then is the migration really complete and the mongod can safely be shut down.
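Putting the drain sequence together, the order of operations looks roughly like this (a sketch run against mongos; the shard can be referenced either by its name, as here, or by its host string as in the session above):
mongos> use admin
mongos> db.runCommand({removeShard:"shard0000"}) --starts draining and returns immediately
mongos> db.runCommand({removeShard:"shard0000"}) --re-run to poll; wait until remaining.chunks reaches 0
mongos> db.runCommand({movePrimary:"testdb",to:"shard0001"})
mongos> db.runCommand({movePrimary:"test",to:"shard0001"})
mongos> db.runCommand({removeShard:"shard0000"}) --should now report state "completed"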
8. Drop the old testdb database on 192.168.12.104:27017
mongos> db.runCommand({addshard:"fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033"})
{
"ok" : 0,
"errmsg" : "can't add shard fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033 because a local database 'testdb' exists in another shard0000:192.168.12.104:27017"
} ---adding the replica set fails here because a database named testdb still exists locally on 192.168.12.104:27017, the former shard0000.
connecting to: 192.168.12.104:27017/admin
fubentina:PRIMARY> show dbs
local 1.078125GB
testdb 0.453125GB
fubentina:PRIMARY> use testdb
switched to db testdb
fubentina:PRIMARY> db.usr.drop();
true
fubentina:PRIMARY> db.dropDatabase();
{ "dropped" : "testdb", "ok" : 1 }
9. Add the replica set as a shard
mongos> db.runCommand({addshard:"192.168.12.104:27017"}); --直接添加副本中的一个是不行的,要一起添加
{
"ok" : 0,
"errmsg" : "host is part of set: fubentina use replica set url format <setname>/<server1>,<server2>,...."
}
Add the replica set:
mongos> db.runCommand({addshard:"fubentina/192.168.12.104:27017,192.168.12.104:27030,192.168.12.104:27033"}) --最后一个是仲裁
{ "shardAdded" : "fubentina", "ok" : 1 }
10. Check the sharding status
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("541a47a9124d847f09e99204")
}
shards:
{ "_id" : "fubentina", "host" : "fubentina/192.168.12.104:27017,192.168.12.104:27030" } ---显示的是副本集
{ "_id" : "shard0001", "host" : "192.168.12.104:27019" }
{ "_id" : "shard0002", "host" : "192.168.12.104:27023" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "testdb", "partitioned" : true, "primary" : "shard0001" }
testdb.usr
shard key: { "name" : 1 }
chunks:
fubentina 4
shard0001 5
shard0002 5
{ "name" : { "$minKey" : 1 } } -->> { "name" : "ava0" } on : fubentina Timestamp(16, 0) --已经有数据被分配到这个副本
{ "name" : "ava0" } -->> { "name" : "bobo0" } on : fubentina Timestamp(18, 0)
{ "name" : "bobo0" } -->> { "name" : "mengtian57493" } on : shard0001 Timestamp(18, 1)
{ "name" : "mengtian57493" } -->> { "name" : "师傅10885" } on : shard0001 Timestamp(7, 0)
{ "name" : "师傅10885" } -->> { "name" : "师傅18160" } on : shard0001 Timestamp(12, 0)
{ "name" : "师傅18160" } -->> { "name" : "师傅23331" } on : shard0001 Timestamp(9, 0)
{ "name" : "师傅23331" } -->> { "name" : "师傅337444" } on : fubentina Timestamp(17, 0)
{ "name" : "师傅337444" } -->> { "name" : "师傅47982" } on : fubentina Timestamp(19, 0)
{ "name" : "师傅47982" } -->> { "name" : "师傅583953" } on : shard0002 Timestamp(19, 1)
{ "name" : "师傅583953" } -->> { "name" : "师傅688087" } on : shard0002 Timestamp(9, 2)
{ "name" : "师傅688087" } -->> { "name" : "师傅792944" } on : shard0002 Timestamp(9, 4)
{ "name" : "师傅792944" } -->> { "name" : "张荣达16836" } on : shard0002 Timestamp(9, 5)
{ "name" : "张荣达16836" } -->> { "name" : "张荣达9999" } on : shard0001 Timestamp(14, 0)
{ "name" : "张荣达9999" } -->> { "name" : { "$maxKey" : 1 } } on : shard0002 Timestamp(15, 0)
{ "_id" : "test", "partitioned" : false, "primary" : "shard0001" }
{ "_id" : "usr", "partitioned" : false, "primary" : "shard0001" }
11. Verify the data
Connect to one member of the replica set:
[root@testvip1 ~]# mongo 192.168.12.104:27017/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27017/admin
fubentina:PRIMARY> show dbs
local 1.078125GB
testdb 0.453125GB
fubentina:PRIMARY> use testdb
switched to db testdb
fubentina:PRIMARY> db.usr.count() --the data is queryable on the primary
283898
[root@testvip1 ~]# mongo 192.168.12.104:27030/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27030/admin
fubentina:SECONDARY> show dbs
local 1.078125GB
testdb 0.453125GB
fubentina:SECONDARY> use testdb
switched to db testdb
fubentina:SECONDARY> db.usr.count()
Thu Sep 25 15:01:33.879 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180 --27030 is currently a secondary, so it rejects reads by default
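Reads from a secondary are possible, but they have to be enabled explicitly for the connection. A quick way to do that in the shell (not done in the original session) is:
fubentina:SECONDARY> rs.slaveOk()
fubentina:SECONDARY> db.usr.count() --with slaveOk set, the count should now succeed on the secondary as well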
The replica set has been added to the cluster successfully.
12. Failover test
Kill the 27017 process:
kill 4353
[root@testvip1 ~]# mongo 192.168.12.104:27017/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27017/admin
fubentina:PRIMARY> rs.status()
Thu Sep 25 15:11:49.785 DBClientCursor::init call() failed
Thu Sep 25 15:11:49.788 Error: error doing query: failed at src/mongo/shell/query.js:78
Thu Sep 25 15:11:49.789 trying reconnect to 192.168.12.104:27017
Thu Sep 25 15:11:49.789 reconnect 192.168.12.104:27017 failed couldn't connect to server 192.168.12.104:27017 --the connection fails, as expected
[root@viptest2 ~]# mongo 192.168.12.107:27021/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.107:27021/admin
mongos> use testdb
switched to db testdb
mongos> db.usr.find({"name":"ava100"}) ---能正常查到副本集的数据
{ "_id" : ObjectId("541bd1573351616d81bb07e7"), "name" : "ava100", "address" : "yichangdong", "age" : 50 }
Connect directly to 27030 and check:
[root@testvip1 ~]# mongo 192.168.12.104:27030/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27030/admin
fubentina:PRIMARY> show dbs
local 1.078125GB
testdb 0.453125GB
fubentina:PRIMARY> use testdb
switched to db testdb
fubentina:PRIMARY> db.usr.count()
283898
fubentina:PRIMARY> db.usr.find({"name":"ava100"})
{ "_id" : ObjectId("541bd1573351616d81bb07e7"), "name" : "ava100", "address" : "yichangdong", "age" : 50 }
13. Bring 27017 back up and check rs.status() again
mongod --dbpath=/mongo1 --port 27017 --fork --logpath=/mongo1/log1/log1.txt --replSet fubentina/192.168.12.104:27030
[root@testvip1 ~]# mongo 192.168.12.104:27017/admin
MongoDB shell version: 2.4.6
connecting to: 192.168.12.104:27017/admin
fubentina:SECONDARY> rs.status()
{
"set" : "fubentina",
"date" : ISODate("2014-09-25T07:58:41Z"),
"myState" : 2,
"syncingTo" : "192.168.12.104:27030",
"members" : [
{
"_id" : 1,
"name" : "192.168.12.104:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", --启动后它变成了副本
"uptime" : 2551,
"optime" : Timestamp(1411628295, 2057),
"optimeDate" : ISODate("2014-09-25T06:58:15Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.12.104:27030",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", --它变成了主
"uptime" : 2551,
"optime" : Timestamp(1411628295, 2057),
"optimeDate" : ISODate("2014-09-25T06:58:15Z"),
"lastHeartbeat" : ISODate("2014-09-25T07:58:41Z"),
"lastHeartbeatRecv" : ISODate("2014-09-25T07:58:39Z"),
"pingMs" : 0
},
{
"_id" : 3,
"name" : "192.168.12.104:27033",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", -它依然是仲裁
"uptime" : 2549,
"lastHeartbeat" : ISODate("2014-09-25T07:58:41Z"),
"lastHeartbeatRecv" : ISODate("2014-09-25T07:58:40Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Verification complete: automatic failover works as expected.