[MongoDB Study Notes 13] Creating MongoDB Shards

 

1. Introduction to MongoDB Sharding

MongoDB sharding is used to store massive data sets across multiple machines:

[overview diagram omitted]

A typical sharded architecture looks like this:

[architecture diagram omitted]

This article quickly builds a MongoDB sharded cluster on a single machine, using one router (mongos), one config server, and three shards (each shard is a single mongod instance rather than a replica set).

 

2. Steps to Set Up a MongoDB Sharded Cluster

2.1 Start the config server

 

mongod --dbpath config  --port 27000 

As you can see, a config server starts just like an ordinary mongod; here its data directory is config.

 

2.2 Start the router (mongos)

 

mongos --configdb hostname:27000 --port 28000

 

The router must be told where the config server is, in host:port form (27000 here, matching the config server started above).

 

2.3 Start the three shard servers

 

mongod --dbpath data1 --port 27017
mongod --dbpath data2 --port 27018 
mongod --dbpath data3 --port 27019

 

2.4 Add the shard servers to the sharded cluster

 

 

mongo --port 28000  // connect to the router with the mongo shell
mongos> use admin   // addshard must be run against the admin database; otherwise it fails with error: "$err" : "error creating initial database config information :: caused by :: can't find a shard to put new db on"
mongos> db.runCommand({addshard:"hostname:27017",allowLocal:true })
mongos> db.runCommand({addshard:"hostname:27018",allowLocal:true })
mongos> db.runCommand({addshard:"hostname:27019",allowLocal:true })

The commands above produce the following output:
{ "shardAdded" : "shard0000", "ok" : 1 }
{ "shardAdded" : "shard0001", "ok" : 1 }
{ "shardAdded" : "shard0002", "ok" : 1 }
The three shards are now part of the cluster.

 

 

2.5 Inspect the shards in the cluster

2.5.1 Shard list

mongo --port 28000  // connect to the router; its config database is replicated from the config server
mongos> use config
mongos> db.shards.find();

The output is:

{ "_id" : "shard0000", "host" : "10.1.241.203:27017" }
{ "_id" : "shard0001", "host" : "10.1.241.203:27018" }
{ "_id" : "shard0002", "host" : "10.1.241.203:27019" }

 

2.5.2 Databases on the shards

 

mongo --port 28000  // connect to the router
mongos> use config
mongos> db.databases.find();

The output is:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

where:

  • _id is the database name
  • partitioned indicates whether sharding is enabled for the database; it defaults to false and must be enabled explicitly. Partitioning applies per collection: enabling it for a database is what allows that database's collections to be sharded.
  • primary: sharding in MongoDB is enabled per database. In a cluster, some databases are sharded and others are not. A non-sharded database lives entirely on a single shard, so when clients read or write it, mongos needs some way to know which shard holds it. MongoDB records that shard in the primary attribute, as shown below:

For example, create a users database and insert one document, then run use config; db.databases.find() again. The result below shows that the users database was placed on shard0000:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "users", "partitioned" : false, "primary" : "shard0000" }
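The primary mechanism described above can be sketched as a simple lookup: mongos sends every operation on an unsharded database to that database's primary shard. The following Python toy model (illustrative only, not MongoDB's actual code) mirrors the config.databases output shown above:

```python
# Toy model of how mongos routes operations on *unsharded* databases:
# each database record carries a "primary" shard, and all reads/writes
# for that database go to it. Data mirrors the db.databases.find() output.

databases = [
    {"_id": "admin", "partitioned": False, "primary": "config"},
    {"_id": "users", "partitioned": False, "primary": "shard0000"},
]

def route_unsharded(db_name):
    """Return the shard that holds an unsharded database."""
    for db in databases:
        if db["_id"] == db_name:
            if db["partitioned"]:
                raise ValueError("sharded database: route by chunk, not primary")
            return db["primary"]
    raise KeyError("unknown database: " + db_name)

print(route_unsharded("users"))  # shard0000
```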

 

 

 

 

 

2.6 Enable sharding for a database and a collection

 

mongo --port 28000  // connect to the router
mongos> use admin
mongos> db.runCommand({"enablesharding":"foo"})  // enable sharding for the database
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"foo.bar","key":{"uid":1}});  // choose which collection to shard, and on which key
{ "collectionsharded" : "foo.bar", "ok" : 1 }

This enables sharding for the bar collection of the foo database. Query the databases again:

 

mongos> db.databases.find();
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "users", "partitioned" : false, "primary" : "shard0000" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }

 

The partitioned value of the foo database has changed to true.

 

 

2.7 Setting the shard chunkSize

First, inspect the collection's chunk metadata. At this point the shard key uid has a single chunk on shard0001 whose range runs from negative infinity (MinKey) to positive infinity (MaxKey):

 

mongos> db.chunks.find();
{ "_id" : "foo.bar-uid_MinKey", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : { "$minKey" : 1 } }, "max" : { "uid" : { "$maxKey" : 1 } }, "shard" : "shard0001" }
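Range-based routing over chunks like the one above can be sketched in a few lines of Python. This is a toy model (MinKey/MaxKey are approximated with -inf/+inf, and chunk boundaries are illustrative), showing that with a single chunk every uid maps to the same shard:

```python
import bisect

# Chunks partition the shard-key space into half-open [min, max) intervals,
# each owned by one shard. Right now there is a single chunk covering
# everything, on shard0001 (as in the db.chunks.find() output above).
chunks = [(float("-inf"), float("inf"), "shard0001")]  # (min, max, shard)

def route(uid):
    """Find the chunk whose [min, max) interval contains uid."""
    mins = [c[0] for c in chunks]
    i = bisect.bisect_right(mins, uid) - 1
    lo, hi, shard = chunks[i]
    assert lo <= uid < hi
    return shard

print(route(12345))  # shard0001 — every uid lands in the one chunk
```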

 

Next, check MongoDB's default chunkSize, which is 64 MB:

 

mongos> use config
mongos> db.settings.find();
{ "_id" : "chunksize", "value" : 64 }

 

Change chunkSize to 1 MB with the following command:

 

mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )  // note: the shell does not require quotes around field names like _id and value
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
mongos> db.settings.find();
{ "_id" : "chunksize", "value" : 1 }
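The effect of chunkSize can be modeled with a toy splitter: once a chunk's data outgrows the limit, it is split at its median key, recursively. This Python sketch assumes a made-up average document size of 200 bytes; it is an illustration of the idea, not MongoDB's split algorithm:

```python
# Toy model of chunk splitting under a 1 MB chunkSize, assuming ~200-byte
# documents (an arbitrary illustrative figure).

CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB, as set in config.settings above
DOC_SIZE = 200                # assumed average document size in bytes

def split_oversized(chunk_keys):
    """Recursively split a sorted key list whenever it exceeds CHUNK_SIZE."""
    if len(chunk_keys) * DOC_SIZE <= CHUNK_SIZE:
        return [chunk_keys]
    mid = len(chunk_keys) // 2
    return split_oversized(chunk_keys[:mid]) + split_oversized(chunk_keys[mid:])

keys = list(range(20000))     # 20000 docs * 200 B = ~4 MB of data
parts = split_oversized(keys)
print(len(parts))             # 4 — the data ends up in ~1 MB chunks
```

Splitting is what creates many small chunks, which the balancer can then spread across shards, as the next section shows.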

 

 

2.8 Write a large amount of data into the sharded collection

Run the following to insert 500,000 documents:

 

mongos> use foo
switched to db foo
mongos> for(i=0;i<500000;i++){ db.bar.insert({"uid":i,"description":"this is a very long description for " + i,"Date":new Date()}); }

 

When the inserts finish, db.chunks.find() returns:

 

mongos> use config
switched to db config
mongos> db.chunks.find();
{ "_id" : "foo.bar-uid_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : { "$minKey" : 1 } }, "max" : { "uid" : 0 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_0.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 0 }, "max" : { "uid" : 5333 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_5333.0", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 5333 }, "max" : { "uid" : 16357 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_16357.0", "lastmod" : Timestamp(5, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 16357 }, "max" : { "uid" : 26187 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_26187.0", "lastmod" : Timestamp(6, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 26187 }, "max" : { "uid" : 36823 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_36823.0", "lastmod" : Timestamp(7, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 36823 }, "max" : { "uid" : 46546 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_46546.0", "lastmod" : Timestamp(8, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 46546 }, "max" : { "uid" : 57035 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_57035.0", "lastmod" : Timestamp(9, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 57035 }, "max" : { "uid" : 66699 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_66699.0", "lastmod" : Timestamp(10, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 66699 }, "max" : { "uid" : 77783 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_77783.0", "lastmod" : Timestamp(11, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 77783 }, "max" : { "uid" : 87230 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_87230.0", "lastmod" : Timestamp(12, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 87230 }, "max" : { "uid" : 96803 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_96803.0", "lastmod" : Timestamp(13, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 96803 }, "max" : { "uid" : 106346 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_106346.0", "lastmod" : Timestamp(14, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 106346 }, "max" : { "uid" : 117120 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_117120.0", "lastmod" : Timestamp(15, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 117120 }, "max" : { "uid" : 127899 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_127899.0", "lastmod" : Timestamp(16, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 127899 }, "max" : { "uid" : 137956 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_137956.0", "lastmod" : Timestamp(17, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 137956 }, "max" : { "uid" : 147773 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_147773.0", "lastmod" : Timestamp(18, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 147773 }, "max" : { "uid" : 158075 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_158075.0", "lastmod" : Timestamp(19, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 158075 }, "max" : { "uid" : 168681 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_168681.0", "lastmod" : Timestamp(20, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 168681 }, "max" : { "uid" : 178122 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_178122.0", "lastmod" : Timestamp(21, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 178122 }, "max" : { "uid" : 188515 }, "shard" : "shard0002" }
Type "it" for more
mongos> it
{ "_id" : "foo.bar-uid_188515.0", "lastmod" : Timestamp(22, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 188515 }, "max" : { "uid" : 198908 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_198908.0", "lastmod" : Timestamp(23, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 198908 }, "max" : { "uid" : 210107 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_210107.0", "lastmod" : Timestamp(24, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 210107 }, "max" : { "uid" : 219471 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_219471.0", "lastmod" : Timestamp(25, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 219471 }, "max" : { "uid" : 229496 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_229496.0", "lastmod" : Timestamp(26, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 229496 }, "max" : { "uid" : 240423 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_240423.0", "lastmod" : Timestamp(27, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 240423 }, "max" : { "uid" : 250932 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_250932.0", "lastmod" : Timestamp(28, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 250932 }, "max" : { "uid" : 261576 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_261576.0", "lastmod" : Timestamp(29, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 261576 }, "max" : { "uid" : 272106 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_272106.0", "lastmod" : Timestamp(30, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 272106 }, "max" : { "uid" : 281512 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_281512.0", "lastmod" : Timestamp(31, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 281512 }, "max" : { "uid" : 291917 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_291917.0", "lastmod" : Timestamp(32, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 291917 }, "max" : { "uid" : 301451 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_301451.0", "lastmod" : Timestamp(33, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 301451 }, "max" : { "uid" : 310834 }, "shard" : "shard0002" }
{ "_id" : "foo.bar-uid_310834.0", "lastmod" : Timestamp(34, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 310834 }, "max" : { "uid" : 321227 }, "shard" : "shard0000" }
{ "_id" : "foo.bar-uid_321227.0", "lastmod" : Timestamp(34, 1), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 321227 }, "max" : { "uid" : 331654 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_331654.0", "lastmod" : Timestamp(23, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 331654 }, "max" : { "uid" : 341811 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_341811.0", "lastmod" : Timestamp(23, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 341811 }, "max" : { "uid" : 351837 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_351837.0", "lastmod" : Timestamp(25, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 351837 }, "max" : { "uid" : 362649 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_362649.0", "lastmod" : Timestamp(26, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 362649 }, "max" : { "uid" : 373362 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_373362.0", "lastmod" : Timestamp(26, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 373362 }, "max" : { "uid" : 384005 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_384005.0", "lastmod" : Timestamp(26, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 384005 }, "max" : { "uid" : 394517 }, "shard" : "shard0001" }
Type "it" for more
mongos> it
{ "_id" : "foo.bar-uid_394517.0", "lastmod" : Timestamp(28, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 394517 }, "max" : { "uid" : 404599 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_404599.0", "lastmod" : Timestamp(28, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 404599 }, "max" : { "uid" : 414320 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_414320.0", "lastmod" : Timestamp(28, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 414320 }, "max" : { "uid" : 423890 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_423890.0", "lastmod" : Timestamp(30, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 423890 }, "max" : { "uid" : 436104 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_436104.0", "lastmod" : Timestamp(31, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 436104 }, "max" : { "uid" : 446250 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_446250.0", "lastmod" : Timestamp(31, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 446250 }, "max" : { "uid" : 456575 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_456575.0", "lastmod" : Timestamp(31, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 456575 }, "max" : { "uid" : 466768 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_466768.0", "lastmod" : Timestamp(31, 8), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 466768 }, "max" : { "uid" : 476831 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_476831.0", "lastmod" : Timestamp(32, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 476831 }, "max" : { "uid" : 487520 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_487520.0", "lastmod" : Timestamp(34, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 487520 }, "max" : { "uid" : 497583 }, "shard" : "shard0001" }
{ "_id" : "foo.bar-uid_497583.0", "lastmod" : Timestamp(34, 3), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 497583 }, "max" : { "uid" : { "$maxKey" : 1 } }, "shard" : "shard0001" }
mongos>
 

 

Run the following command to view the sharding status:

 

mongos> printShardingStatus(db.getSisterDB("config"),1);
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("546de44857bdea6a9874c8bc")
}
  shards:
        {  "_id" : "shard0000",  "host" : "10.1.241.203:27017" }
        {  "_id" : "shard0001",  "host" : "10.1.241.203:27018" }
        {  "_id" : "shard0002",  "host" : "10.1.241.203:27019" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test1",  "partitioned" : false,  "primary" : "shard0000" }
        {  "_id" : "foo",  "partitioned" : true,  "primary" : "shard0001" }
                foo.bar
                        shard key: { "uid" : 1 }
                        chunks:
                                shard0000       17
                                shard0002       16
                                shard0001       18
                        { "uid" : { "$minKey" : 1 } } -->> { "uid" : 0 } on : shard0000 Timestamp(2, 0)
                        { "uid" : 0 } -->> { "uid" : 5333 } on : shard0002 Timestamp(3, 0)
                        { "uid" : 5333 } -->> { "uid" : 16357 } on : shard0000 Timestamp(4, 0)
                        { "uid" : 16357 } -->> { "uid" : 26187 } on : shard0002 Timestamp(5, 0)
                        { "uid" : 26187 } -->> { "uid" : 36823 } on : shard0000 Timestamp(6, 0)
                        { "uid" : 36823 } -->> { "uid" : 46546 } on : shard0002 Timestamp(7, 0)
                        { "uid" : 46546 } -->> { "uid" : 57035 } on : shard0000 Timestamp(8, 0)
                        { "uid" : 57035 } -->> { "uid" : 66699 } on : shard0002 Timestamp(9, 0)
                        { "uid" : 66699 } -->> { "uid" : 77783 } on : shard0000 Timestamp(10, 0)
                        { "uid" : 77783 } -->> { "uid" : 87230 } on : shard0002 Timestamp(11, 0)
                        { "uid" : 87230 } -->> { "uid" : 96803 } on : shard0000 Timestamp(12, 0)
                        { "uid" : 96803 } -->> { "uid" : 106346 } on : shard0002 Timestamp(13, 0)
                        { "uid" : 106346 } -->> { "uid" : 117120 } on : shard0000 Timestamp(14, 0)
                        { "uid" : 117120 } -->> { "uid" : 127899 } on : shard0002 Timestamp(15, 0)
                        { "uid" : 127899 } -->> { "uid" : 137956 } on : shard0000 Timestamp(16, 0)
                        { "uid" : 137956 } -->> { "uid" : 147773 } on : shard0002 Timestamp(17, 0)
                        { "uid" : 147773 } -->> { "uid" : 158075 } on : shard0000 Timestamp(18, 0)
                        { "uid" : 158075 } -->> { "uid" : 168681 } on : shard0002 Timestamp(19, 0)
                        { "uid" : 168681 } -->> { "uid" : 178122 } on : shard0000 Timestamp(20, 0)
                        { "uid" : 178122 } -->> { "uid" : 188515 } on : shard0002 Timestamp(21, 0)
                        { "uid" : 188515 } -->> { "uid" : 198908 } on : shard0000 Timestamp(22, 0)
                        { "uid" : 198908 } -->> { "uid" : 210107 } on : shard0002 Timestamp(23, 0)
                        { "uid" : 210107 } -->> { "uid" : 219471 } on : shard0000 Timestamp(24, 0)
                        { "uid" : 219471 } -->> { "uid" : 229496 } on : shard0002 Timestamp(25, 0)
                        { "uid" : 229496 } -->> { "uid" : 240423 } on : shard0000 Timestamp(26, 0)
                        { "uid" : 240423 } -->> { "uid" : 250932 } on : shard0002 Timestamp(27, 0)
                        { "uid" : 250932 } -->> { "uid" : 261576 } on : shard0000 Timestamp(28, 0)
                        { "uid" : 261576 } -->> { "uid" : 272106 } on : shard0002 Timestamp(29, 0)
                        { "uid" : 272106 } -->> { "uid" : 281512 } on : shard0000 Timestamp(30, 0)
                        { "uid" : 281512 } -->> { "uid" : 291917 } on : shard0002 Timestamp(31, 0)
                        { "uid" : 291917 } -->> { "uid" : 301451 } on : shard0000 Timestamp(32, 0)
                        { "uid" : 301451 } -->> { "uid" : 310834 } on : shard0002 Timestamp(33, 0)
                        { "uid" : 310834 } -->> { "uid" : 321227 } on : shard0000 Timestamp(34, 0)
                        { "uid" : 321227 } -->> { "uid" : 331654 } on : shard0001 Timestamp(34, 1)
                        { "uid" : 331654 } -->> { "uid" : 341811 } on : shard0001 Timestamp(23, 4)
                        { "uid" : 341811 } -->> { "uid" : 351837 } on : shard0001 Timestamp(23, 6)
                        { "uid" : 351837 } -->> { "uid" : 362649 } on : shard0001 Timestamp(25, 2)
                        { "uid" : 362649 } -->> { "uid" : 373362 } on : shard0001 Timestamp(26, 2)
                        { "uid" : 373362 } -->> { "uid" : 384005 } on : shard0001 Timestamp(26, 4)
                        { "uid" : 384005 } -->> { "uid" : 394517 } on : shard0001 Timestamp(26, 6)
                        { "uid" : 394517 } -->> { "uid" : 404599 } on : shard0001 Timestamp(28, 2)
                        { "uid" : 404599 } -->> { "uid" : 414320 } on : shard0001 Timestamp(28, 4)
                        { "uid" : 414320 } -->> { "uid" : 423890 } on : shard0001 Timestamp(28, 6)
                        { "uid" : 423890 } -->> { "uid" : 436104 } on : shard0001 Timestamp(30, 2)
                        { "uid" : 436104 } -->> { "uid" : 446250 } on : shard0001 Timestamp(31, 2)
                        { "uid" : 446250 } -->> { "uid" : 456575 } on : shard0001 Timestamp(31, 4)
                        { "uid" : 456575 } -->> { "uid" : 466768 } on : shard0001 Timestamp(31, 6)
                        { "uid" : 466768 } -->> { "uid" : 476831 } on : shard0001 Timestamp(31, 8)
                        { "uid" : 476831 } -->> { "uid" : 487520 } on : shard0001 Timestamp(32, 2)
                        { "uid" : 487520 } -->> { "uid" : 497583 } on : shard0001 Timestamp(34, 2)
                        { "uid" : 497583 } -->> { "uid" : { "$maxKey" : 1 } } on : shard0001 Timestamp(34, 3)
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0002" }

 

 

Because uid is monotonically increasing, more and more of the data concentrates on a single shard as uid grows, here shard0001. This also illustrates why a monotonically increasing attribute should not be chosen as the shard key.
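The hotspot can be demonstrated with a toy router over the chunk boundaries above: every freshly inserted uid is larger than all existing keys, so it always falls into the chunk ending at MaxKey, which lives on a single shard. This Python sketch uses only a few of the boundaries from the output above for brevity:

```python
import bisect

# Chunk lower bounds and their owning shards; the last chunk
# [497583, MaxKey) is on shard0001, as shown by db.chunks.find().
chunk_mins  = [float("-inf"), 0, 5333, 497583]
chunk_shard = ["shard0000", "shard0002", "shard0000", "shard0001"]

def route(uid):
    """Route a uid to the shard owning the chunk that contains it."""
    return chunk_shard[bisect.bisect_right(chunk_mins, uid) - 1]

# Insert 1000 new documents with increasing uids beyond the current maximum:
hits = {route(uid) for uid in range(500000, 501000)}
print(hits)  # {'shard0001'} — every new write lands on the same shard
```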

 

3. Adding a Shard

When storage pressure on the existing cluster grows, a new shard can be added. This is straightforward:

Start a new MongoDB instance. It must not already contain the sharded database's collections; if it does, delete them first.

 

mongo --port 28000  // connect to the router
mongos> use admin   // addshard must be run against the admin database; otherwise it fails with error: "$err" : "error creating initial database config information :: caused by :: can't find a shard to put new db on"
mongos> db.runCommand({addshard:"hostname:27020",allowLocal:true })

 

The balancer then starts rebalancing documents across the shards. With the 500,000 documents above spread over three shards, add the fourth shard, let the balancer finish, and run:

 

 

mongos> use config;
mongos> printShardingStatus(db.getSisterDB("config"),1);
 

 

The resulting chunk distribution is:

 

 shard key: { "uid" : 1 }
 chunks:
         shard0003       12
         shard0002       13
         shard0000       13
         shard0001       13
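What the balancer did above can be sketched as a simple greedy loop: repeatedly move one chunk from the most-loaded shard to the least-loaded one until the counts are nearly even. This Python toy model (the real balancer's migration thresholds are more involved; a difference of 1 is used here for simplicity) starts from the pre-rebalance state plus an empty shard0003:

```python
# Toy model of the balancer: move chunks from the busiest shard to the
# emptiest one until the spread is at most 1 chunk.

def balance(chunk_counts):
    counts = dict(chunk_counts)
    while max(counts.values()) - min(counts.values()) > 1:
        src = max(counts, key=counts.get)   # most-loaded shard
        dst = min(counts, key=counts.get)   # least-loaded shard
        counts[src] -= 1                    # migrate one chunk
        counts[dst] += 1
    return counts

# 51 chunks on three shards, then an empty shard0003 joins the cluster:
result = balance({"shard0000": 17, "shard0001": 18,
                  "shard0002": 16, "shard0003": 0})
print(result)  # roughly 12-13 chunks per shard, matching the output above
```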
 
If the instance being added already contains data for the sharded database, adding it to the cluster fails:
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"10.1.241.203:27020",allowLocal:true })
{
        "ok" : 0,
        "errmsg" : "can't add shard 10.1.241.203:27020 because a local database 'foo' exists in another shard0001:10.1.241.203:27018"
}
The error means that only one shard may hold a sharded database's local data when a shard joins the cluster; that shard is the database's "primary", as shown below:
mongos> use config
switched to db config
mongos> db.databases.find();
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test1", "partitioned" : false, "primary" : "shard0000" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0002" }
 
Here shard0001 is the shard at 27018. Note also that when a shard (27020 below) is later removed, it is not deleted from the config database's shards collection right away; instead it is marked with a draining : true attribute while its data migrates off:
use config
db.shards.find();
{ "_id" : "shard0000", "host" : "10.1.241.203:27017" }
{ "_id" : "shard0001", "host" : "10.1.241.203:27018" }
{ "_id" : "shard0002", "host" : "10.1.241.203:27019" }
{ "_id" : "shard0003", "host" : "10.1.241.203:27020", "draining" : true }
 
 
 

 

4. Removing a Shard

Run the following to remove a shard from the cluster:

 

mongo --port 28000  // connect to the router
mongos> use admin   // removeShard, like addshard, must be run against the admin database
mongos> db.runCommand({removeShard:"hostname:27020"})

After the shard is removed, the cluster migrates the removed shard's data to the remaining shards and rebalances. Once the migration completes, run:

 

mongos> use config;
mongos> printShardingStatus(db.getSisterDB("config"),1);

 

The result below shows that the three remaining shards are balanced:

 

  foo.bar
          shard key: { "uid" : 1 }
          chunks:
                  shard0000       17
                  shard0001       17
                  shard0002       17
 
After removal, all of the removed shard's data has been moved to the other shards; nothing is left on it.
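The drain-then-drop sequence can be sketched as follows: the shard is first marked draining, its chunks migrate to the remaining shards, and only once it holds nothing is it actually dropped from the shard list. A Python toy model (illustrative only; the chunk counts match the post-rebalance state above):

```python
# Toy model of removeShard: mark the shard as draining, migrate its chunks
# to the least-loaded remaining shards, then drop it once it is empty.

shards = {"shard0000": 13, "shard0001": 13, "shard0002": 13, "shard0003": 12}
draining = set()

def remove_shard(name):
    draining.add(name)                       # marked, not deleted yet
    rest = [s for s in shards if s not in draining]
    while shards[name] > 0:                  # migrate chunks off, one by one
        dst = min(rest, key=lambda s: shards[s])
        shards[dst] += 1
        shards[name] -= 1
    del shards[name]                         # empty now, drop it for real
    draining.discard(name)

remove_shard("shard0003")
print(shards)  # {'shard0000': 17, 'shard0001': 17, 'shard0002': 17}
```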
 
 
 
 

 

 

 
