A. Jalil @AJ
2015-09-26 04:08:47 UTC
Hi Stephen,
I got my RS0 Replica Set migrated to AWS successfully. But I read
somewhere in the MongoDB docs that *we are supposed to keep an odd number
of members in each Replica Set*. Right now RS0 has 6 nodes: the 3 old
nodes plus the 3 new nodes I just added on AWS, which you can see below.
To keep an odd number of nodes in RS0, I went ahead and shut down the node
*[server0-3.com]*, which is why it shows [*not reachable/healthy*] below.
But when I check the shards, I see *6 nodes in RS0 (3 old nodes + 3 new
AWS nodes) + 3 old nodes from RS1*, which is what I expected. So I was
wondering: is this the proper way to do it, or should I remove this node
completely from the Replica Set RS0 as well as from the sharded cluster?
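If removal is the right way, I assume it would be a single command on the
RS0 primary, something like this (hostname taken from my config below,
but I haven't actually run it yet):

rs0:PRIMARY> rs.remove("server0-3.com:27017")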
Please note that eventually I will remove the old nodes completely from
the cluster, and then we'll be back to an odd number of members. For now,
though, I'd like to wait until I see data replicating successfully to the
AWS nodes before removing the old ones. I know I could add an arbiter as
an alternative, but I don't want to add more work since I will be removing
all the old nodes anyway.
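(For completeness, my understanding is that the arbiter route would just
be the following on the primary, with a hypothetical arbiter host:

rs0:PRIMARY> rs.addArb("arbiter0.example.com:27017")

but as I said, I'd rather not add a node I'll only have to remove later.)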
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2015-09-26T02:56:33Z"),
"myState" : 1,
"members" : [
{
"_id" : 3,
"name" : "server0-1.com:27017",
*=> Old RS0 server-1*
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 78490,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"electionTime" : Timestamp(1443157717, 1),
"electionDate" : ISODate("2015-09-25T05:08:37Z"),
"self" : true
},
{
"_id" : 4,
"name" : "server0-2.com:27017",
*=> Old RS0 server-2*
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 78489,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"lastHeartbeat" : ISODate("2015-09-26T02:56:32Z"),
"lastHeartbeatRecv" :
ISODate("2015-09-26T02:56:32Z"),
"pingMs" : 1,
"syncingTo" : "server0-1.com:27017"
},
{
"_id" : 5,
"name" : "*server0-3.com*:27017",
*=> Old RS0 server-3*
"health" : 0,
"state" : 8,
"stateStr" : "(*not reachable/healthy*)",
(not reachable cz I shutdown mongo)
"uptime" : 0,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
"lastHeartbeatRecv" :
ISODate("2015-09-26T02:56:09Z"),
"pingMs" : 0,
"syncingTo" : "server0-1.com:27017"
},
{
"_id" : 6,
"name" : "server0-2-AWS.com:27017",
*=> The new server-2 I added on AWS*
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 78489,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
"lastHeartbeatRecv" :
ISODate("2015-09-26T02:56:33Z"),
"pingMs" : 1,
"syncingTo" : "server0-1.com:27017"
},
{
"_id" : 7,
"name" : "server0-3-AWS.com:27017",
*=> The new server-3 I added on AWS*
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 78487,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
"lastHeartbeatRecv" :
ISODate("2015-09-26T02:56:32Z"),
"pingMs" : 1,
"syncingTo" : "server0-1.com:27017"
},
{
"_id" : 8,
"name" : "server0-1-AWS.com:27017",
*=> The new server-1 I added on AWS*
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1024,
"optime" : Timestamp(1443235169, 1),
"optimeDate" : ISODate("2015-09-26T02:39:29Z"),
"lastHeartbeat" : ISODate("2015-09-26T02:56:32Z"),
"lastHeartbeatRecv" :
ISODate("2015-09-26T02:56:32Z"),
"pingMs" : 0,
"syncingTo" : "server0-1.com:27017"
}
],
"ok" : 1
}
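Before I drop the old nodes, my plan is to confirm the AWS secondaries are
caught up, either by comparing the optimes above or with something like
this, which as I understand it prints how far each secondary lags behind
the primary:

rs0:PRIMARY> rs.printSlaveReplicationInfo()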
*> And when I run rs.conf(), I still see the server in the config even
though I shut down mongo on it:*
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
"version" : 24,
"members" : [
{
"_id" : 3,
"host" : "server0-1.com:27017",
=> Old RS0 server
"priority" : 100
},
{
"_id" : 4,
"host" : "server0-2.com:27017",
=> Old RS0 server
"priority" : 50
},
{
"_id" : 5,
"host" : "*server0-3.com*:27017",
=> Old RS0 server - I am still seeing the server in config even
though I stopped mongo on this server
"priority" : 50
},
{
"_id" : 6,
"host" : "server0-2-AWS.com:27017" *=>
the new server I added in AWS*
},
{
"_id" : 7,
"host" : "server0-3-AWS.com:27017" *=>
the new server I added in AWS*
},
{
"_id" : 8,
"host" : "server0-1-AWS.com:27017" *=>
the new server I added in AWS*
}
],
"settings" : {
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
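If the member should come out of the config as well, I'm guessing it's
either rs.remove() as above, or a manual reconfig along these lines (my
guess only, not tested):

rs0:PRIMARY> cfg = rs.conf()
rs0:PRIMARY> cfg.members = cfg.members.filter(function (m) { return m.host !== "server0-3.com:27017"; })
rs0:PRIMARY> rs.reconfig(cfg)

Either way, I'd expect the member to disappear from rs.conf() and the
config version to bump past 24.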
*Thank you so much!*
@AJ