Personal tech blog: http://www.cooli.cc/
Installation Guide
Note: this guide is a draft; feel free to make changes if you see anything that can be improved, expanded on, or corrected.
This installation guide describes an installation of MMM 2 (without the MMM tools) based on Debian Lenny (5.0).
A basic installation contains at least 2 database servers and 1 monitoring server. In this guide, I used 5 servers with Debian Lenny (5.0):
function        | ip           | hostname | server id
monitoring host | 192.168.0.10 | mon      | -
master 1        | 192.168.0.11 | db1      | 1
master 2        | 192.168.0.12 | db2      | 2
slave 1         | 192.168.0.13 | db3      | 3
slave 2         | 192.168.0.14 | db4      | 4
I used the following virtual IPs. They will be distributed across the hosts by MMM.
ip            | role   | description
192.168.0.100 | writer | Your application should connect to this IP for write queries.
192.168.0.101 | reader | Your application should connect to one of these four IPs for read queries.
192.168.0.102 | reader |
192.168.0.103 | reader |
192.168.0.104 | reader |
Basic configuration of master 1
First we install MySQL on all hosts:
aptitude install mysql-server
Then we edit the configuration file /etc/mysql/my.cnf and add the following lines - be sure to use different server ids for all hosts:
server_id           = 1
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
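Only the server id differs on the other hosts, following the table above; on db2, for example, the first line becomes:

server_id           = 2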
Then remove the following entry:
bind-address = 127.0.0.1
Do not bind to any specific IP; use 0.0.0.0 instead:
bind-address = 0.0.0.0
Afterwards we need to restart MySQL for our changes to take effect:
/etc/init.d/mysql restart
Create users
Now we can create the required users. We'll need 3 different users:
function         | description                                                               | privileges
monitor user     | used by the mmm monitor to check the health of the MySQL servers         | REPLICATION CLIENT
agent user       | used by the mmm agent to change read-only mode, replication master, etc. | SUPER, REPLICATION CLIENT, PROCESS
replication user | used for replication                                                      | REPLICATION SLAVE
GRANT REPLICATION CLIENT                   ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS   ON *.* TO 'mmm_agent'@'192.168.0.%'   IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE                    ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';
Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10. mmm_agent and replication are used from 192.168.0.11 - 192.168.0.14.
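If you want to lock the accounts down that far, the grants could look like the following sketch — one GRANT per host that actually needs the account, using the IPs from the note above:

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.10' IDENTIFIED BY 'monitor_password';

GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.11' IDENTIFIED BY 'agent_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.12' IDENTIFIED BY 'agent_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.13' IDENTIFIED BY 'agent_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.14' IDENTIFIED BY 'agent_password';

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.11' IDENTIFIED BY 'replication_password';
-- ...and likewise for 'replication'@'192.168.0.12' through 'replication'@'192.168.0.14'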
Synchronisation of data between the databases
I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronize the accounts we have just created.
First make sure that no one is altering the data while we create a backup.
(db1) mysql> FLUSH TABLES WITH READ LOCK;
Then get the current position in the binary log. We will need these values when we set up replication on db2, db3 and db4.
(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

DON'T CLOSE this mysql shell. If you close it, the database lock will be removed. Open a second console and type:
db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql
Now we can remove the database-lock. Go to the first shell:
(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4:

db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp
Then import this into db2, db3 and db4:
db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql
Then flush the privileges on db2, db3 and db4. We have altered the user table, and MySQL has to reread it.
(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;
On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
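A minimal sketch of that copy, assuming root SSH access to the other hosts:

db1$ scp /etc/mysql/debian.cnf root@192.168.0.12:/etc/mysql/
db1$ scp /etc/mysql/debian.cnf root@192.168.0.13:/etc/mysql/
db1$ scp /etc/mysql/debian.cnf root@192.168.0.14:/etc/mysql/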
All four databases now contain the same data. We can now set up replication to keep it that way.
Note: importing the dump only adds records; it does not remove rows or databases that already exist on the target. If the target hosts contain stale data, drop the affected databases before importing the dump.
Setup replication
Configure replication on db2, db3 and db4 with the following commands:
(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Please insert the values returned by "SHOW MASTER STATUS" on db1 at the <file> and <position> tags.
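With the example SHOW MASTER STATUS output from db1 shown earlier (mysql-bin.000002, position 374), the db2 command would read:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;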
Start the slave-process on all 3 hosts:
(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;
Now check if the replication is running correctly on all hosts:
(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…

(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…

(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…
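Further down in the full output, the two fields worth checking are Slave_IO_Running and Slave_SQL_Running — both must say Yes. A quick one-off check (a sketch; the root password is entered at the prompt):

db2$ mysql -u root -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running'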
Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:
(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now we configure replication on db1 with the following command:
(db1) mysql> CHANGE MASTER TO master_host = '192.168.0.12', master_port=3306, master_user='replication', master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
Now insert the values returned by "SHOW MASTER STATUS" on db2 at the <file> and <position> tags.
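With the example output from db2 above (mysql-bin.000001, position 98), that would be:

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='mysql-bin.000001', master_log_pos=98;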
Start the slave-process:
(db1) mysql> START SLAVE;

Now check if the replication is running correctly on db1:

(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.12
                Master_User: <replication>
                Master_Port: 3306
              Connect_Retry: 60
…

Replication between the nodes should now be complete. Try it by inserting some data into both db1 and db2 and checking that the data appears on all other nodes, as sketched below.
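A minimal sketch of such a test — the database and table names here are made up for illustration:

(db1) mysql> CREATE DATABASE mmm_test;
(db1) mysql> CREATE TABLE mmm_test.t (id INT PRIMARY KEY, note VARCHAR(32));
(db1) mysql> INSERT INTO mmm_test.t VALUES (1, 'written on db1');
(db2) mysql> INSERT INTO mmm_test.t VALUES (2, 'written on db2');

-- Both rows should appear on every node, e.g.:
(db3) mysql> SELECT * FROM mmm_test.t;
(db4) mysql> SELECT * FROM mmm_test.t;

-- Clean up afterwards (the DROP replicates to all nodes):
(db1) mysql> DROP DATABASE mmm_test;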
Install MMM
Create user
Optional: Create a user that will be the owner of the MMM scripts and configuration files. This provides an easier way to securely manage the monitor scripts.
useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring host
First install dependencies:
aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:
dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb
Database hosts
On Ubuntu, first install dependencies:
aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:
dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb
On RedHat
yum install -y mysql-mmm-agent
This will take care of all the dependencies, which may include:
Installed:
  mysql-mmm-agent.noarch 0:2.2.1-1.el5

Dependency Installed:
  libart_lgpl.x86_64 0:2.3.17-4
  mysql-mmm.noarch 0:2.2.1-1.el5
  perl-Algorithm-Diff.noarch 0:1.1902-2.el5
  perl-DBD-mysql.x86_64 0:4.008-1.rf
  perl-DateManip.noarch 0:5.44-1.2.1
  perl-IPC-Shareable.noarch 0:0.60-3.el5
  perl-Log-Dispatch.noarch 0:2.20-1.el5
  perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5
  perl-Log-Log4perl.noarch 0:1.13-2.el5
  perl-MIME-Lite.noarch 0:3.01-5.el5
  perl-Mail-Sender.noarch 0:0.8.13-2.el5.1
  perl-Mail-Sendmail.noarch 0:0.79-9.el5.1
  perl-MailTools.noarch 0:1.77-1.el5
  perl-Net-ARP.x86_64 0:1.0.6-2.1.el5
  perl-Params-Validate.x86_64 0:0.88-3.el5
  perl-Proc-Daemon.noarch 0:0.03-1.el5
  perl-TimeDate.noarch 1:1.16-5.el5
  perl-XML-DOM.noarch 0:1.44-2.el5
  perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1
  perl-XML-RegExp.noarch 0:0.03-2.el5
  rrdtool.x86_64 0:1.2.27-3.el5
  rrdtool-perl.x86_64 0:1.2.27-3.el5
Configure MMM
All generic configuration-options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:
active_master_role      writer

<host default>
    cluster_interface       eth0

    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/

    replication_user        replication
    replication_password    replication_password

    agent_user              mmm_agent
    agent_password          agent_password
</host>

<host db1>
    ip      192.168.0.11
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.0.12
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.0.13
    mode    slave
</host>

<host db4>
    ip      192.168.0.14
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.100
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode    balanced
</role>
Don't forget to copy this file to all other hosts (including the monitoring host).
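One way to push it out from db1, assuming root SSH access to the monitor and the other database hosts (a sketch, not part of the original guide):

db1$ for host in 192.168.0.10 192.168.0.12 192.168.0.13 192.168.0.14; do
         scp /etc/mysql-mmm/mmm_common.conf root@$host:/etc/mysql-mmm/
     done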
On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:
include mmm_common.conf
this db1
On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mmmd_mon.pid
    bin_path            /usr/lib/mysql-mmm/
    status_path         /var/lib/misc/mmmd_mon.status
    ping_ips            192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitor_password
</host>

debug 0
ping_ips are IPs that are pinged to determine whether the monitor's network connection is OK. I used my switch (192.168.0.1) and the four database servers.
Start MMM
Start the agents
(On the database hosts)
Debian/Ubuntu
Edit /etc/default/mysql-mmm-agent to enable the agent:
ENABLED=1

Red Hat
RHEL/Fedora does not enable services to start at boot time by default policy, so you might have to turn it on manually so the agent starts automatically when the server is rebooted:
chkconfig mysql-mmm-agent on
Then start it:
/etc/init.d/mysql-mmm-agent start
Start the monitor
(On the monitoring host) Edit /etc/default/mysql-mmm-monitor to enable the monitor:
ENABLED=1
Then start it:
/etc/init.d/mysql-mmm-monitor start
Wait a few seconds for mmmd_mon to start up; then you can use mmm_control to check the status of the cluster:
mon$ mmm_control show
  db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
  db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
  db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:
Because it is the first startup, the monitor does not know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:
mon$ tail /var/log/mysql-mmm/mmmd_mon.warn
…
2009/10/28 23:15:28  WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.

Now we set our hosts online (db1 first, because the slaves replicate from this host):
mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
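Once all hosts are online, mmm_control show should report every host as ONLINE with the virtual IPs distributed across them — something like the following sketch (the exact role-to-host assignment may differ):

mon$ mmm_control show
  db1(192.168.0.11) master/ONLINE. Roles: writer(192.168.0.100), reader(192.168.0.101)
  db2(192.168.0.12) master/ONLINE. Roles: reader(192.168.0.102)
  db3(192.168.0.13) slave/ONLINE. Roles: reader(192.168.0.103)
  db4(192.168.0.14) slave/ONLINE. Roles: reader(192.168.0.104)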
Original source: http://mysql-mmm.org/mmm2:guide