Archive

Articles tagged 'cluster'

Heartbeat/MySQL: Slave/Master Failover

24/11/2022 Comments off

We are in a situation where the master server needs to be rebooted.

SSH connection to both servers:

On the master:
# ssh maitre@IP

Stop the heartbeat service on both servers:

# service heartbeat stop
or
# /etc/init.d/heartbeat stop

Do the same on the slave.
From this point on, nobody has access to the web application anymore.

MySQL:

Change to your home directory on both servers:
# cd /home

On the slave server, dump the MySQL database and copy it over to the master server:

[esclave]# mysqldump -u root -pMDP votre_bdd > votre_bdd.sql
[esclave]# scp votre_bdd.sql maitre@IP:/home

Now let's move to the master server:

[maitre]# cd /home

Load the dump into the database:
[maitre]# mysql -u root -pMDP votre_bdd < votre_bdd.sql

Now let's enable replication of the master server's MySQL database to the slave. On the master server, first connect to the database:

[maitre]# mysql -u root -pMDP

Then lock the database in read-only mode:
mysql> FLUSH TABLES WITH READ LOCK;

Record the binary log position of the database:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 73       | test,bar     | foo,manual,mysql |
+------------------+----------+--------------+------------------+
1 row in set (0.06 sec)

and note down:
mysql-bin.000003 ← the binary log file to use
73 ← the log position
(to be adapted to your own case, of course)

On the slave server:
connect to MySQL there as well:
[esclave]# mysql -u root -pMDP

Stop the replication currently running:
mysql> stop slave;

Reset the replication state:
mysql> reset slave;

Point the slave server at the master:
mysql> CHANGE MASTER TO MASTER_HOST = 'IP', MASTER_USER = 'repli', MASTER_PASSWORD = 'repli', MASTER_LOG_FILE = 'mysql-bin.000003', MASTER_LOG_POS = 73;
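
Note: the CHANGE MASTER TO statement above assumes a replication account named 'repli' (password 'repli') already exists on the master. If it does not, here is a minimal sketch for creating it on the master (the '%' host wildcard is an assumption; narrow it to the slave's IP if you can):

mysql> CREATE USER 'repli'@'%' IDENTIFIED BY 'repli';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repli'@'%';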

Start replication:
mysql> start slave;

Check that there are no errors:

mysql> show slave status\G
 *************************** 1. row ***************************
 Slave_IO_State: Waiting for master to send event
 Master_Host: 10.94.8.58
 Master_User: repli
 Master_Port: 3306
 Connect_Retry: 60
 Master_Log_File: mysql-bin.000003
 Read_Master_Log_Pos: 73
 Relay_Log_File: GLPI_esclave-relay-bin.000024
 Relay_Log_Pos: 660186
 Relay_Master_Log_File: mysql-bin.000003
 Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 ******
 Last_Errno: 0
 Last_Error:
 ******
 Seconds_Behind_Master: 0
 1 row in set (0.00 sec)

Disconnect from MySQL:
mysql> exit;

Back on the master server, unlock writes to the database:
mysql> UNLOCK TABLES;

Disconnect from MySQL:
mysql> exit;

And finally, run the script that automatically starts the heartbeat/mon/http/mysql services:

[maitre]# ./startHA

On the slave server it is the same script, only without the Mon service:
[esclave]# ./startHA

Check that the virtual IP is up on the master server:

[maitre]#ifconfig
eth0:0 Link encap:Ethernet HWaddr 00:22:19:D7:73:48
 inet adr:10.94.8.56 Bcast:10.94.15.255 Masque:255.255.248.0
 UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
 Interruption:16

and in the Firefox browser, run a connection test:

http://appliweb.com
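
A non-interactive check from the shell works too (this assumes curl is installed; -I fetches only the HTTP response headers):

# curl -I http://appliweb.com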

IF A TABLE IS MARKED AS CRASHED:

Identify the crashed table from the log.

Connect to MySQL:
# mysql -u root -pMDP

Select the database:
mysql> use votre_bdd;

Repair the table:
mysql> repair table nomdelatable;

Exit MySQL:
mysql> exit;
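
Alternatively, a MyISAM table can be repaired from the shell with myisamchk. A sketch, assuming the table uses the MyISAM engine, mysqld is stopped, and the default datadir /var/lib/mysql (adjust the path to your setup):

# myisamchk -r /var/lib/mysql/votre_bdd/nomdelatable.MYI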

MySQL – Migrate Users from Server to Server

14/06/2022 Comments off

Sometimes we need to migrate our databases to a new MySQL server. It is easy to move the databases, but without the users and their permissions, our new databases would be worthless. Below is a step-by-step guide to migrating MySQL users to a new MySQL server.

Step 1 – Create a Query List That We Can Use to Get Grants for All Users

I use these options so that I don't get any formatting characters that I would have to delete manually later.

  • -N – skip column names in the output
  • -p – prompt for the password so nobody can get it from the command-line history
  • -s – silent mode; suppresses formatting output that we don't want, like "|" and "-"

So, let's get a list of the users in a query that we can use to get the grants. Our query will be output into the file "myfile":

$ mysql -uroot -N -p -s > myfile
Enter password:
select Distinct CONCAT('show grants for `', user, '`@`', host, '`;') as query from mysql.user;
quit
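
The same file can be produced non-interactively with the -e option, skipping the interactive session entirely (a sketch of an equivalent one-liner; the backticks are escaped so the shell passes them through):

$ mysql -uroot -N -p -s -e "select Distinct CONCAT('show grants for \`', user, '\`@\`', host, '\`;') as query from mysql.user" > myfile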

If we want to see what our query file looks like, we can take a quick peek:

[root@classes-dev-mysql ~]# cat myfile
show grants for `user1`@`%`;
show grants for `user2`@`%`;
show grants for `user3`@`10.%`;
show grants for `user4`@`10.%`;
show grants for `jeff`@`10.%`;

Step 2 – Create the MySQL Grant File

We don't have quite what we want yet. We are looking for a query that will create all of our users on the new MySQL server. We need to run the query file that we just created, and it will give us the statements that we will use later to create the users. It will write our grant permission statements into a file named "grantfile".

[root@classes-dev-mysql ~]# mysql -uroot -N -p -s -r < myfile > grantfile
Enter password:

We can take a peek at what our grantfile contains:

$ cat grantfile
GRANT USAGE ON *.* TO 'user1'@'%' IDENTIFIED BY PASSWORD '5ea9af6g6t27032f'
GRANT ALL PRIVILEGES ON `database1`.* TO 'user1'@'%'
GRANT USAGE ON *.* TO 'user2'@'10.%' IDENTIFIED BY PASSWORD '2a123b405cbfe27d'
GRANT SELECT ON `database1`.`table1` TO 'user2'@'10.%'
GRANT ALL PRIVILEGES ON *.* TO 'user3'@'10.%' IDENTIFIED BY PASSWORD '753af2za1be637ea'
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'user3'@'10.%' IDENTIFIED BY PASSWORD '08ad9be605rfgcb'…
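
One caveat: as shown above, the dumped statements come out without trailing semicolons, and mysql will not execute them that way. Assuming one statement per line, a quick way to append them before importing:

$ sed -i 's/$/;/' grantfile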

Step 3 – Create Users and Grant MySQL Permissions on the New MySQL Machine

Now we are done working on the source machine.  We need to copy our file named “grantfile” over to the new machine.

$ scp grantfile myuser@mysql2.uptimemadeeasy.com:/home/myuser

Next, we log in to the destination, the new MySQL machine that we are building, and run the "grantfile" in MySQL to create our users on the new MySQL machine.

$ mysql -uroot -p < ./grantfile
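
To confirm that the accounts made it over, a quick check on the new server (a sketch; -N and -s just strip the formatting, as above):

$ mysql -uroot -p -N -s -e 'SELECT user, host FROM mysql.user;'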

That's it. As long as our databases are named the same on the new MySQL server, our users should be ready to use the copies of the databases on the new MySQL machine.

Source: Uptime Made Easy

Installing A High Availability Web Server Cluster On Ubuntu 12.10 Using HAProxy, HeartBeat And Lampp

13/05/2022 Comments off

What is the main objective of this entire topology?

Redundancy and Load Sharing! Imagine a scenario where your single web server is receiving millions and millions of HTTP requests per second, the CPU load is going insane, as well as the memory usage, when suddenly "crash!", the server dies without saying good-bye (probably because of some weird hardware outage that you certainly won't have time to debug). Well, this simple scheme might lead you into a brand new world of possibilities.

What is this going to solve?

Hardware Failures! We are going to have redundant hardware all over the place; if one piece goes down, another one will immediately be ready to take its place. Also, by using load-sharing schemes, this is going to solve our High Usage! issue: balancing the load among every server on our "farm" will reduce the number of HTTP requests per server (but you already figured that out, right?).
Let's set it up! Firstly, we're not going to use a domain scheme (let's keep it simple). Make sure your /etc/hosts file looks exactly like the listing below on every machine:
# vi /etc/hosts
192.168.0.241   haproxy
192.168.0.39 Node1
192.168.0.30 Node2
192.168.223.147 Node1
192.168.223.148 Node2
192.168.0.58 Web1
192.168.0.139 Web2
192.168.0.132 Mysql
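
A quick way to confirm that name resolution works from each machine (the host names come from the listing above):

# ping -c 1 haproxy
# ping -c 1 Node1
# ping -c 1 Web1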

Read more…


Installing a high availability web server cluster on Ubuntu 12.04 LTS using HAProxy, HeartBeat and Nginx

12/05/2022 Comments off

How to set up a high-availability cluster

Here are a few notes about how to set up a high-availability web server farm on Ubuntu 12.04 LTS using a whole load of awesome software (HAProxy, HeartBeat, Watchdog and Nginx).

The setup

In my setup I have five virtual machines, these are named and used for the following:-

haproxy1 – Our first proxy (master)/load-balancer (running HAProxy, HeartBeat and Watchdog) [IP address: 172.25.87.190]
haproxy2 – Our second proxy (failover)/load-balancer (running HAProxy, HeartBeat and Watchdog) [IP address: 172.25.87.191]
web1 – Our first web server node (running nginx) [IP address: 172.25.87.192]
web2 – Our second web server node (running nginx) [IP address: 172.25.87.193]
web3 – Our third web server node (running nginx) [IP address: 172.25.87.194]

The servers are connected in the following way:-

[Diagram: the setup]
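
To make the topology concrete, here is a minimal HAProxy configuration sketch for the load-balancer nodes, pointing at the three web nodes listed above. The bind address 172.25.87.189 is a hypothetical floating VIP shared by the two proxies through HeartBeat, and every option here is an assumption, not the actual config from this post:

# /etc/haproxy/haproxy.cfg (excerpt)
listen webfarm 172.25.87.189:80
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    server web1 172.25.87.192:80 check
    server web2 172.25.87.193:80 check
    server web3 172.25.87.194:80 check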

In my next post I will also explain how to configure the web servers to point to a backend shared storage cluster (using NFS) and a MySQL cluster server to have a truly highly available web hosting platform.

Read more…

MySQL Cluster Replication: Multi-Master and Circular Replication

02/05/2022 Comments off

Beginning with MySQL 5.1.18, it is possible to use MySQL Cluster in multi-master replication, including circular replication between a number of MySQL Clusters.

Prior to MySQL 5.1.18, multi-master replication including circular replication was not supported with MySQL Cluster replication. This was because log events created in a particular MySQL Cluster were wrongly tagged with the server ID of the master rather than the server ID of the originating server.

Circular replication example. In the next few paragraphs we consider the example of a replication setup involving three MySQL Clusters numbered 1, 2, and 3, in which Cluster 1 acts as the replication master for Cluster 2, Cluster 2 acts as the master for Cluster 3, and Cluster 3 acts as the master for Cluster 1. Each cluster has two SQL nodes, with SQL nodes A and B belonging to Cluster 1, SQL nodes C and D belonging to Cluster 2, and SQL nodes E and F belonging to Cluster 3.

Circular replication using these clusters is supported as long as the following conditions are met:

  • The SQL nodes on all masters and slaves are the same
  • All SQL nodes acting as replication masters and slaves are started using the --log-slave-updates option (see the configuration sketch below)
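
These conditions map directly onto each SQL node's my.cnf. A minimal sketch of the relevant excerpt, assuming MySQL Cluster 5.1.18 or later (the values are illustrative; server-id must be unique on every SQL node A through F):

[mysqld]
ndbcluster                      # this mysqld is an SQL node of the cluster
server-id         = 1           # unique per SQL node
log-bin           = mysql-bin   # binary logging, required to act as a master
log-slave-updates               # write replicated changes to the binary log too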

Read more…