Archive

Archives for 01/2016

MySQL Master / Slave Replication

29/01/2016 Comments off

Source: Uptime Made Easy

Master Slave MySQL Replication Summary

Master / Slave replication in MySQL is a great way to store an exact replica of your database on another machine in another location as part of a disaster recovery plan.  Before setting up Master / Slave replication there are a few things to remember.

  • Writes – Writes to the master database should make it to the slave, but writes to the slave will not make it to the master.  If you do write records to the slave database directly, be prepared to either recreate those records or back them up separately and restore them if the replication breaks.  Often the only way to get the databases replicating again is to back up the master and restore it over the top of the slave, deleting anything that was in the slave database before.
  • Broken Replication – Writes made directly to the slave can break the replication due to duplicate key rows, etc.  Always write to the master.
  • Reads – Reads are possible from either server.  Many organizations use replication to create a second database to read from, thereby taking the load of their SELECT statements and reports off the master server.

 

(Diagram: MySQL Master / Slave replication scheme)

Steps to Setup MySQL Master / Slave Replication

Prerequisites

We will be assuming that the following prerequisites are done prior to beginning the steps listed below:

  • MySQL has been installed on both the master and the slave servers
  • The slave server is able to communicate directly to the mysqld port (typically 3306) on the master server, meaning that there is no firewall, routing, NAT or other problems preventing communication.
  • You have an administrator MySQL user that can create users on both the master and the slave machines.
  • You have permissions to edit the /etc/my.cnf files on both machines and enough privileges to restart mysql.

That should be it!  Let’s begin setting it up.
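To give a feel for where the steps are headed, here is a rough sketch of what the end result usually looks like. The database paths, replication user and host names below are placeholders rather than values from this guide, and the SQL uses the classic (pre-8.0) MySQL replication syntax:

# /etc/my.cnf on the master (illustrative values)
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log

# /etc/my.cnf on the slave
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log

-- On the master: create a replication account ('repl' and its password are placeholders)
CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave: point it at the master (log file and position come from SHOW MASTER STATUS)
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='choose-a-strong-password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;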

Read more…

MySQL – Migrate Users from Server to Server

29/01/2016 Comments off

Sometimes we need to migrate our databases to a new MySQL server.  It is easy to move the databases themselves, but without the users and their permissions, our new databases would be worthless.  Below is a step-by-step guide to migrating MySQL users to a new MySQL server.

Step 1 – Create a Query List That We Can Use to Get Grants for All Users

We use the following options so that the output does not contain formatting characters that we would have to delete manually later.

  • -N – skip column names in the output
  • -p – prompt for the password so that it cannot be recovered from the command-line history
  • -s – silent mode: less of the formatting output that we don't want, such as "|" and "-"

So, let's get a list of the users in a query that we can use to get the grants.  The query's output will go into the file "myfile":

$ mysql -uroot -N -p -s > myfile
Enter password:
select Distinct CONCAT('show grants for `', user, '`@`', host, '`;') as query from mysql.user;
quit

If we want to see what our query file looks like, we can take a quick peek:

[root@classes-dev-mysql ~]# cat myfile
show grants for `user1`@`%`;
show grants for `user2`@`%`;
show grants for `user3`@`10.%`;
show grants for `user4`@`10.%`;
show grants for `jeff`@`10.%`;

Step 2 – Create the MySQL Grant File

We don't quite have what we need yet.  We are looking for the statements that will create all of our users on the new MySQL server.  Running the queries we just generated produces those statements, writing the GRANT statements to a file named "grantfile":

[root@classes-dev-mysql ~]# mysql -uroot -N -p -s -r < myfile > grantfile
Enter password:

We can take a peek at what our grantfile contains:

$ cat grantfile
GRANT USAGE ON *.* TO 'user1'@'%' IDENTIFIED BY PASSWORD '5ea9af6g6t27032f'
GRANT ALL PRIVILEGES ON `database1`.* TO 'user1'@'%'
GRANT USAGE ON *.* TO 'user2'@'10.%' IDENTIFIED BY PASSWORD '2a123b405cbfe27d'
GRANT SELECT ON `database1`.`table1` TO 'user2'@'10.%'
GRANT ALL PRIVILEGES ON *.* TO 'user3'@'10.%' IDENTIFIED BY PASSWORD '753af2za1be637ea'
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'user3'@'10.%' IDENTIFIED BY PASSWORD '08ad9be605rfgcb'
…

Step 3 – Create Users and Grant MySQL Permissions on the New MySQL Machine

Now we are done working on the source machine.  We need to copy our file named “grantfile” over to the new machine.

$ scp grantfile myuser@mysql2.uptimemadeeasy.com:/home/myuser

Next, we login to the destination or the new MySQL machine that we are building and run the “grantfile” in MySQL to create our users on the new MySQL machine.

$ mysql -uroot -p < ./grantfile
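One caveat: depending on the MySQL version, SHOW GRANTS output captured this way has no trailing semicolons (as the sample grantfile above suggests), so the mysql client may complain when it reads the file as one long statement. If that happens, one workaround is to append a terminator to every line first:

$ sed 's/$/;/' grantfile > grantfile.sql
$ mysql -uroot -p < ./grantfile.sql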

That's it.  As long as our databases keep the same names on the new MySQL server, our users should be ready to use the copies of the databases on the new machine.

Source: Uptime Made Easy

How to secure an Ubuntu 12.04 LTS server – Part 2 The GUI installer script

25/01/2016 Comments off

Source: The Fan Club

This guide is based on various community forum posts and webpages. Special thanks to all. All comments and improvements are very welcome as this is purely a personal experimental project at this point and must be considered a work in progress.

The Ubuntu Server Secure script:

  • The Ubuntu Server Secure script is a set of GUI security administration tools to harden and audit the security of an Ubuntu 12.04 LTS server, using a GUI shell script to install and configure the various security features found in How to secure an Ubuntu 12.04 LTS server – Part 1 The Basics.
  • This script was done as an experiment in using Zenity to create an interactive Gnome Gtk+ GUI installer.
  • Zenity is a great tool for quickly creating a simple yet powerful GUI on top of a bash-style shell script.

Requirements:

  • Ubuntu 12.04 LTS server with a standard LAMP stack installed.
  • Unity or Gnome Desktop installed.
  • Zenity installed (Zenity is installed by default in Ubuntu 12.04 LTS Desktop).
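To give a flavour of what Zenity makes possible, a toy prompt along these lines (not taken from the actual Ubuntu Server Secure script) needs only a few lines of shell:

#!/bin/bash
# Ask before installing ufw; the question text and package choice are purely illustrative
if zenity --question --title="Ubuntu Server Secure" --text="Install and enable the ufw firewall?"; then
    sudo apt-get install -y ufw && sudo ufw enable
fi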

Read more…

How to secure an Ubuntu 12.04 LTS server – Part 1 The Basics

25/01/2016 Comments off

Source: The Fan Club

This guide is based on various community forum posts and webpages. Special thanks to all. All comments and improvements are very welcome as this is purely a personal experimental project at this point and must be considered a work in progress.

This guide is intended as a relatively easy step-by-step guide to:

Harden the security on an Ubuntu 12.04 LTS server by installing and configuring the following:

  1. Install and configure Firewall – ufw (see the short command sketch after the requirements below)
  2. Secure shared memory – fstab
  3. SSH – Key based login, disable root login and change port
  4. Apache SSL – Disable SSL v3 support
  5. Protect su by limiting access only to admin group
  6. Harden network with sysctl settings
  7. Disable Open DNS Recursion and Remove Version Info  – Bind9 DNS
  8. Prevent IP Spoofing
  9. Harden PHP for security
  10. Restrict Apache Information Leakage
  11. Install and configure Apache application firewall – ModSecurity
  12. Protect from DDOS (Denial of Service) attacks with ModEvasive
  13. Scan logs and ban suspicious hosts – DenyHosts and Fail2Ban
  14. Intrusion Detection – PSAD
  15. Check for RootKits – RKHunter and CHKRootKit
  16. Scan open Ports – Nmap
  17. Analyse system LOG files – LogWatch
  18. SELinux – Apparmor
  19. Audit your system security – Tiger

Requirements:

  • Ubuntu 12.04 LTS or later server with a standard LAMP stack installed.
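As a taste of step 1 above, the firewall part usually boils down to a handful of ufw commands (a sketch only – the full guide adds more specific rules):

sudo apt-get install ufw
sudo ufw allow ssh        # keep your SSH session reachable before enabling the firewall
sudo ufw allow 80/tcp     # web traffic for the LAMP stack
sudo ufw enable
sudo ufw status verbose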

Read more…

Debian / Ubuntu / CentOS – Block DDoS attacks with No More DDoS (formerly: DDoS Deflate)

22/01/2016 Comments off

If you have arrived on this page, it is because your server has already been hit by a DDoS attack, or because you want to protect it before such an attack happens.
In this tutorial, we will install "No More DDoS" (which replaces DDoS Deflate, no longer maintained by its author), a script that lets you easily protect yourself against small DDoS attacks.

This script is available in two versions:

  1. the Debian version, compatible with: Debian 6/7/8, Ubuntu Server 13.10, Ubuntu Server 14.04, Linux Mint 17 and distributions based on Debian.
  2. the CentOS version, compatible with: CentOS 6/7, RHEL 6/7 (coming in version 2.0), Fedora 20 (coming in version 2.0) and distributions based on CentOS.

Contents:

  1. Install No More DDoS
  2. Configure No More DDoS
  3. No More DDoS GUI
  4. Update No More DDoS
  5. Uninstall No More DDoS

1. Install No More DDoS

To install "No More DDoS" for Debian, use the following command:

wget -O- https://raw.githubusercontent.com/stylersnico/nmd/master/debian/install.sh | sh

To install "No More DDoS" for CentOS 7, use the following command:

wget -O- https://raw.githubusercontent.com/stylersnico/nmd/master/centos/install.sh | sh

2. Configure No More DDoS

To configure No More DDoS, edit the /usr/local/nmd/conf.d/agent.conf file:

vim  /usr/local/nmd/conf.d/agent.conf

In this file, you can edit the following settings:

  • FREQ: interval between two runs of the script. By default, the script runs once per minute.
  • NO_OF_CONNECTIONS: the maximum number of established connections allowed per IP address. If an IP address has more than 500 connections established on your server, that IP will be banned.
  • APF_BAN: by default, the script blocks IP addresses in the firewall with iptables (APF_BAN=0). To use APF instead, specify 1 (APF_BAN=1).
  • EMAIL_TO: if you wish to be notified when a DDoS attack is blocked, enter your email address on this line. If you leave it empty, no e-mail will be sent.
  • BAN_PERIOD: the period during which an IP address stays blocked. Default: 3600 seconds (1 hour).
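As an illustration, the relevant lines of agent.conf might look something like the following (a sketch based on the defaults described above – the exact layout of the file in your installed version may differ, and the e-mail address is a placeholder):

FREQ=1                        # run the check every minute
NO_OF_CONNECTIONS=500         # ban an IP once it holds more than 500 established connections
APF_BAN=0                     # 0 = block with iptables, 1 = block with APF
EMAIL_TO="admin@example.com"  # leave empty to disable e-mail notifications
BAN_PERIOD=3600               # keep the ban for one hour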

Read more…

Categories: Réseau, Sécurité

Protect against DDoS attacks

22/01/2016 Comments off

Using ModEvasive against DDoS attacks

The first thing to do is to install ModEvasive. All details are provided at http://hardenubuntu.com/hardening/apache/modsecurity/.

Configuring UFW

The following instructions can be added to the UFW rules. Edit the /etc/ufw/before.rules:

sudo vi /etc/ufw/before.rules

Add these lines after *filter near the beginning of the file:

:ufw-http - [0:0]
:ufw-http-logdrop - [0:0]

Add these lines near the end of the file, before the COMMIT:

### Start HTTP ###

# Enter rule
-A ufw-before-input -p tcp --dport 80 -j ufw-http
-A ufw-before-input -p tcp --dport 443 -j ufw-http

# Limit connections per Class C
-A ufw-http -p tcp --syn -m connlimit --connlimit-above 50 --connlimit-mask 24 -j ufw-http-logdrop

# Limit connections per IP
-A ufw-http -m state --state NEW -m recent --name conn_per_ip --set
-A ufw-http -m state --state NEW -m recent --name conn_per_ip --update --seconds 10 --hitcount 20 -j ufw-http-logdrop

# Limit packets per IP
-A ufw-http -m recent --name pack_per_ip --set
-A ufw-http -m recent --name pack_per_ip --update --seconds 1 --hitcount 20 -j ufw-http-logdrop

# Finally accept
-A ufw-http -j ACCEPT

# Log
-A ufw-http-logdrop -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW HTTP DROP] "
-A ufw-http-logdrop -j DROP

### End HTTP ###
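After saving before.rules, UFW needs to re-read its rule files before the new chains take effect. Reloading is enough; the commands below are standard UFW usage rather than part of the original excerpt:

sudo ufw reload          # re-reads /etc/ufw/before.rules and the other rule files
sudo ufw status verbose  # confirm the firewall is still active afterwards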

Read more…

Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

19/01/2016 Comments off

Source: pc-freak.net

On many busy servers, you might see the message

nf_conntrack: table full, dropping packet

appear repeatedly in /var/log/syslog or the dmesg kernel log:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen OpenVZ / VPS (Virtual Private Servers)
2. ISPs – Internet Providers with heavy traffic NAT network routers

I. What does the "nf_conntrack: table full, dropping packet" error message mean?

In short, this message appears because the kernel's maximum number of tracked connections (nf_conntrack_max) has been reached.
The common reason for that is heavy traffic passing through the server, or very often a DoS or DDoS (Distributed Denial of Service) attack. Sometimes the error is the result of poor server planning (incorrect assumptions about the expected traffic load) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– An alternative way to check the current kernel values for nf_conntrack:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current number of active nf_conntrack connections

To see how many connection tracking entries are currently open on the system:

linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742

The connections shown are assigned dynamically for each new successful TCP / IP NAT-ted connection. By the way, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers that are hitting the nf_conntrack: table full, dropping packet error, issuing lsmod shows extra nf_conntrack-related modules loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables
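The excerpt stops before the fix, but the usual remedy is to raise nf_conntrack_max (and persist the change). The values below are only an illustration – size them to your real traffic and available memory:

# Raise the live limit (example value)
linux:~# /sbin/sysctl -w net.netfilter.nf_conntrack_max=262144
# Optionally enlarge the conntrack hash table to match (commonly max/8)
linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
# Persist across reboots
linux:~# echo 'net.netfilter.nf_conntrack_max = 262144' >> /etc/sysctl.conf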

Read more…

Blocking FTP Hacking Attempts

18/01/2016 Comments off

1. Sensible first steps

Disable FTP

Firstly, do you really need to be running an FTP server? If not, turn it off and block the relevant ports. For example, using iptables:

/sbin/iptables -A INPUT -p tcp --match multiport --dports ftp,ftp-data -j DROP

In any case, you almost certainly want to disable anonymous FTP connections. For one thing, Googlebot has a nasty habit of exploring anonymous FTP, which could result in the wrong files being exposed.

Limit access to FTP

If you do need to allow FTP, can you restrict access to specific IP addresses within your local network or a client's network? If so, you should set up a whitelist.

This can be enabled in /etc/proftpd/proftpd.conf as shown below – including one or more Allow clauses to identify from where you want to allow FTP access:

<Limit LOGIN>
# single ip address example
Allow from 192.168.0.1

# multiple ip addresses example
Allow from 192.168.0.1 10.30.124.6

# subnet example
Allow from 192.168.0.0/16

# hostname example
Allow from example.net

DenyAll
</Limit>

The final DenyAll prevents the rest of the world from being able to connect. If you're running FTP via inetd then the changes take effect immediately. Otherwise you will need to restart your FTP server.

Make logins harder to guess

Most FTP hacking attempts are automated and rely on guessing both the username and the password. For example, if your domain name is www.example.net, the hacking script will try "example", "examplenet", "admin@example.net", "webmaster@example.net" and so on. Generic usernames including "admin", "www", "data" and "test" are also tried.

If the script is unable to guess a valid username then it will not be able to try any passwords. You should ensure your FTP usernames are not predictable in any way from the domain name – by appending some random letters or digits for example.

Hackers are also equipped with dictionaries and large databases of exposed username/password combinations from previously exploited servers. So make sure your passwords, not just for FTP, are long and complicated and don’t match common patterns.

2. Dynamically blocking login attempts

The Fail2Ban program can be used to detect failed login attempts and automatically block the source ip address for a period of time. With Fail2Ban installed, we can enable this as follows.

Enable the jail in /etc/fail2ban/jail.conf:

[proftpd]

enabled = true
port = ftp,ftp-data,ftps,ftps-data
filter = proftpd
logpath = /var/log/proftpd/proftpd.log
maxretry = 5
bantime = 3600

Define the regular expression to look for in /etc/fail2ban/filter.d/proftpd.conf:

failregex = \(\S+\[<HOST>\]\)[: -]+ USER \S+: no such user found from \S+ \[\S+\] to \S+:\S+ *$
            \(\S+\[<HOST>\]\)[: -]+ USER \S+ \(Login failed\): .*$
            \(\S+\[<HOST>\]\)[: -]+ SECURITY VIOLATION: \S+ login attempted\. *$
            \(\S+\[<HOST>\]\)[: -]+ Maximum login attempts \(\d+\) exceeded *$

With the above configuration, any IP address responsible for 5 or more failed FTP login attempts – any logfile entries matching the above regular expressions – will be 'jailed' for a period of 1 hour. You can change these values to require fewer failed login attempts or to make the jailing last longer.
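After changing jail.conf and the filter, Fail2Ban has to be restarted to pick up the new jail, and fail2ban-client can then show what is being banned. The commands below assume a SysV-style init as on 2016-era systems; on systemd hosts use systemctl restart fail2ban instead:

sudo service fail2ban restart        # reload the proftpd jail and filter
sudo fail2ban-client status proftpd  # list failure counts and currently banned IP addresses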

Read more…

How To Turn Off Post Revision In WordPress 2.6

18/01/2016 Comments off

One of the irritating features for me in WordPress 2.6 is post revisions. I am the only author of my blog and hence this feature is useless to me.

Just in case you are wondering how post revisions work: whenever a post is edited, a new row is created in the wp_posts table. Hence, if your posts or pages get edited 10 times, you will have 10 extra rows in the wp_posts table.

In no time your wp_posts table will be filled up and the post ID will be huge.

To turn off this feature, add this following code to wp-config.php:

define('WP_POST_REVISIONS', false);
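If you would rather keep a short history than disable revisions completely, the same constant also accepts an integer – this is standard WordPress behaviour rather than part of the original tip:

define('WP_POST_REVISIONS', 3); // keep at most 3 revisions per post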

You can also delete all post revisions by running this query in phpMyAdmin:

DELETE a,b,c  
FROM wp_posts a  
LEFT JOIN wp_term_relationships b ON (a.ID = b.object_id)  
LEFT JOIN wp_postmeta c ON (a.ID = c.post_id)  
WHERE a.post_type = 'revision'  

Be sure to backup your database first before performing any queries in phpMyAdmin.

*UPDATE* Auto-saves do not create a revision of the post.

*UPDATE 2* Updated SQL query from Andrei Neculau as the previous query does not delete from wp_postmeta and wp_term_relationships tables.

*UPDATE 3* There is a proper way of cleaning up Post Revisions as mentioned by kitchin in Deleting Post Revisions: do NOT use the a,b,c JOIN code you see everywhere. I like his method more than the SQL query above.

Source: Lester Chan’s WordPress Plugins

Categories: Logiciel

SQL Queries to Save Your Life with WordPress

18/01/2016 Comments off

Have you moved your WordPress blog to a new server and a new domain, and now nothing works?

Do you want to clean up your database?

Well, here are a few examples of SQL queries that will help you do all of that quickly and painlessly.

Backup:

Before starting, the most important thing to do is to back up your database.

Several options are available for this.

You can either use a WordPress plugin such as WP-DB-Backup or WP-DBManager, or do it manually from the phpMyAdmin interface by following these steps:

  1. Log in to phpMyAdmin.
  2. Select your WordPress database.
  3. Click "Export" in the menu at the top of the page.
  4. Select the tables to back up.
  5. Select SQL to get a .sql output file.
  6. Check the "Transmettre" (save as file) box.
  7. You can choose a compression type to reduce the size of your file.
  8. Click "Exécuter" and the file should download.

(Screenshot: phpMyAdmin export page)
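If you prefer the command line, the same backup can be made with mysqldump (the database name, user and output file below are placeholders):

$ mysqldump -u wp_user -p wordpress_db > wordpress-backup-$(date +%F).sql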

Read more…