Archive

Articles tagged ‘firewall’

Iptables: Allow incoming MySQL requests on port 3306

24/03/2017 Comments off

MySQL is a popular database for web applications and acts as the database component of the LAMP, MAMP, and WAMP platforms. Its popularity is closely tied to that of PHP, with which it is often combined. MySQL is an open-source database server and by default it listens on TCP port 3306. In this tutorial you will learn how to open TCP port 3306 using the iptables command-line tool on a Linux operating system.

Task: Open port 3306

In most cases, the following simple rule opens TCP port 3306:

iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

The following iptables rules allow incoming client requests (open port 3306) for the server IP address 202.54.1.20. Add the rules to your iptables shell script:

iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 202.54.1.20 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 202.54.1.20 --sport 3306 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

However, in real life you do not want to give access to everyone. For example, in a web hosting company you may need to give access to the MySQL database server from the web server only. The following example allows access to the MySQL database server (202.54.1.20) from the Apache web server (202.54.1.50) only:

iptables -A INPUT -p tcp -s 202.54.1.50 --sport 1024:65535 -d 202.54.1.20 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 202.54.1.20 --sport 3306 -d 202.54.1.50 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

Please note that if you follow the above setup, you need to tell all your hosting customers to use 202.54.1.20 as the MySQL host in their PHP/Perl code. A better approach is to create the following entry in the /etc/hosts file, or to use a fully qualified domain name (create a DNS entry) such as mysql.hostingservicecompany.com pointing to the 202.54.1.20 IP:
202.54.1.20 mysql

In short, the MySQL database connection code for PHP hosted on our separate web server would look as follows:

// ** MySQL settings ** //
define('DB_NAME', 'YOUR-DATABASE-NAME');     // The name of the database
define('DB_USER', 'YOUR-USER-NAME');     // Your MySQL username
define('DB_PASSWORD', 'YOUR-PASSWORD');  // ...and password
define('DB_HOST', 'mysql');       // mysql i.e. 202.54.1.20
// ** rest of PHP code ** //
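
Before changing any application code, you can verify from the web server that the firewall rules and the host entry actually work. This is only a quick sanity check; the user name is a placeholder and the mysql client must be installed on the web server:

# Run on the web server (202.54.1.50)
# Check that TCP port 3306 on the MySQL host is reachable
nc -z -w 5 mysql 3306 && echo "3306 open" || echo "3306 blocked"

# Check that you can actually authenticate against the server
mysql -h mysql -u YOUR-USER-NAME -p -e "SELECT VERSION();"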

Read more…

Make the configuration of iptables persistent (Debian)

22/03/2017 Comments off

Objective

To make the configuration of iptables persistent on a Debian-based system

Background

The iptables and ip6tables commands can be used to instruct Linux to perform functions such as firewalling and network address translation; however, the configuration they create is non-persistent, so it is lost whenever the machine is rebooted. For most practical applications this is not the desired behaviour, so some means is needed to reinstate the configuration at boot time.

For security, the iptables configuration should be applied at an early stage of the bootstrap process: preferably before any network interfaces are brought up, and certainly before any network services are started or routing is enabled. If this is not done then there will be a window of vulnerability during which the machine is remotely accessible but not firewalled.

Scenario

Suppose you have a machine that you wish to protect using a firewall. You have written iptables and ip6tables rulesets, and wish to install them so that they will remain active if the machine is rebooted.
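
Before looking at dedicated tools, the simplest approach on Debian is to dump the rulesets to files and restore them from an if-pre-up.d hook, so they are applied before the interfaces are brought up. This is only a minimal sketch, assuming conventional file locations under /etc:

# Save the rulesets you have written (as root)
iptables-save  > /etc/iptables.rules
ip6tables-save > /etc/ip6tables.rules

# /etc/network/if-pre-up.d/iptables -- runs before interfaces come up
#!/bin/sh
iptables-restore  < /etc/iptables.rules
ip6tables-restore < /etc/ip6tables.rules
exit 0

# Make the hook executable
chmod +x /etc/network/if-pre-up.d/iptables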

Read more…

GeoIP for iptables

18/03/2017 Comments off

Source: how-to.ovh

Tired of exotic countries trying to break into your server, polluting your logs and keeping fail2ban busy?

Here is a solution for blocking the countries you have no dealings with. Written for Debian, but surely adaptable to other distributions.

# Install GeoIP support for iptables

apt-get install dkms xtables-addons-dkms xtables-addons-common geoip-database libgeoip1 libtext-csv-xs-perl unzip

# Check that everything is OK

dkms status xtables-addons

# Create the directory

mkdir /usr/share/xt_geoip

# Move into it

cd /usr/share/xt_geoip/

# Download the archive

wget http://man.sethuper.com/wp-content/uploads/2013/06/geoip-dl-build.tar.gz

# Extract it

tar xvf geoip-dl-build.tar.gz

# Run the download script

./xt_geoip_dl

# If this gives an error message, do the following

/usr/bin/perl -MCPAN -e'install Text::CSV_XS'

# Run the build script

./xt_geoip_build -D . *.csv

# Remove the files we no longer need

rm -rf geoip-dl-build.tar.gz

# Test iptables by blocking China and Russia

iptables -A INPUT -m geoip --src-cc CN,RU -j DROP

# Check the result

iptables -L -v

# which will show this line, indicating that those countries are blocked

DROP all -- anywhere anywhere -m geoip --source-country CN,RU

To block port 22 for these countries:

iptables -A INPUT -p tcp --dport 22 -m geoip --src-cc CN,RU -j DROP
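
The match can also be inverted, which is handy when you only want to allow a service from a short list of countries rather than blocking a long one. A minimal sketch, assuming the same xt_geoip setup and using FR purely as an example country code:

# Drop SSH from every country except France
iptables -A INPUT -p tcp --dport 22 -m geoip ! --src-cc FR -j DROP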

Block entire countries on Ubuntu server with Xtables and GeoIP

18/03/2017 Comments off

Source: jeshurun.ca

Anyone who has administered even a moderately high traffic server will have noticed that certain unwelcome traffic such as port scans and probes tend to come from IP addresses belonging to a certain group of countries. If your application or service does not cater to users in these countries, it might be a safe bet to block these countries off entirely.

This is especially true for email servers. The average email server, based on anecdotal evidence from servers for around 20 domains, rejects about 30% of incoming email every day as spam. Some servers on some days reject as much as 97% of incoming email as spam. Most of these messages originate in a certain subset of countries. That is a lot of wasted CPU cycles being expended on scanning these undesired emails for spam and viruses. Although tools such as amavisd and SpamAssassin do a good job of keeping the vast majority of spam out of users’ inboxes, when the rare well-crafted and targeted phishing email does get through, it wreaks havoc in the enterprise.

Read more…

How to save iptables rules?

18/03/2017 Comments off
iptables-save

Saving iptables rules for reboot

On a server, iptables rules are not reloaded automatically at reboot. You need to reload them using an executable shell script or a dedicated utility that loads them at the same time as the firewall itself, i.e. with the kernel.

Depending on the version of Linux you use, you can choose between different methods:

sudo su
iptables-save > /etc/iptables.rules

In /etc/network/if-pre-up.d/iptables, put:

#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0

Then, in /etc/network/if-post-down.d/iptables, put:

#!/bin/sh
iptables-save -c > /etc/iptables.rules
if [ -f /etc/iptables.rules ]; then
    iptables-restore < /etc/iptables.rules
fi
exit 0

Then make the scripts executable:

sudo chmod +x /etc/network/if-post-down.d/iptables
sudo chmod +x /etc/network/if-pre-up.d/iptables

Another option is to install iptables-persistent:

sudo apt-get install iptables-persistent

After it’s installed, you can save/reload iptables rules anytime:

    sudo /etc/init.d/iptables-persistent save 
    sudo /etc/init.d/iptables-persistent reload

Or if you use Ubuntu server 16.04, things are simpler:

The installation as described above works without a problem, but the two commands for saving and reloading above do not seem to work with a 16.04 server. The following commands work with that version:

    sudo netfilter-persistent save
    sudo netfilter-persistent reload

Easy Ubuntu 16.04 Server Firewall

23/02/2017 Comments off

If you read our previous article Easy Ubuntu Server Firewall, then you may have noted that on Ubuntu 16.04 the described method no longer works. This is due to systemd. In the article below we will walk through creating a persistent IPTables based firewall on Ubuntu 16.04 LTS. First we need to install some required software packages. As seen in the command below, install iptables-persistent. Next we will make netfilter-persistent run at boot. This is the most important step as it will ensure your rules are reloaded at boot time.

# Install IPTables Persistent Package
apt-get install -y iptables-persistent
# Save current rules so netfilter-persistent reloads them at boot
invoke-rc.d netfilter-persistent save
# Stop netfilter-persistent Service
service netfilter-persistent stop

Once the packages above are installed and the service is stopped, you will have a new directory at /etc/iptables/. This directory holds the iptables rules that will be reloaded at boot time: IPv4 rules are loaded from rules.v4 and IPv6 rules from rules.v6. For the purposes of this article we will focus on the IPv4 rules. Next, copy the rules below into your rules.v4 file; of course, they will need to be modified to fit your environment.

# Generated by iptables-save v1.3.3 on Wed Apr 9 10:51:08 2008
# Flush out any rules that are already in there
*filter
:INPUT ACCEPT [146:11332]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [104:9831]
 
# Allow internal loopback connections
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
 
# Allow pinging
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
 
# Allow any outbound data, and any inbound data related to a connection that is already in use
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
 
# =========BEGIN SERVER SPECIFIC PORT OPEN RULES=========
# Allow SCP/SSH Access from Green & Blue Subnet
-A INPUT -s 172.16.12.0/255.255.255.0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 10.10.12.0/255.255.255.0 -p tcp -m tcp --dport 22 -j ACCEPT
 
# Allow HTTP Access from Red Subnet/Internet
-A INPUT -p tcp -m state --state NEW,ESTABLISHED --dport 80 -j ACCEPT
 
# Allow HTTPS Access from Red Subnet/Internet
-A INPUT -p tcp -m state --state NEW,ESTABLISHED --dport 443 -j ACCEPT
 
# Allow MySQL Access from Red Subnet/Internet
-A INPUT -p tcp -m state --state NEW,ESTABLISHED --dport 3306 -j ACCEPT
 
# Allow FTP Access from Red Subnet/Internet
-A INPUT -p tcp -m state --state NEW,ESTABLISHED --dport 21 -j ACCEPT
-A INPUT -p tcp -m state --state NEW,ESTABLISHED --dport 58000:58010 -j ACCEPT
# =========END SERVER SPECIFIC PORT OPEN RULES=========
 
# Drop everything that hasn't been picked up by one of the rules above
-A INPUT -j DROP
-A FORWARD -j DROP
-A OUTPUT -j DROP
 
COMMIT
# Completed on Wed Apr 9 10:51:08 2008
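
As an alternative to pasting a hand-written file, if a working ruleset is already loaded in the kernel you can seed these files from it and then edit them. A minimal sketch:

# Dump the live rules into the files netfilter-persistent reads at boot
iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6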

Lastly, in order for our new rules to take effect, we simply need to start the netfilter-persistent service as seen below. That’s it, you now have a fully functional IPTables based firewall.

# Start netfilter-persistent Service
service netfilter-persistent start

# Check if IPTables were applied
iptables -L

A Deep Dive into Iptables and Netfilter Architecture

09/06/2016 Comments off

Introduction

Firewalls are an important tool that can be configured to protect your servers and infrastructure. In the Linux ecosystem, iptables is a widely used firewall tool that interfaces with the kernel’s netfilter packet filtering framework. For users and administrators who don’t understand the architecture of these systems, creating reliable firewall policies can be daunting, not only due to challenging syntax, but also because of the number of interrelated parts present in the framework.

In this guide, we will dive into the iptables architecture with the aim of making it more comprehensible for users who need to build their own firewall policies. We will discuss how iptables interacts with netfilter and how the various components fit together to provide a comprehensive filtering and mangling system.

 

What Are IPTables and Netfilter?

The basic firewall software most commonly used in Linux is called iptables. The iptables firewall works by interacting with the packet filtering hooks in the Linux kernel’s networking stack. These kernel hooks are known as the netfilter framework.

Every packet that enters the networking system (incoming or outgoing) will trigger these hooks as it progresses through the stack, allowing programs that register with these hooks to interact with the traffic at key points. The kernel modules associated with iptables register at these hooks in order to ensure that the traffic conforms to the conditions laid out by the firewall rules.

 

Netfilter Hooks

There are five netfilter hooks that programs can register with. As packets progress through the stack, they will trigger the kernel modules that have registered with these hooks. Which hooks a packet triggers depends on whether the packet is incoming or outgoing, the packet’s destination, and whether the packet was dropped or rejected at a previous point.

The following hooks represent various well-defined points in the networking stack:

  • NF_IP_PRE_ROUTING: This hook will be triggered by any incoming traffic very soon after entering the network stack. This hook is processed before any routing decisions have been made regarding where to send the packet.
  • NF_IP_LOCAL_IN: This hook is triggered after an incoming packet has been routed if the packet is destined for the local system.
  • NF_IP_FORWARD: This hook is triggered after an incoming packet has been routed if the packet is to be forwarded to another host.
  • NF_IP_LOCAL_OUT: This hook is triggered by any locally created outbound traffic as soon as it hits the network stack.
  • NF_IP_POST_ROUTING: This hook is triggered by any outgoing or forwarded traffic after routing has taken place and just before being put out on the wire.

Kernel modules that wish to register at these hooks must provide a priority number to help determine the order in which they will be called when the hook is triggered. This provides the means for multiple modules (or multiple instances of the same module) to be connected to each of the hooks with deterministic ordering. Each module will be called in turn and will return a decision to the netfilter framework after processing that indicates what should be done with the packet.

 

IPTables Tables and Chains

The iptables firewall uses tables to organize its rules. These tables classify rules according to the type of decisions they are used to make. For instance, if a rule deals with network address translation, it will be put into the nat table. If the rule is used to decide whether to allow the packet to continue to its destination, it would probably be added to the filter table.
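
As a concrete illustration of that split (the ports below are arbitrary examples), a port-redirection decision goes into the nat table while an accept decision goes into the filter table:

# A NAT decision: rewrite the destination port before routing
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-ports 80

# A filtering decision: allow the traffic to reach the local web server
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT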

Within each iptables table, rules are further organized within separate "chains". While tables are defined by the general aim of the rules they hold, the built-in chains represent the netfilter hooks which trigger them. Chains basically determine when rules will be evaluated.

As you can see, the names of the built-in chains mirror the names of the netfilter hooks they are associated with:

  • PREROUTING: Triggered by the NF_IP_PRE_ROUTING hook.
  • INPUT: Triggered by the NF_IP_LOCAL_IN hook.
  • FORWARD: Triggered by the NF_IP_FORWARD hook.
  • OUTPUT: Triggered by the NF_IP_LOCAL_OUT hook.
  • POSTROUTING: Triggered by the NF_IP_POST_ROUTING hook.

Chains allow the administrator to control where in a packet’s delivery path a rule will be evaluated. Since each table has multiple chains, a table’s influence can be exerted at multiple points in processing. Because certain types of decisions only make sense at certain points in the network stack, not every table will have a chain registered with each kernel hook.

There are only five netfilter kernel hooks, so chains from multiple tables are registered at each of the hooks. For instance, three tables have PREROUTING chains. When these chains register at the associated NF_IP_PRE_ROUTING hook, they specify a priority that dictates what order each table’s PREROUTING chain is called. Each of the rules inside the highest priority PREROUTING chain is evaluated sequentially before moving onto the next PREROUTING chain. We will take a look at the specific order of each chain in a moment.
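
You can see this overlap on any machine with iptables installed: the raw, mangle, and nat tables each expose their own PREROUTING chain, all registered at the same NF_IP_PRE_ROUTING hook (the output will of course vary per system):

# Each table's PREROUTING chain hooks into NF_IP_PRE_ROUTING
iptables -t raw -L PREROUTING -n
iptables -t mangle -L PREROUTING -n
iptables -t nat -L PREROUTING -n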
Read more…

What is a Distributed Firewall?

01/06/2016 Comments off

In the post “What is Network Virtualization?” I described a model where the application’s complete L2-L7 virtual network is decoupled from hardware and moved into a software abstraction layer for the express purpose of automation and business agility. In this post I’ll focus on network security, and describe an imminent firewall form factor enabled by Network Virtualization — the Distributed Firewall.

ALL YOUR PACKET ARE BELONG TO US

If InfoSec ruled the world … well, OK, maybe not the world … if InfoSec ruled the data center network design, and if money was no object, we would probably have something like this. Every server in the data center directly connected to its own port on one massive firewall. Every packet sent from every server would be inspected against a stateful security policy before going anywhere. And every packet received by every server would pass one final policy check before hitting the server’s NIC receive buffer. The firewall wouldn’t care about the IP address of the servers, for the simple reason that it’s directly connected to every server. E.g. “The server on this port can talk to the server on that port, on TCP port X”. And if that wasn’t good enough, the firewall knows everything about the servers connected to it, and can create rules around a rich set of semantics. All of this with no performance penalty. That would be awesome, right?

Let’s pretend money was not the issue. How would you design this massive omnipresent data center firewall? I can think of three ways off hand.

  1. You design a monstrous power sucking stateful firewall chassis with thousands of line-rate ports. At this point it’s time to route a ghastly mess of cables from every server to this centralized mega firewall core chassis – but that’s somebody else’s problem. Oh, and don’t forget you’ll need two of those bad boys for “redundancy”. Your monster firewall is pretty freaking awesome at security, but only so-so at basic L2 and L3 networking. But so what — the network team can learn to like it or find a new job. And if you run out of ports … no worries; just wait another few years for a bigger chassis and do the rip/replace routine.
  2. You design a line rate stateful firewall ToR switch. Rip out the network team’s favorite ToR and put this one in its place. Tell them to stop throwing a fit and just deal with it. You’ll have hundreds of these ToR firewalls to manage and configure consistently. No problem … just let the network team re-apply for their jobs as firewall engineers.

Go ahead and pinch yourself now. This is nothing but a fantasy nightmare.

The interests of security often poorly translate into networking. Comprehensive security ~= Compromised networking.

What about design #3? More on that in a minute. (Hint: title of the post)

In the real world, rest assured we do have firewalls to provide some security. But this security is not ubiquitous, nor is it assured. Instead, we have firewalls (physical or virtual) hanging off the network somewhere catching steered packets – and we can only hope the network was configured correctly to steer the right traffic to the right policy.

In this post we’ll briefly review the physical and virtual firewall, followed by a discussion on the Distributed Firewall.

Read more…

Tutorial: Using VMWare ESXi and PFsense as a network firewall/router

12/05/2016 Comments off


Using VMWare ESXi and PFsense as a network firewall/router

In most networks, you will have dedicated hardware to function as your “edge” (firewall/router). This is typically for the best, but there are always cases where you can’t put out that dedicated hardware. Sometimes it’s for cost reasons and sometimes it’s for complexity. In my particular case, I was installing an ESXi server in a datacenter and only had 2 amps of power to work with, of which my server took up ~1.8 amps at peak load. So cost came into play and we simply couldn’t afford to put in dedicated hardware that could push enough bits. In such cases, it is possible to set up ESXi on the network edge, in a reasonably secure fashion, with PFSense acting as a firewall.

The most important requirement for this project is that your VMWare ESXi server has at least two network ports on it. One will be the WAN port, one will be the LAN port. Also, throughout this tutorial I will use PFSense as my firewall/router OS of choice, however it is just an example that can be easily swapped out with any other virtualized firewall product. Some options include Palo Alto Networks, Fortinet, and even generic *NIX operating systems with the right forwarding/firewall setup.
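
For instance, if you substituted a generic Linux VM for PFSense, the core of that “forwarding/firewall setup” boils down to enabling packet forwarding and NAT. This is only a minimal sketch, assuming eth0 is the WAN-facing interface and eth1 the LAN-facing one; a real edge device needs a proper filtering policy on top of this:

# Enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1

# Masquerade LAN traffic leaving through the WAN interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward LAN to WAN, and only return traffic in the other direction
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -j DROP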

Section 1 – VMWare Setup

Step 1 – Install & Connect to ESXi

  • You should already have ESXi setup and connected via the VSphere client on Windows.
  • It’s recommended that you assign a static IP address to the VMWare Management interface, if you’ve not done so already.
  • Go to Configuration > Networking
  • Rename the vSwitch interface you’re using to “LAN”
Step 2 – Add new interface
You want “Virtual Machine” type
Step 3 – Select NIC
You want to select your unused NIC (assuming you only have two)
Step 4 – Name it
This is your “WAN” interface
Step 5 – Confirm you’ve got two networks
You’ll notice that we’ve got two vSwitches now. The “LAN” switch has the Management network and is connected currently. The “WAN” switch has nothing, and the adapter is disconnected.

Section 2 – Virtual Machine Setup

Step 1 – New VM
Step 2 – Typical Setup
Step 3 – Name your VM
Step 4 – Select Datastore
Step 5 – OS Type
If you’re using PFSense, select “Other” and “FreeBSD 64bit”
Step 6 – Two NICs
Unlike most VMs with 1 NIC, add 2 NICs to this VM.
Make sure one adapter is on “WAN” network and one adapter is on “LAN” network.
Step 7 – Allocated HD
PFSense doesn’t need much space, but it should be allocated swap at a 2:1 ratio (e.g. a 4096 MB swap file for 2048 MB of RAM), plus some extra space for packages and logs may be useful.
Step 8 – Edit before completion
Step 9 – Final settings
As this is my firewall, I want to make sure it is plenty fast. So I opted for 4 cores and 2 GB RAM. Also attach the CD drive to PFSense installer (be it datastore ISO or real USB/Optical drive).
Step 10 – Verify Network
Hop back to Configuration > Networking and you should see something like this. Note: the various VMs are all attached to the LAN vSwitch, however only the PFsense VM is attached to both WAN & LAN (just like a real firewall).
Step 11 – VM Startup
Go to Configuration > VM Startup/Shutdown
Click Properties
Step 12 – Set PFSense to first boot order
You may have other VMs that you want to auto-start, but as this is your firewall, it should be the first to start.

Section 3 – PFSense

Step 1 – Install PFSense
Once you’ve installed PFSense, it will automatically configure its local interface to 192.168.1.1
Step 2 (Optional) – Change local network
You can reconfigure the local network either via web interface (at the aforementioned IP: http://192.168.1.1) or command line
Step 3 – Configure WAN
Again, this can be configured either via the web, or command line.
Step 4 – Plug in WAN cable
Step 5 – Test
If you’ve got the ports configured properly (i.e. WAN hardware is WAN in VMWare and WAN in PFSense), you should be able to connect to the internet.

There are two big questions after building a setup like this; the first is security. Since the PFSense VM provides the only interface on the WAN, it should be the only method of ingress into your network. With no VMware management interface on the WAN, there should be no way for an outside party to access ESXi directly. I’ve used this setup successfully (and safely) before, as have others. However, you always need to balance your particular security concerns with the cost of dedicated devices.

The second question is remote management/maintenance/failure. Managing ESXi remotely is easy if you set up a VPN on your PFSense VM. Without that (or something similar) you will not be able to remotely manage the box (by design). But what happens if there is a failure either in the VMWare hardware or the PFSense virtual machine? That’s the big failing point of this setup: you’re down. If, for whatever reason, PFsense dies, your network is offline and you cannot remotely manage it. If this hardware is installed in a datacenter, you’d need to either get in there yourself or have remote hands reboot it. Something to keep in mind when balancing the cost issue. Of course, if it’s local (say you use this at home), then it’s not such a big deal.
I will note that this is the setup I use in my home network, which doubles as my homelab. Having a VM for a firewall gives me a lot of flexibility, like adding an entirely separate vSwitched network for experimental VMs. I can also swap out the firewall VM for another one with next to no downtime. It also allows me to skip one more piece of hardware at home which would add to my otherwise hefty power bill.

Source: obviate.io

A Journey into the Kernel: Traffic Control and QoS

11/05/2016 Comments off

Managing QoS.

Nowadays it is entirely reasonable to host one or more services on a server at home, as movements like auto-hebergement.fr have illustrated well. What remains is the problem of upload bandwidth which, although largely sufficient for hosting web, email, Jabber or other servers, still has to be used intelligently.

Linux provides that intelligence in the form of a packet scheduler called Traffic Control (TC for short), and the goal of this article is to present this technology and how to set it up in a case study covering Web, DNS and even BitTorrent hosting. Note in passing that plenty of scripts and programs exist to simplify setting up QoS (Quality of Service), for example Wondershaper, Shorewall and ADSL-Optimizer. This article will not cover them, because the goal here is not only to do, but also to understand how it works under the hood, and for that you have to take the engine apart and get your hands dirty.

1. Traffic Control and QoS: the basics

Traffic Control works on packets leaving the kernel. It was not initially designed to control incoming traffic. This portion of kernel code sits between the IP layer and the driver of the hardware that transmits on the network, so we are very low in the stack. In fact, it is Traffic Control that is constantly in charge of handing the network card driver the next packet to send.

[Figure 1: general overview of where Traffic Control sits in the kernel]

This means that the TC module, the packet scheduler, is permanently active in the kernel, even when you do not think you are using it. By default this scheduler maintains a FIFO-like queue in which the first packet in is the first packet out.

The building block of TC is the queuing discipline (qdisc), which represents the scheduling policy applied to a queue. Several qdiscs exist. As with CPU scheduling, we find FIFO, multi-band FIFO, and FIFO with hashing and round robin (SFQ). There is also a Token Bucket Filter (TBF) system, which assigns tokens to a qdisc in order to limit its rate (no token = no transmission = wait until a token is available). This last policy was later extended into a hierarchical TBF, the HTB (Hierarchical Token Bucket). The policies we will study here are TBF, which lays the groundwork, SFQ and HTB. We will also take a look at the default policy that we all use without realizing it: pfifo_fast.
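
You can check which qdisc is currently attached to an interface before touching anything; on an untouched system this typically shows pfifo_fast (or fq_codel on more recent distributions). The interface name eth0 is just an example:

# Show the qdisc currently attached to the interface
tc qdisc show dev eth0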

1.1 First contact

Jean-Kevin is in a hurry, he has no time to lose, and right now he must limit the outgoing bandwidth of his web server to 200 kbit/s (25 KB/s). Never mind the theory, we will come back to it later; let's get our hands dirty straight away. The mechanism we are going to set up is simple: we will use a Netfilter rule to mark the packets we are interested in, then give TC a policy that applies to packets carrying that mark. Let's go.

1.2 Netfilter MARK

Netfilter allows us to interact directly with the structure representing a packet in the kernel. This structure, the sk_buff, has a field "__u32 nfmark" that we are going to set and that the TC filter will read to select the packet's destination class. The following iptables rule applies the mark '80' to outgoing packets (OUTPUT chain) whose source port is port 80:

# iptables -t mangle -A OUTPUT -o eth0 -p tcp --sport 80 -j MARK --set-mark 80

We can check that this rule is indeed applied to outgoing packets by looking at the Netfilter statistics.

# iptables -L OUTPUT -t mangle -v
Chain OUTPUT (policy ACCEPT 74107 packets, 109M bytes)
 pkts bytes target prot opt in  out  source   destination
73896  109M MARK   tcp  --  any eth0 anywhere anywhere    tcp spt:www MARK xset 0x50/0xffffffff

1.3 Two classes in a tree

The /sbin/tc binary is part of the iproute package (on Debian). A simple aptitude install is enough if it is not already installed. We are going to create a tree whose root applies the HTB policy. This tree will contain two classes: one for our marked traffic, the other for everything else, which will therefore be treated as the default.

# tc qdisc add dev eth0 root handle 1: htb default 20
# tc class add dev eth0 parent 1:0 classid 1:10 htb rate 200kbit ceil 200kbit prio 1 mtu 1500
# tc class add dev eth0 parent 1:0 classid 1:20 htb rate 1024kbit ceil 1024kbit prio 2 mtu 1500

The two child classes are attached to the root. These classes have a guaranteed rate (rate) and an opportunistic maximum rate (ceil). If the bandwidth is not otherwise being used, a class may raise its rate up to the ceil value; otherwise the rate value applies. This means the sum of the rate values must match the available bandwidth. For a typical ADSL upload with a decent provider, that is about 1024 kbit/s (in the best case, depending on distance to the DSLAM, etc.).

We now have a traffic control tree on one side and packet marking on the other; it remains to connect the two. This is done with TC filter rules, which are very simple: we tell TC to take charge of (handle) the packets carrying mark 80 and send them (fw flowid) to the corresponding class. One important point, though: a filter must be attached to the "root" of the tree, otherwise it is not taken into account.

# tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 80 fw flowid 1:10

Now let's test with netcat: we open a listening port that sends back zeros. It is basic and perfect for testing our policy. So we run:

# nc -l -p 80 < /dev/zero

And from another machine we open a telnet connection to port 80 of the listening machine. The iptraf tool lets us watch the ongoing connection and, above all, its throughput (see figure 2).

[Figure 2: connection throughput shown by iptraf]

As shown in the red box at the bottom right, the throughput of the connection is 199.20 kbps. That is very close to the 200 kbps target; the accuracy depends on a few parameters we will study later. If we run the same kind of connection on another port, we will see the throughput limited to 1024 kbps, which corresponds to the rate of the default class applied to all unmarked packets.
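
The shaping can also be verified on the server itself, without iptraf, by reading the per-class counters that TC maintains (again with eth0 as an example interface):

# Per-class statistics: bytes/packets sent, current rate, drops, tokens
tc -s class show dev eth0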

Read more…