Archive

Archives for 01/2019

How To Choose an Effective Firewall Policy to Secure your Servers

31/01/2019 Comments off

Source: DigitalOcean – Justin Ellingwood

Introduction

Using a firewall is as much about making intelligent policy decisions as it is about learning the syntax. Firewalls like iptables are capable of enforcing policies by interpreting rules set by the administrator. However, as an administrator, you need to know what types of rules make sense for your infrastructure.

While other guides focus on the commands needed to get up and running, in this guide, we will discuss some of the decisions you will have to make when implementing a firewall. These choices will affect how your firewall behaves, how locked down your server is, and how it will respond to various conditions that are likely to occur from time to time. We will be using iptables as an example to discuss specifics, but most of the actual decisions will be relevant regardless of the tools used.

Deciding on a Default Policy

When constructing a firewall, one of the fundamental decisions that you must make is the default policy. This determines what happens when traffic is not matched by any other rules. By default, a firewall can either accept any traffic unmatched by previous rules, or deny that traffic.

Default Drop vs Default Accept

A default policy of "accept" means that any unmatched traffic is allowed to enter the server. This is generally not advised because it means that, effectively, you will be maintaining a blacklist. Blacklists are difficult to manage because you must anticipate and block every type of unwanted traffic explicitly. This can lead to maintenance headaches and is generally prone to mistakes, misconfigurations, and unanticipated holes in the established policy.

The alternative is a default policy of "drop". This means that any traffic not matched by an explicit rule will not be allowed. This is akin to a whitelist ACL. Each and every service must be explicitly allowed, which might seem like a significant amount of research and work at first. However, this means that your policy tends towards security and that you know exactly what is permitted to receive traffic on your server.

Basically, the choice comes down to a tendency towards security by default or reachable services out of the box. While it may be tempting to implement a firewall that leans towards service availability, it is almost always a better idea to block traffic unless it is explicitly allowed.
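As a minimal sketch of a default-deny stance with iptables, the built-in chain policies can be switched to drop; the conntrack and SSH rules below are assumptions about what you would want to keep reachable:

# Allow established sessions and SSH first so an active session is not cut off
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Then switch the built-in policies so any unmatched traffic is dropped
sudo iptables -P INPUT DROP
sudo iptables -P FORWARD DROP
sudo iptables -P OUTPUT ACCEPT

Adding the permissive rules before flipping the policies matters: doing it in the opposite order over an SSH connection would cut you off as soon as the INPUT policy changes.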

Default Drop Policy vs Final Drop Rule

The above choice of a default drop policy leads to another subtle decision. With iptables and other similar firewalls, the default policy can be set using the built-in policy functionality of the firewall, or implemented by adding a catch-all drop rule at the end of the list of rules.

The distinction between these two methods lies in what happens if the firewall rules are flushed.

If your firewall’s built-in policy function is set to "drop" and your firewall rules are ever flushed (reset), or if certain matching rules are removed, your services will instantly become inaccessible remotely. This is often a good idea when setting policy for non-critical services so that your server is not exposed to malicious traffic if the rules are removed.

The downside to this approach is that your services will be completely unavailable to your clients until you re-establish permissive rules. You could even potentially lock yourself out of the server if you do not have local or out-of-band access to skirt the issue (DigitalOcean servers are accessible regardless of network settings by using the "Console Access" button located in the "Access" portion of your Droplet’s page in the control panel). If your firewall flush is intentional, this can be avoided by simply switching the default policy to "accept" just prior to resetting the rules.
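For example, an intentional flush under a built-in drop policy might be wrapped like this so that you are not cut off partway through (a sketch, not a complete reset procedure):

# Temporarily revert the built-in policies to accept before flushing
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

# Flush all rules and delete any user-defined chains
sudo iptables -F
sudo iptables -X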

The alternative to setting a drop policy using the built-in policy functionality is to set your firewall’s default policy to "accept" and then implement a "drop" policy with regular rules. You can add a normal firewall rule at the end of your chain that matches and denies all remaining unmatched traffic.

In this case, if your firewall rules are flushed, your services will be accessible but unprotected. Depending on your options for local or alternative access, this might be a necessary evil in order to ensure that you can re-enter your server if the rules are flushed. If you decide to use this option, you must ensure that the catch-all rule always remains the last rule in your rule set.
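A minimal sketch of this alternative, with the permissive rules represented by a placeholder comment, might look like this:

# Keep the built-in policy permissive so a flush does not lock you out
sudo iptables -P INPUT ACCEPT

# ... permissive rules for SSH and your other services go here ...

# Catch-all rule: drop anything that was not matched above
sudo iptables -A INPUT -j DROP

Because -A appends, any rule added later will land below this catch-all and never be evaluated, so new rules have to be inserted above it with -I, or the drop rule deleted and re-appended afterwards.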

Read more…

How To Set Up an Iptables Firewall to Protect Traffic Between your Servers

31/01/2019 Comments off

Source: DigitalOcean – Justin Ellingwood

Introduction

Deploying discrete components in your application setup onto different nodes is a common way to decrease load and begin scaling horizontally. A typical example is configuring a database on a separate server from your application. While there are a great number of advantages with this setup, connecting over a network involves a new set of security concerns.

In this guide, we’ll demonstrate how to set up a simple firewall on each of your servers in a distributed setup. We will configure our policy to allow legitimate traffic between our components while denying other traffic.

For the demonstration in this guide, we’ll be using two Ubuntu 14.04 servers. One will have a WordPress instance served with Nginx and the other will host the MySQL database for the application. Although we will be using this setup as an example, you should be able to extrapolate the techniques involved to fit your own server requirements.

Prerequisites

To get started, you will need two fresh Ubuntu 14.04 servers. Add a regular user account with sudo privileges on each. To learn how to do this correctly, follow our Ubuntu 14.04 initial server setup guide.

The application setup we will be securing is based on this guide. If you’d like to follow along, set up your application and database servers as indicated by that tutorial.

Setting Up a Basic Firewall

We will begin by implementing a baseline firewall configuration for each of our servers. The policy that we will be implementing takes a security-first approach. We will be locking down almost everything other than SSH traffic and then poking holes in the firewall for our specific application.

The firewall in this guide provides the basic setup that we need. Install the iptables-persistent package and paste the basic rules into the /etc/iptables/rules.v4 file:

sudo apt-get update
sudo apt-get install iptables-persistent
sudo nano /etc/iptables/rules.v4
/etc/iptables/rules.v4
*filter
# Allow all outgoing, but drop incoming and forwarding packets by default
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]

# Custom per-protocol chains
:UDP - [0:0]
:TCP - [0:0]
:ICMP - [0:0]

# Acceptable UDP traffic

# Acceptable TCP traffic
-A TCP -p tcp --dport 22 -j ACCEPT

# Acceptable ICMP traffic

# Boilerplate acceptance policy
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT

# Drop invalid packets
-A INPUT -m conntrack --ctstate INVALID -j DROP

# Pass traffic to protocol-specific chains
## Only allow new connections (established and related should already be handled)
## For TCP, additionally only allow new SYN packets since that is the only valid
## method for establishing a new TCP connection
-A INPUT -p udp -m conntrack --ctstate NEW -j UDP
-A INPUT -p tcp --syn -m conntrack --ctstate NEW -j TCP
-A INPUT -p icmp -m conntrack --ctstate NEW -j ICMP

# Reject anything that's fallen through to this point
## Try to be protocol-specific w/ rejection message
-A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -p tcp -j REJECT --reject-with tcp-reset
-A INPUT -j REJECT --reject-with icmp-proto-unreachable

# Commit the changes
COMMIT

*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT

*security
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT

*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT

If you are implementing this in a live environment, do not reload your firewall rules yet. Loading the basic rule set outlined here will immediately drop the connection between your application and database server. We will need to adjust the rules to reflect our operational needs before reloading.
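To illustrate the kind of adjustment meant here, the database server would, for example, need to accept MySQL connections from the web server before these rules are loaded. The address 192.0.2.10 below is a hypothetical private IP for the application server, not a value from the tutorial:

# On the database server, inside the "Acceptable TCP traffic" section of /etc/iptables/rules.v4
-A TCP -p tcp --dport 3306 -s 192.0.2.10 -j ACCEPT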

Read more…

How To Migrate Iptables Firewall Rules to a New Server

30/01/2019 Comments off

Source: DigitalOcean – Mitchell Anicas

Introduction

When migrating from one server to another, it is often desirable to migrate the iptables firewall rules as part of the process. This tutorial will show you how to easily copy your active iptables rule set from one server to another.

Prerequisites

This tutorial requires two servers. We will refer to the source server, which has the existing iptables rules, as Server A. The destination server, where the rules will be migrated to, will be referred to as Server B.

You will also need to have superuser, or sudo, access to both servers.

View Existing Iptables Rules

Before migrating your iptables rules, let’s see what they are set to. You can do that with this command on Server A:

sudo iptables -S
Example output:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -s 15.15.15.51/32 -j DROP

The example rules above will be used to demonstrate the firewall migration process.

Export Iptables Rules

The iptables-save command writes the current iptables rules to stdout (standard out). This gives us an easy way to export the firewall rules to a file by redirecting stdout.

On Server A, the one with the iptables rules that you want to migrate, use iptables-save to export the current rules to a file named "iptables-export", like this:

cd ~
sudo iptables-save > iptables-export

This will create the iptables-export file in your home directory. This file can be used on a different server to load the firewall rules into iptables.
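The rest of the process, covered in the full article, consists of transferring the file and loading it on the destination server. A rough sketch of those steps, using a placeholder address for Server B, looks like this:

# On Server A: copy the exported rules to Server B (placeholder user and address)
scp ~/iptables-export user@server_b_ip:~

# On Server B: load the rules into the running firewall
sudo iptables-restore < ~/iptables-export

# Verify that the migrated rules are active
sudo iptables -S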

Read more…

How To Use ProxySQL as a Load Balancer for MySQL on Ubuntu 16.04

30/01/2019 Comments off

Introduction

ProxySQL is an open-source MySQL proxy server, meaning it serves as an intermediary between a MySQL server and the applications that access its databases. ProxySQL can improve performance by distributing traffic among a pool of multiple database servers and also improve availability by automatically failing over to a standby if one or more of the database servers fail.

In this guide, you will set up ProxySQL as a load balancer for multiple MySQL servers with automatic failover. As an example, this tutorial uses a multi-primary replicated cluster of three MySQL servers, but you can use a similar approach with other cluster configurations as well.
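To give a concrete sense of how ProxySQL sits between the two sides, it listens by default on port 6032 for its admin interface and on port 6033 for application traffic. A quick check, assuming the stock admin/admin credentials and a hypothetical application user, might look like this:

# Connect to the ProxySQL admin interface (default credentials are admin/admin)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin> '

# Applications connect through the proxy on port 6033 rather than to a MySQL server directly
mysql -u appuser -p -h 127.0.0.1 -P 6033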

Prerequisites

To follow this tutorial, you will need:

Read more…

How To Create a High Availability Setup with Heartbeat and Floating IPs on Ubuntu 14.04

30/01/2019 Comments off

Source: DigitalOcean – Mitchell Anicas

Introduction

Heartbeat is an open source program that provides cluster infrastructure capabilities—cluster membership and messaging—to client servers, which is a critical component in a high availability (HA) server infrastructure. Heartbeat is typically used in conjunction with a cluster resource manager (CRM), such as Pacemaker, to achieve a complete HA setup. However, in this tutorial, we will demonstrate how to create a 2-node HA server setup by simply using Heartbeat and a DigitalOcean Floating IP.

If you are looking to create a more robust HA setup, look into using Corosync and Pacemaker or Keepalived.

Goal

When completed, the HA setup will consist of two Ubuntu 14.04 servers in an active/passive configuration. This will be accomplished by pointing a Floating IP, which is how your users will access your services or website, at the primary, or active, server unless a failure is detected. In the event that the Heartbeat service detects that the primary server is unavailable, the secondary server will automatically run a script to reassign the Floating IP to itself via the DigitalOcean API. Thus, subsequent network traffic to the Floating IP will be directed to your secondary server, which will act as the active server until the primary server becomes available again (at which point, the primary server will reassign the Floating IP to itself).

[Animated diagram: Floating IP failover from the primary to the secondary server]

Note: This tutorial only covers setting up active/passive high availability at the gateway level. That is, it includes the Floating IP and the load balancer servers (Primary and Secondary). Furthermore, for demonstration purposes, instead of configuring reverse-proxy load balancers on each server, we will simply configure them to respond with their respective hostname and public IP address.

To achieve this goal, we will follow these steps:

  • Create 2 Droplets that will receive traffic
  • Create Floating IP and assign it to one of the Droplets
  • Create DNS A record that points to Floating IP (optional)
  • Install Heartbeat on Droplets
  • Configure Heartbeat to Run Floating IP Reassignment Service
  • Create Floating IP Reassignment Service
  • Test failover
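To give a sense of the "Create Floating IP Reassignment Service" step, a stripped-down version of that script might call the DigitalOcean v2 API with curl. The token, Floating IP, and Droplet ID below are placeholders, and the full tutorial wraps this logic in a proper service:

#!/bin/bash
# Reassign the Floating IP to this Droplet via the DigitalOcean API (all values are placeholders)
DO_TOKEN='your_api_token'
FLOATING_IP='203.0.113.10'
DROPLET_ID='12345678'

curl -s -X POST "https://api.digitalocean.com/v2/floating_ips/${FLOATING_IP}/actions" \
  -H "Authorization: Bearer ${DO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"type\":\"assign\",\"droplet_id\":${DROPLET_ID}}"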

Read more…