Archive

Articles tagged 'performance'

MySQL: Setting Account Resource Limits

14/04/2019 No comments

One means of restricting client use of MySQL server resources is to set the global max_user_connections system variable to a nonzero value. This limits the number of simultaneous connections that can be made by any given account, but places no limits on what a client can do once connected. In addition, setting max_user_connections does not enable management of individual accounts. Both types of control are of interest to MySQL administrators.

To address such concerns, MySQL permits limits for individual accounts on use of these server resources:

  • The number of queries an account can issue per hour
  • The number of updates an account can issue per hour
  • The number of times an account can connect to the server per hour
  • The number of simultaneous connections to the server by an account

Any statement that a client can issue counts against the query limit, unless its results are served from the query cache. Only statements that modify databases or tables count against the update limit.

An account in this context corresponds to a row in the mysql.user table. That is, a connection is assessed against the User and Host values in the user table row that applies to the connection. For example, an account 'usera'@'%.example.com' corresponds to a row in the user table that has User and Host values of usera and %.example.com, to permit usera to connect from any host in the example.com domain. In this case, the server applies resource limits in this row collectively to all connections by usera from any host in the example.com domain because all such connections use the same account.
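The limits themselves are stored in columns of the mysql.user table. A quick way to inspect the limits currently set for an account is to query those columns directly (the column names below are the ones MySQL 5.x uses for these limits):

mysql> SELECT User, Host,
    ->        max_questions, max_updates,
    ->        max_connections, max_user_connections
    ->   FROM mysql.user
    ->   WHERE User = 'usera';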

Before MySQL 5.0.3, an account was assessed against the actual host from which a user connects. This older method of accounting may be selected by starting the server with the --old-style-user-limits option. In this case, if usera connects simultaneously from host1.example.com and host2.example.com, the server applies the account resource limits separately to each connection. If usera connects again from host1.example.com, the server applies the limits for that connection together with the existing connection from that host.

To establish resource limits for an account at account-creation time, use the CREATE USER statement. To modify the limits for an existing account, use ALTER USER. (Before MySQL 5.7.6, use GRANT, for new or existing accounts.) Provide a WITH clause that names each resource to be limited. The default value for each limit is zero (no limit). For example, to create a new account that can access the customer database, but only in a limited fashion, issue these statements:

mysql> CREATE USER 'francis'@'localhost' IDENTIFIED BY 'frank'
    ->     WITH MAX_QUERIES_PER_HOUR 20
    ->          MAX_UPDATES_PER_HOUR 10
    ->          MAX_CONNECTIONS_PER_HOUR 5
    ->          MAX_USER_CONNECTIONS 2;

The limit types need not all be named in the WITH clause, but those named can be present in any order. The value for each per-hour limit should be an integer representing a count per hour. For MAX_USER_CONNECTIONS, the limit is an integer representing the maximum number of simultaneous connections by the account. If this limit is set to zero, the global max_user_connections system variable value determines the number of simultaneous connections. If max_user_connections is also zero, there is no limit for the account.
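For an existing account, the same WITH clause works with ALTER USER as of MySQL 5.7.6. For example, to raise the hourly query limit for the account created above while leaving its other limits untouched:

mysql> ALTER USER 'francis'@'localhost'
    ->     WITH MAX_QUERIES_PER_HOUR 100;

To remove a limit, set its value back to zero.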

Read more…

How to measure packets per second or throughput on high speed network interface

10/04/2019 No comments

There are many traffic monitoring tools available on Linux, which can monitor and classify network traffic, and report real-time traffic statistics in fancy user interfaces. Most of these tools (e.g., ntopng, iftop) are powered by libpcap, a packet capture library used to monitor network traffic in user space. Despite their versatility, however, libpcap-based network monitoring tools cannot scale to handle traffic on multi-Gigabit network interfaces, due to the overhead associated with user-space packet capture.

In this tutorial, I will present simple shell scripts that can monitor network traffic on a per-interface basis, without relying on the slow libpcap library. These scripts are fast enough to support multi-Gigabit rates, but are only suitable if you are interested in "aggregate" network statistics per interface.

The secret of the scripts lies in the sysfs virtual filesystem, which the kernel uses to export device- or driver-related information to user space. Network interface statistics are exported via /sys/class/net/<ethX>/statistics.

For example, the statistics on eth0 interface are found in these files:

  • /sys/class/net/eth0/statistics/rx_packets: number of packets received
  • /sys/class/net/eth0/statistics/tx_packets: number of packets transmitted
  • /sys/class/net/eth0/statistics/rx_bytes: number of bytes received
  • /sys/class/net/eth0/statistics/tx_bytes: number of bytes transmitted
  • /sys/class/net/eth0/statistics/rx_dropped: number of packets dropped on receive
  • /sys/class/net/eth0/statistics/tx_dropped: number of packets dropped on transmit

The numbers stored in the files are automatically refreshed in real-time by the kernel. Therefore, you can write scripts that calculate traffic statistics based on these files.

The following are two such scripts (thanks to joemiller). The first script counts the number of packets per second received (RX) and sent (TX) on an interface, while the second script measures the network bandwidth of incoming (RX) and outgoing (TX) traffic on an interface. You do not need to install anything for these scripts to work.

Measure Packets per Second on an Interface

#!/bin/bash
INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        echo "shows packets-per-second"
        exit 1
fi

IF=$1

while true
do
        # read the packet counters, wait one interval, then read them again
        R1=$(cat /sys/class/net/"$IF"/statistics/rx_packets)
        T1=$(cat /sys/class/net/"$IF"/statistics/tx_packets)
        sleep "$INTERVAL"
        R2=$(cat /sys/class/net/"$IF"/statistics/rx_packets)
        T2=$(cat /sys/class/net/"$IF"/statistics/tx_packets)
        # the difference between readings is the per-second rate
        TXPPS=$((T2 - T1))
        RXPPS=$((R2 - R1))
        echo "TX $IF: $TXPPS pkts/s RX $IF: $RXPPS pkts/s"
done
 

Measure Network Bandwidth on an Interface

#!/bin/bash

INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        exit 1
fi

IF=$1

while true
do
        # read the byte counters, wait one interval, then read them again
        R1=$(cat /sys/class/net/"$IF"/statistics/rx_bytes)
        T1=$(cat /sys/class/net/"$IF"/statistics/tx_bytes)
        sleep "$INTERVAL"
        R2=$(cat /sys/class/net/"$IF"/statistics/rx_bytes)
        T2=$(cat /sys/class/net/"$IF"/statistics/tx_bytes)
        # bytes transferred during the interval, converted to kB/s
        TBPS=$((T2 - T1))
        RBPS=$((R2 - R1))
        TKBPS=$((TBPS / 1024))
        RKBPS=$((RBPS / 1024))
        echo "TX $IF: $TKBPS kB/s RX $IF: $RKBPS kB/s"
done
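
To run either script, save it to a file (the names pps.sh and bandwidth.sh below are only examples), make it executable, and pass the interface to monitor as the only argument:

$ chmod +x pps.sh bandwidth.sh
$ ./pps.sh eth0
$ ./bandwidth.sh eth0
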
The following screenshot shows the above two scripts in action.


Source: Xmodulo 

How To Configure Redis Caching to Speed Up WordPress on Ubuntu 14.04

25/03/2019 Comments off

Introduction

Redis is an open-source key-value store that can operate as both an in-memory store and a cache. Redis is a data structure server that can be used as a database server on its own, or paired with a relational database like MySQL to speed things up, as we're doing in this tutorial.

For this tutorial, Redis will be configured as a cache for WordPress to alleviate the redundant and time-consuming database queries used to render a WordPress page. The result is a WordPress site which is much faster, uses fewer database resources, and provides a tunable persistent cache. This guide applies to Ubuntu 14.04.

While every site is different, below is an example benchmark of a default WordPress installation home page with and without Redis, as configured from this guide. Chrome developer tools were used to test with browser caching disabled.

Default WordPress home page without Redis:

804ms page load time

Default WordPress home page with Redis:

449ms page load time

Note: This implementation of Redis caching for WordPress relies on a well-commented but third-party script. The script is hosted on DigitalOcean’s asset server, but was developed externally. If you would like to make your own implementation of Redis caching for WordPress, you will need to do some more work based on the concepts presented here.

Redis vs. Memcached

Memcached is also a popular cache choice. However, at this point, Redis does everything Memcached can do, with a much larger feature set. This Stack Overflow page provides some general background for anyone new to Redis.

How does the caching work?

The first time a WordPress page is loaded, a database query is performed on the server. Redis remembers, or caches, this query. So, when another user loads the WordPress page, the results are served from Redis, in memory, without the database having to be queried.

The Redis implementation used in this guide works as a persistent object cache for WordPress (no expiration). An object cache works by caching, in memory, the results of the SQL queries needed to load a WordPress page.

When a page loads, the resulting SQL query results are served from memory by Redis, so the query does not have to hit the database. The result is much faster page load times and less impact on database resources. If a query is not available in Redis, the database provides the result and Redis adds the result to its cache.

If a value is updated in the database (for example, a new post or page is created in WordPress) the Redis value for that query is invalidated to prevent bad cached data from being presented.
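To confirm that the cache is actually being used, you can watch Redis's keyspace hit and miss counters, which are part of its standard INFO stats output; keyspace_hits should climb as pages are served from memory:

redis-cli info stats | grep keyspace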

If you run into problems with caching, the Redis cache can be purged by using the flushall command from the Redis command line:

redis-cli

Once you see the prompt, type:

flushall
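The same command also works non-interactively, which is convenient in a script or cron job:

redis-cli flushall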

Additional Reference: WordPress Object Cache Documentation

Prerequisites

Before starting this guide, you’ll need to set up a sudo user and install WordPress.

  • Ubuntu 14.04 Droplet (1 GB or higher recommended)
  • Add a sudo user
  • Install WordPress. This guide has been tested with these instructions, although there are many ways to install WordPress.

Read more…


SQL Queries to Save Your Life with WordPress

18/03/2019 Comments off

You have moved your WordPress blog to a new server and a new domain, and now nothing works?

You want to clean up your database?

Here are a few example SQL queries that will help you do all of that quickly and painlessly.
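For example, the classic query used after a domain move points the site at its new address (this assumes the default wp_options table name; adjust the table prefix and the URL to your own installation):

UPDATE wp_options
   SET option_value = 'http://www.newdomain.com'
 WHERE option_name IN ('siteurl', 'home');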

Backup:

Before you start, the most important thing to do is to back up your database.

Several options are available for this.

You can use a WordPress plugin such as WP-DB-Backup or WP-DBManager, run mysqldump from the command line (a sketch follows the list below), or do it manually from the phpMyAdmin interface by following this procedure:

  1. Log in to phpMyAdmin.
  2. Select your WordPress database.
  3. Click "Export" in the menu at the top of the page.
  4. Select the tables to back up.
  5. Select SQL to get a .sql output file.
  6. Check the "Save as file" box ("Transmettre" in the French interface).
  7. You can choose a compression type to reduce the size of your file.
  8. Click "Go" and the file should download.
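As a command-line alternative to the phpMyAdmin procedure above, a single mysqldump invocation produces the same kind of .sql backup file (the database name, user, and file name below are placeholders):

mysqldump -u your_user -p your_wordpress_db > wordpress-backup.sql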


Read more…

Benchmark Your Webpage with Siege

15/03/2019 Comments off

A few articles ago, I wrote about using the "apachebench" or "ab" utility to benchmark your website (see: apachebench). Ab is a great tool, but since then, I have found and fallen in love with a new tool for benchmarking your website. This new tool is named "siege".

Install Siege on CentOS 6

The siege package is available in the EPEL (Extra Packages for Enterprise Linux) repository for CentOS 6, which makes installation easy with yum once the EPEL repository is installed. If you still need to set up the repository, you can do so with these quick steps:

# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -Uvh ./epel-release-6-8.noarch.rpm
Preparing...                ########################################### [100%]
package epel-release-6-8.noarch is already installed

Once the EPEL repository is installed, you can go ahead and install the siege package:

# yum install siege

Using Siege to Benchmark Your Website

Detailed usage information for the siege utility is available in the man pages. To get a list of the switches siege supports, use the --help option:

# siege --help
SIEGE 3.0.0
Usage: siege [options]
       siege [options] URL
       siege -g URL
Options:
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -b, --benchmark           BENCHMARK: no delays between requests.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -r, --reps=NUM            REPS, number of times to run the test.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: PREFIX/var/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
  -d, --delay=NUM           Time DELAY, random delay before each requst
                            between 1 and NUM. (NOT COUNTED IN STATS)
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request

Copyright (C) 2013 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.

So, with these options in mind, allow me to demonstrate a few of the siege options that I like best:
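For instance, a typical run combines the -b, -c, and -t switches listed above; the URL here is only a placeholder for your own site:

# siege -b -c 25 -t 1M http://www.example.com/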

Read more…
