Archive

Articles tagged ‘administration’

How to monitor a Linux server and desktop remotely from web browser

02/12/2021 No comments


When it comes to monitoring a Linux server, there are more than enough options to choose from. While there are many production-quality monitoring solutions (e.g., Nagios, Zabbix, Zenoss) boasting fancy UIs, monitoring scalability, comprehensive reporting capabilities, and so on, these solutions are probably overkill for most end users. If all you need is to check the basic status (e.g., CPU load, memory usage, active processes, disk usage) of a remote Linux server or desktop, consider linux-dash.

linux-dash is a lightweight web-based monitoring dashboard for Linux machines, which can display, in real time, various system properties such as CPU load, RAM usage, disk usage, Internet speed, network connections, RX/TX bandwidth, logged-in users, running processes, etc. linux-dash does not come with any backend database for storing long-term statistics. Simply drop the linux-dash app into an existing web server (e.g., Apache, Nginx), and you are good to go. It is a quick and easy way to set up remote monitoring for personal projects.

In this tutorial, I am going to describe how to set up linux-dash with the Nginx web server on Linux. Nginx is preferred here over the Apache web server due to its lighter footprint.

Set up linux-dash on Debian, Ubuntu or Linux Mint

First, install Nginx web server with php-fpm.

$ sudo apt-get install git nginx php5-json php5-fpm php5-curl

Configure Nginx for the linux-dash app by creating /etc/nginx/conf.d/linuxdash.conf as follows. In this example, we are going to use port 8080.

$ sudo vi /etc/nginx/conf.d/linuxdash.conf
server {
    server_name $domain_name;   # replace $domain_name with your server's domain name or IP address
    listen 8080;
    root /var/www;
    index index.html index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location ~* \.(?:xml|ogg|mp3|mp4|ogv|svg|svgz|eot|otf|woff|ttf|css|js|jpg|jpeg|gif|png|ico)$ {
        try_files $uri =404;
        expires max;
        access_log off;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location /linux-dash {
        index index.html index.php;
    }

    # pass PHP scripts to PHP-FPM via a Unix socket
    location ~ \.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        try_files $uri $uri/ /index.php?$args;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

Disable the default site configuration.

$ sudo rm /etc/nginx/sites-enabled/default

Configure php-fpm by editing /etc/php5/fpm/pool.d/www.conf. Make sure to edit the “user”, “group” and “listen” directives as shown below. You can keep the rest of the configuration unchanged.

$ sudo vi /etc/php5/fpm/pool.d/www.conf

. . .
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
. . .

Proceed to download and install linux-dash.

$ git clone https://github.com/afaqurk/linux-dash.git
$ sudo cp -r linux-dash/ /var/www/
$ sudo chown -R www-data:www-data /var/www

Restart Nginx web server as well as php5-fpm to finalize installation.

$ sudo service php5-fpm restart
$ sudo service nginx restart
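
If everything went well, the dashboard should now be reachable on port 8080. As a quick sanity check from the server itself (the URL assumes the port and paths used above):

$ curl -I http://localhost:8080/linux-dash/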

Read more…

How to monitor a log file on Linux with logwatch

11/11/2021 Comments off

The Linux operating system and many applications create special files, commonly referred to as “logs”, to record their operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not exactly what you would call “light” or “easy” reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.

logwatch is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, producing a report customizable to your use cases and requirements. With logwatch, the focus is on producing an easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task at a desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.

A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in the sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application’s log file and plug it into logwatch.
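
A minimal sketch of what that involves, using hypothetical names and following the layout described in logwatch’s HOWTO-Customize-LogWatch documentation: a logfile-group definition, a service definition that references it, and an executable Perl filter that reads the matching log lines from standard input and prints a summary.

# /etc/logwatch/conf/logfiles/myapp.conf  (hypothetical logfile group)
LogFile = /var/log/myapp.log

# /etc/logwatch/conf/services/myapp.conf  (hypothetical service definition)
Title = "MyApp"
LogFile = myapp

# /etc/logwatch/scripts/services/myapp  (executable Perl filter)
#!/usr/bin/perl
use strict;
use warnings;
my $errors = 0;
while (my $line = <STDIN>) {
    $errors++ if $line =~ /ERROR/;   # count lines flagged as errors
}
print "MyApp errors: $errors\n" if $errors;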

One downside of logwatch is that its report does not include the detailed timestamp information available in the original log files. You will only know that a particular event was logged within the requested time range, and you will have to go back to the original log files to get exact timing information.

Installing Logwatch

On Debian and derivatives:

# aptitude install logwatch

On Red Hat-based distributions:

# yum install logwatch

Configuring Logwatch

During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.

If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with the logwatch command, those arguments in turn override any default/custom settings in /etc/logwatch/conf/logwatch.conf.
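
For example, the following one-off run (the chosen service is just an illustration) overrides the corresponding settings for a single invocation:

# logwatch --detail High --service sshd --range Today --output stdout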

In this article, we will customize several default settings of logwatch by editing /etc/logwatch/conf/logwatch.conf file.

Detail = <Low, Med, High, or a number>

The “Detail” directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med or Low, which correspond to 10, 5, and 0, respectively.

MailTo = youremailaddress@yourdomain.com

The “MailTo” directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, specify their email addresses separated by spaces. To be able to use this directive, however, you will need to configure a local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.

Range = <Yesterday|Today|All>

The “Range” directive specifies the time span of a logwatch report. Common values for this directive are Yesterday, Today and All. When “Range = All” is used, the “Archive = yes” directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.

Besides such common range values, you can also use more complex range options such as the following.

  • Range = “2 hours ago for that hour”
  • Range = “-5 days”
  • Range = “between -7 days and -3 days”
  • Range = “since September 15, 2014”
  • Range = “first Friday in October”
  • Range = “2014/10/15 12:50:15 for that second”

To be able to use such free-form range expressions, you need to install the Date::Manip Perl module from CPAN. Refer to this post for CPAN module installation instructions.
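
If Date::Manip is not already present, it can be installed from your distribution’s repositories or straight from CPAN (the package names below are the usual ones, but verify them for your release).

On Debian and derivatives:

# apt-get install libdate-manip-perl

On Red Hat-based distributions:

# yum install perl-Date-Manip

Or directly from CPAN:

# perl -MCPAN -e 'install Date::Manip'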

Service = <service-name-1>
Service = <service-name-2>
. . .

The “Service” option specifies one or more services to monitor using logwatch. All available services are listed in /usr/share/logwatch/scripts/services, covering essential system services (e.g., pam, secure, iptables, syslogd) as well as popular application services such as sudo, sshd, http, fail2ban and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script and place it in this directory.

If this option is used to select specific services, you need to comment out the line “Service = All” in /usr/share/logwatch/default.conf/logwatch.conf.


Format = <text|html>

The “Format” directive specifies the format (e.g., text or HTML) of a logwatch report.

Output = <file|mail|stdout>

The “Output” directive indicates where a logwatch report should be sent. It can be saved to a file (file), emailed (mail), or printed to the screen (stdout).
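
Putting these options together, a customized /etc/logwatch/conf/logwatch.conf might look like the following (the e-mail address is a placeholder and the service selection is only an example; remember to comment out “Service = All” in the system-wide file when listing services explicitly):

Detail = Med
MailTo = admin@example.com
Range = Yesterday
Service = sshd
Service = sudo
Service = iptables
Format = html
Output = mail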

Read more…

Categories: System Tags:

Packet Analyzer: 15 TCPDUMP Command Examples

10/11/2021 Comments off

Packet Analyzer: efficient network traffic analysis

The tcpdump command is also known as a packet analyzer.

tcpdump works on most flavors of the Unix operating system. It allows us to save the captured packets to a file, so that we can use them for future analysis. The saved file can be read back with the same tcpdump command, and we can also use open-source software such as Wireshark to read tcpdump pcap files.
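
For example, captured packets can be saved to a file with the -w option and read back later with -r (the interface and file names here are placeholders):

$ tcpdump -i eth0 -w capture.pcap
$ tcpdump -r capture.pcap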

In this tcpdump tutorial, let us discuss some practical examples of how to use the tcpdump command.

1. Capture packets from a particular ethernet interface using tcpdump -i

When you execute the tcpdump command without any options, it captures all the packets flowing through all the interfaces. The -i option allows you to capture packets on a particular Ethernet interface only.

$ tcpdump -i eth1
14:59:26.608728 IP xx.domain.netbcp.net.52497 > valh4.lell.net.ssh: . ack 540 win 16554
14:59:26.610602 IP resolver.lell.net.domain > valh4.lell.net.24151:  4278 1/0/0 (73)
14:59:26.611262 IP valh4.lell.net.38527 > resolver.lell.net.domain:  26364  PTR? 244.207.104.10.in-addr.arpa. (45)

In this example, tcpdump captures all the packets flowing through the interface eth1 and displays them on the standard output.

Note: The editcap utility can be used to select or remove specific packets from a dump file and translate them into a given format.
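
For instance, a possible editcap invocation that keeps only the first 100 packets of a capture (file names are placeholders):

$ editcap -r capture.pcap first100.pcap 1-100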

2. Capture only N number of packets using tcpdump -c

When you execute the tcpdump command, it keeps capturing packets until you cancel it. Using the -c option, you can specify the number of packets to capture.

$ tcpdump -c 2 -i eth0
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:38:38.184913 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457255642:1457255758(116) ack 1561463966 win 63652
14:38:38.690919 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
2 packets captured
13 packets received by filter
0 packets dropped by kernel

The above tcpdump command captured only 2 packets from interface eth0.

Note: Mergecap and TShark: mergecap is a packet dump combining tool, which merges multiple dump files into a single dump file. TShark is a powerful command-line tool to capture and analyze network packets. Both come with the Wireshark network analyzer distribution.
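
For example, multiple capture files can be merged into a single one as follows (file names are placeholders):

$ mergecap -w merged.pcap dump1.pcap dump2.pcap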

3. Display Captured Packets in ASCII using tcpdump -A

The following tcpdump syntax prints the packet in ASCII.

$ tcpdump -A -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:34:50.913995 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457239478:1457239594(116) ack 1561461262 win 63652
E.....@.@..]..i...9...*.V...]...P....h....E...>{..U=...g.
......G..7 KA....A...L.
14:34:51.423640 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
E.....@.@....i...9...*.V..*]...P....h....7......X..!....Im.S.g.u:*..O&....^#Ba...
E..(R.@.|.....9...i.*...]...V..*P..OWp........

Note: The ifconfig command is used to configure network interfaces.

Read more…

Categories: Network Tags:

What are useful CLI tools for Linux system admins?

05/11/2021 Comments off

System administrators (sysadmins) are responsible for the day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of the trade. Using the proper tool in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.


This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day-to-day activities. If you would like to recommend any useful tool which is not listed here, don’t forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.

2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.

3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.

4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays.

5. netcat/socat: A swiss army knife of TCP/IP networking, allowing you to read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.

6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, check CNAME, MX and other DNS records. Can be instructed to query a specific DNS server of your choosing.

7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.

8. dnsyo: A DNS testing tool which checks DNS propagation by performing DNS lookups from a number of open resolvers located across 1,500 different networks around the world.

9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.

10. iftop: An ncurses-based TUI utility that can be used to monitor, in real time, bandwidth utilization and network connections on individual network interfaces. Useful to keep track of bandwidth-hogging applications, users, destinations and ports.

11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.

12. tcpdump: A popular packet sniffer tool based on libpcap packet capture library. Can define packet capturing filters in Berkeley Packet Filters format.

13. tshark: Another CLI packet sniffer software with full compatibility with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.

14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.

15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.

16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.

17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.

18. elinks/lynx: Text-based web browsers for a CLI-based server environment.
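
To illustrate a few of the tools above, here are some typical invocations (host names, ports and counts are placeholders):

$ mtr --report --report-cycles 100 example.com    # per-hop loss/latency over 100 probes
$ dig @8.8.8.8 example.com MX +short              # query a specific DNS server for MX records
$ sudo lsof -i :22                                # processes with open sockets on port 22
$ iperf -s                                        # throughput test: server side
$ iperf -c server.example.com -t 30               # throughput test: client side, 30 seconds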

Read more…

How to analyze and view Apache web server logs interactively on Linux

04/11/2021 Comments off

Whether you are in the web hosting business, or run a few web sites on a VPS yourself, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), used bandwidth, client browsers, referring sites, and so forth.

GoAccess is a command-line log analyzer and interactive viewer for the Apache or Nginx web servers. With this tool, you will not only be able to browse the data mentioned earlier, but also parse the web server logs to dig for further data, all of this within a terminal window, in real time. Since most web servers today run on either a Debian derivative or a Red Hat-based distribution as the underlying operating system, I will show you how to install and use GoAccess on Debian and CentOS.

Installing GoAccess on Linux

In Debian, Ubuntu and derivatives, run the following command to install GoAccess:

# aptitude install goaccess

In CentOS, you’ll need to enable the EPEL repository and then:

# yum install goaccess

In Fedora, simply use the yum command:

# yum install goaccess

If you want to install GoAccess from source to enable further options (such as GeoIP location), install the required dependencies for your operating system, and then follow these steps:

# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install

That will install version 0.8.5, but you can always check which is the latest version on the Downloads page of the project’s web site.

Since GoAccess does not require any further configuration, once it’s installed you are ready to go.

Running GoAccess

To start using GoAccess, just run it against your Apache access log.

For Debian and derivatives:

# goaccess -f /var/log/apache2/access.log

For Red Hat based distros:

# goaccess -f /var/log/httpd/access_log

When you first launch GoAccess, you will be presented with the following screen to choose the date and log format. As explained, you can toggle between options using the spacebar and proceed with F10. As for the date and log formats, you may want to refer to the Apache documentation if you need to refresh your memory.

In this case, choose the Common Log Format (CLF):

[Screenshot: GoAccess date and log format selection dialog]

and then press F10. You will be presented with the statistics screen. For the sake of brevity, only the header, which shows the summary of the log file, is shown in the next image:

[Screenshot: GoAccess report header summarizing the log file]

Read more…