All posts by Sumarsono

How To Set Up a Node.js Application for Production on Ubuntu 14.04

Introduction

Node.js is an open source JavaScript runtime environment for easily building server-side and networking applications. The platform runs on Linux, OS X, FreeBSD, and Windows, and its applications are written in JavaScript. Node.js applications can be run at the command line, but we will teach you how to run them as a service, so they will automatically restart on reboot or failure and can be safely used in a production environment.

In this tutorial, we will cover setting up a production-ready Node.js environment that is composed of two Ubuntu 14.04 servers: one server will run Node.js applications managed by PM2, while the other will give users access to the application through an Nginx reverse proxy to the application server.
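
As a brief preview of the PM2 side, managing an application typically comes down to a couple of commands (a sketch only; hello.js stands in for whatever your application's entry point is):

pm2 start hello.js        # start the app and keep it alive, restarting it on failure
sudo pm2 startup ubuntu   # generate an init script so PM2 and its apps start on boot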

The CentOS version of this tutorial can be found here.

Prerequisites

This guide uses two Ubuntu 14.04 servers with private networking (in the same datacenter). We will refer to them by the following names:

  • app: The server where we will install the Node.js runtime, your Node.js application, and PM2
  • web: The server where we will install the Nginx web server, which will act as a reverse proxy to your application. Users will access this server’s public IP address to get to your Node.js application.

It is possible to use a single server for this tutorial, but you will have to make a few changes along the way. Simply use the localhost IP address, i.e. 127.0.0.1, wherever the app server’s private IP address is used.

Here is a diagram of what your setup will be after following this tutorial:

Reverse Proxy to Node.js Application
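
To make the diagram concrete, the heart of the web server's configuration is an Nginx proxy block along these lines (a sketch, assuming the application listens on port 8080 and APP_PRIVATE_IP stands in for the app server's private address):

location / {
    proxy_pass http://APP_PRIVATE_IP:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}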

Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on both of your servers; this is the user that you should log in to your servers as. You can learn how to configure a regular user account by following steps 1-4 in our initial server setup guide for Ubuntu 14.04.

If you want to be able to access your web server via a domain name, instead of its public IP address, purchase a domain name, then follow these tutorials:

Continue reading Set Up a Node.js Application for Production on Ubuntu 14.04

Protect your Linux Server Against the GHOST Vulnerability

Introduction

On January 27, 2015, a GNU C Library (glibc) vulnerability, referred to as the GHOST vulnerability, was announced to the general public. In summary, the vulnerability allows remote attackers to take complete control of a system by exploiting a buffer overflow bug in glibc’s gethostbyname functions (hence the name). Like Shellshock and Heartbleed, this vulnerability is serious and affects many servers.

The GHOST vulnerability can be exploited on Linux systems that use versions of the GNU C Library prior to glibc-2.18. That is, systems that use an unpatched version of glibc from versions 2.2 to 2.17 are at risk. Many Linux distributions including, but not limited to, the following are potentially vulnerable to GHOST and should be patched:

  • CentOS 6 & 7
  • Debian 7
  • Red Hat Enterprise Linux 6 & 7
  • Ubuntu 10.04 & 12.04
  • End of Life Linux Distributions

It is highly recommended that you update and reboot all of your affected Linux servers. We will show you how to test if your systems are vulnerable and, if they are, how to update glibc to fix the vulnerability.

Check System Vulnerability

The easiest way to test if your servers are vulnerable to GHOST is to check the version of glibc that is in use. We will cover how to do this in Ubuntu, Debian, CentOS, and RHEL.
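
As a quick sketch of that check, you can print the glibc version in use and compare it against the vulnerable range (2.2 through 2.17):

ldd --version    # on Ubuntu and Debian, the first line reports the glibc (EGLIBC) version
rpm -q glibc     # on CentOS and RHEL, query the package manager instead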

Note that binaries that are statically linked to the vulnerable glibc must be recompiled to be made safe; this test does not cover those cases, only the system’s GNU C Library.

Continue reading Protect your Linux Server Against the GHOST Vulnerability

Use Bash’s Job Control to Manage Foreground and Background Processes

Introduction

In this guide, we’ll talk about how bash, the Linux system, and your terminal come together to offer process and job control. In a previous guide, we discussed how the ps, kill, and nice commands can be used to control processes on your system.

This article will focus on managing foreground and background processes and will demonstrate how to leverage your shell’s job control functions to gain more flexibility in how you run commands.

Managing Foreground Processes

Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process. The process may allow user interaction or may just run through a procedure and then exit. Any output will be displayed in the terminal window by default. We’ll discuss the basic way to manage foreground processes below.

Starting a Process

By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
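
For example, a long-running command occupies the shell until it finishes or until you intervene with job control (a sketch; the specific commands used in the full tutorial may differ):

sleep 100    # blocks the shell in the foreground
# press CTRL-Z to suspend it, then:
jobs         # list jobs; the suspended sleep shows up as job [1]
bg %1        # resume job 1 in the background
fg %1        # bring it back to the foreground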

Some foreground commands exit very quickly and return you to a shell prompt almost immediately. For instance, this command:

Continue reading Use Bash’s Job Control to Manage Foreground and Background Processes

How To Install Nagios 4 and Monitor Your Servers on Ubuntu 14.04

Introduction

In this tutorial, we will cover the installation of Nagios 4, a very popular open source monitoring system, on Ubuntu 14.04. We will cover some basic configuration, so you will be able to monitor host resources via the web interface. We will also utilize the Nagios Remote Plugin Executor (NRPE), which will be installed as an agent on remote hosts to monitor their local resources.

Nagios is useful for keeping an inventory of your servers and making sure your critical services are up and running. A monitoring system like Nagios is an essential tool for any production server environment.

Prerequisites

To follow this tutorial, you must have superuser privileges on the Ubuntu 14.04 server that will run Nagios. Ideally, you will be using a non-root user with superuser privileges. If you need help setting that up, follow steps 1 through 3 in this tutorial: Initial Server Setup with Ubuntu 14.04.

A LAMP stack is also required. Follow this tutorial if you need to set that up: How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 14.04.

This tutorial assumes that your server has private networking enabled. If it doesn’t, just replace all the references to private IP addresses with public IP addresses.

Now that we have the prerequisites sorted out, let’s move on to getting Nagios 4 installed.

Install Nagios 4

This section will cover how to install Nagios 4 on your monitoring server. You only need to complete this section once.

Continue reading Install Nagios 4 and Monitor Your Servers on Ubuntu 14.04
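
As a hedged preview of the installation steps in the full post, a source build of Nagios 4 usually begins by creating a dedicated user and command group (the names below are the conventional ones; adjust them if your setup differs):

sudo useradd nagios               # user that the Nagios process will run as
sudo groupadd nagcmd              # group that lets the web interface submit external commands
sudo usermod -a -G nagcmd nagios  # add the nagios user to that group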

How To Map User Location with GeoIP and ELK (Elasticsearch, Logstash, and Kibana)

Introduction

IP Geolocation, the process used to determine the physical location of an IP address, can be leveraged for a variety of purposes, such as content personalization and traffic analysis. Traffic analysis by geolocation can provide invaluable insight into your user base, as it allows you to easily see where your users are coming from, which can help you make informed decisions about the ideal geographical location(s) of your application servers and who your current audience is. In this tutorial, we will show you how to create a visual geo-mapping of the IP addresses of your application’s users, by using a GeoIP database with Elasticsearch, Logstash, and Kibana.

Here’s a short explanation of how it all works. Logstash uses a GeoIP database to convert IP addresses into latitude and longitude coordinate pairs, i.e. the approximate physical location of an IP address. The coordinate data is stored in Elasticsearch in geo_point fields, and also converted into a geohash string. Kibana can then read the geohash strings and draw them as points on a map of the Earth, known in Kibana 4 as a Tile Map visualization.
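
For reference, the conversion on the Logstash side is handled by its geoip filter; a minimal sketch, assuming your filtered logs store the client address in a field named clientip, looks like this:

filter {
  geoip {
    source => "clientip"   # the field containing the IP address to look up
  }
}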

Let’s take a look at the prerequisites now.

Prerequisites

To follow this tutorial, you must have a working ELK stack. Additionally, you must have logs that contain IP addresses that can be filtered into a field, like web server access logs. If you don’t already have these two things, you can follow the first two tutorials in this series. The first tutorial will set up an ELK stack, and the second one will show you how to gather and filter Nginx or Apache access logs:

Download Latest GeoIP Database

MaxMind provides free and paid GeoIP databases; the paid versions are more accurate. Logstash also ships with a copy of the free GeoIP City database, GeoLite City. In this tutorial, we will download the latest GeoLite City database, but feel free to use a different GeoIP database if you wish.

Continue reading Map User Location with GeoIP and ELK (Elasticsearch, Logstash, and Kibana)
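
At the time this was written, fetching the database looked roughly like this (a sketch; MaxMind has since reorganized its downloads, so treat the URL as historical):

cd /etc/logstash
sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gunzip GeoLiteCity.dat.gz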

How To Use Kibana Dashboards and Visualizations

Introduction

Kibana 4 is an analytics and visualization platform that builds on Elasticsearch to give you a better understanding of your data. In this tutorial, we will get you started with Kibana, by showing you how to use its interface to filter and visualize log messages gathered by an Elasticsearch ELK stack. We will cover the main interface components, and demonstrate how to create searches, visualizations, and dashboards.

Prerequisites

This tutorial is the third part in the Centralized Logging with Logstash and Kibana series.

It assumes that you have a working ELK setup. The examples assume that you are gathering syslog and Nginx access logs. If you are not gathering these types of logs, you should be able to modify the demonstrations to work with your own log messages.

If you want to follow this tutorial exactly as presented, you should have the setup described in the first two tutorials in this series:

When you are ready to move on, let’s look at an overview of the Kibana interface.

Kibana Interface Overview

The Kibana interface is divided into four main sections:

  • Discover
  • Visualize
  • Dashboard
  • Settings

We will go over the basics of each section, in the listed order, and demonstrate how each piece of the interface can be used.

Continue reading How To Use Kibana Dashboards and Visualizations
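
For instance, searches in the Discover section use Lucene query syntax; a hypothetical query to isolate failed Nginx requests (assuming fields named type and response exist in your indexed logs) might look like:

type: "nginx-access" AND response: 404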

Adding Logstash Filters To Improve Centralized Logging

Introduction

Logstash is a powerful tool for centralizing and analyzing logs, which can help to provide an overview of your environment and to identify issues with your servers. One way to increase the effectiveness of your Logstash setup is to collect important application logs and structure the log data by employing filters, so the data can be readily analyzed and queried. We will build our filters around “grok” patterns, which will parse the data in the logs into useful bits of information.

This guide is a sequel to the How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04 tutorial, and focuses primarily on adding filters for various common application logs.

Prerequisites

To follow this tutorial, you must have a working Logstash server, and a way to ship your logs to Logstash. If you do not have Logstash set up, here is another tutorial that will get you started: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.

Logstash Server Assumptions:

  • Logstash is installed in /opt/logstash
  • You are receiving logs from Logstash Forwarder on port 5000
  • Your Logstash configuration files are located in /etc/logstash/conf.d
  • You have an input file named 01-lumberjack-input.conf
  • You have an output file named 30-lumberjack-output.conf
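
To make those assumptions concrete, a new filter file is typically dropped between the input and output files; here is a minimal sketch (the name 10-syslog-filter.conf is hypothetical, chosen so it sorts between 01-lumberjack-input.conf and 30-lumberjack-output.conf, since Logstash loads the conf.d files in lexical order):

# /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }   # parse standard syslog lines into fields
    }
  }
}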

Continue reading Adding Logstash Filters To Improve Centralized Logging

How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04

Introduction

In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on Ubuntu 14.04—that is, Elasticsearch 1.4.4, Logstash 1.5.0, and Kibana 4. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 4 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools work on top of Elasticsearch.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.

Our Goal

The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.
Continue reading Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04

How To Install Bacula-web on Ubuntu 14.04

Introduction

Bacula-web is a PHP web application that provides an easy way to view summaries and graphs of Bacula backup jobs that have already run. Although it doesn’t allow you to control Bacula in any way, Bacula-web provides a graphical alternative to viewing jobs from the console. Bacula-web is especially useful for users who are new to Bacula, as its reports make it easy to understand what Bacula has been doing.

In this tutorial, we will show you how to install Bacula-web on the Ubuntu 14.04 server that is running your Bacula server software.

Prerequisites

To follow this tutorial, you must have the Bacula backup server software installed on an Ubuntu server. Instructions to install Bacula can be found here: How To Install Bacula Server on Ubuntu 14.04.

This tutorial assumes that your Bacula setup is using MySQL for the catalog. If you are using a different RDBMS, such as PostgreSQL, be sure to make the proper adjustments to this tutorial. You will need to install the appropriate PHP module(s) and make adjustments to the database connection information examples.
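
For example, if your catalog lives in PostgreSQL, you would presumably swap the MySQL PHP module installed below for its PostgreSQL counterpart:

sudo apt-get install php5-pgsql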

Let’s get started.

Install Nginx and PHP

Bacula-web is a PHP application, so we need to install PHP and a web server. We’ll use Nginx. If you want to learn more about this particular software setup, check out this LEMP tutorial.

Update your apt-get listings:

sudo apt-get update

Then, install Nginx, PHP-fpm, and a few other packages with apt-get:

sudo apt-get install nginx apache2-utils php5-fpm php5-mysql php5-gd

Now we are ready to configure PHP and Nginx.

Configure PHP-FPM

Open the PHP-FPM configuration file in your favorite text editor. We’ll use vi:

sudo vi /etc/php5/fpm/php.ini

Find the line that specifies cgi.fix_pathinfo, uncomment it, and replace its value with 0. It should look like this when you’re done:

cgi.fix_pathinfo=0

Now find the date.timezone setting, uncomment it, and replace its value with your time zone. We’re in New York, so that’s what we’re setting the value to:

Continue reading Install Bacula-web on Ubuntu 14.04
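
Given the New York example, the finished line would look like this (substitute any zone from PHP's supported time zone list):

date.timezone = America/New_York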

Back Up an Ubuntu 14.04 Server with Bacula

Introduction

This tutorial will show you how to set up Bacula to create backups of a remote Ubuntu 14.04 host, over a network connection. This involves installing and configuring the Bacula Client software on a remote host, and making some additions to the configuration of an existing Bacula Server (covered in the prerequisites).

If you are trying to create backups of CentOS 7 hosts, follow this link instead: How To Back Up a CentOS 7 Server with Bacula.

Prerequisites

This tutorial assumes that you have a server running the Bacula Server components, as described in this link: How To Install Bacula Server on Ubuntu 14.04.

We are also assuming that you are using private network interfaces for backup server-client communications. We will refer to the private FQDNs of the servers (FQDNs that point to the private IP addresses). If you are using IP addresses, simply substitute the connection information where appropriate.

For the rest of this tutorial, we will refer to the Bacula Server as “BaculaServer”, “Bacula Server”, or “Backup Server”. We will refer to the remote host that is being backed up as “ClientHost”, “Client Host”, or “Client”.

Let’s get started by making some quick changes to the Bacula Server configuration.

Organize Bacula Director Configuration (Server)

On your Bacula Server, perform this section once.

When setting up your Bacula Server, you may have noticed that the configuration files are excessively long. We’ll organize the Bacula Director configuration a bit, so that it uses separate files for new configuration such as jobs, file sets, and pools.

Let’s create a directory to help organize the Bacula configuration files:

sudo mkdir /etc/bacula/conf.d

Then open the Bacula Director configuration file:

sudo vi /etc/bacula/bacula-dir.conf

At the end of the file, add this line:

bacula-dir.conf — Add to end of file
@|"find /etc/bacula/conf.d -name '*.conf' -type f -exec echo @{} \;"

Save and exit. This line makes the Director look in the /etc/bacula/conf.d directory for additional configuration files to append. That is, any .conf file added in there will be loaded as part of the configuration.

Add RemoteFile Pool

We want to add an additional Pool to our Bacula Director configuration, which we’ll use to configure our remote backup jobs.

Open the conf.d/pools.conf file:

sudo vi /etc/bacula/conf.d/pools.conf

Add the following Pool resource:

conf.d/pools.conf — Add Pool resource
Pool {
  Name = RemoteFile
  Pool Type = Backup
  Label Format = Remote-
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}

Save and exit. This defines a “RemoteFile” pool, which will be used by the backup job that we’ll create later. Feel free to change any of the parameters to meet your own needs.

Continue reading Back Up an Ubuntu 14.04 Server with Bacula
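
For orientation, the remote backup job defined later will reference this pool; a minimal sketch of such a Job resource, with hypothetical names, might look like:

Job {
  Name = "BackupClientHost"     # hypothetical job name
  JobDefs = "DefaultJob"        # inherit settings from a standard JobDefs resource
  Client = ClientHost-fd        # the remote host's File Daemon
  Pool = RemoteFile             # use the pool defined above
}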