Tag Archives: Nginx

How to Scale Django: Beyond the Basics

Getting Started

You’ve deployed Django to your Droplet and life is good. You bumped into some performance problems as your site’s traffic grew, but you found the bottleneck and fixed it. However, your site’s traffic keeps growing, and you need still more performance. What can you do?

Let’s dig into the guts of our application and server configuration a little. This article is written on the assumption that you’re using Ubuntu 12.04, but the principles work with any version of Linux.

If you’re using Apache, you should have already followed the instructions for optimizing your web server. If you are using Nginx, these tips will work for you as well.
Continue reading How to Scale Django: Beyond the Basics

How To Install an Nginx, MySQL, and PHP (FEMP) Stack on FreeBSD 10.1

Introduction

Nginx, MySQL, and PHP can be combined easily into a powerful solution for serving dynamic content on the web. These three pieces of software can be installed and configured on a FreeBSD machine to create what is known as a FEMP stack.

In this guide, we will demonstrate how to install a FEMP stack on a FreeBSD 10.1 server. We will be installing the software using packages in order to get up and running more quickly. These packages provide reasonable defaults that work well for most servers.

Install the Components

To begin, we will install all of the software we need using the FreeBSD package system. The “install” command will update our local copy of the available packages and then install the packages we have requested.
Continue reading How To Install an Nginx, MySQL, and PHP (FEMP) Stack on FreeBSD 10.1
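
Picking up that command, a minimal sketch of the installation step might look like the following; the MySQL and PHP package names (mysql56-server, php56, php56-mysql) are assumptions based on the FreeBSD 10.1 repositories and may differ for your release:

# Update the local package catalog if needed and install Nginx, MySQL, and PHP.
# The MySQL and PHP package names are assumptions and may vary by release.
sudo pkg install nginx mysql56-server php56 php56-mysql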

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching

Introduction

In this guide, we will discuss Nginx’s HTTP proxying capabilities, which allow Nginx to pass requests off to backend HTTP servers for further processing. Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads.

Along the way, we will discuss how to scale out using Nginx’s built-in load balancing capabilities. We will also explore buffering and caching to improve the performance of proxying operations for clients.

General Proxying Information

If you have only used web servers in the past for simple, single server configurations, you may be wondering why you would need to proxy requests.

One reason to proxy to other servers from Nginx is the ability to scale out your infrastructure. Nginx is built to handle many concurrent connections at the same time, which makes it ideal as the point of contact for clients. The server can pass requests to any number of backend servers to handle the bulk of the work, which spreads the load across your infrastructure. This design also gives you the flexibility to add backend servers easily or to take them down as needed for maintenance.
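
As a rough illustration, spreading requests across several backends is expressed in Nginx with an upstream group along these lines (a minimal sketch; the group name app_backend and the addresses are placeholders):

# A hypothetical pool of backend servers, defined in the http context of the
# Nginx configuration; adding or removing a "server" line brings a backend
# into or out of rotation.
upstream app_backend {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}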

Another instance where an HTTP proxy might be useful is when using application servers that are not built to handle requests directly from clients in production environments. Many frameworks include web servers, but most of them are not as robust as servers designed for high performance like Nginx. Putting Nginx in front of these servers can lead to a better experience for users and increased security.

Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing. The result of the request is passed back to Nginx, which then relays the information to the client. The other servers in this instance can be remote machines, local servers, or even other virtual servers defined within Nginx. The servers that Nginx proxies requests to are known as upstream servers.
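
A minimal sketch of this hand-off, assuming a hypothetical backend application listening on 127.0.0.1:8080, might look like this:

# Relay incoming requests to a backend application server; the address
# 127.0.0.1:8080 is a placeholder for whatever is actually handling the work.
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;

        # Forward details about the original client request to the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}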

Nginx can proxy requests to servers that communicate using the HTTP(S), FastCGI, SCGI, uwsgi, or memcached protocols through separate sets of directives for each type of proxy. In this guide, we will be focusing on the HTTP protocol. The Nginx instance is responsible for passing on the request and massaging any message components into a format that the upstream server can understand.
Continue reading Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching

Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 14.04 LTS

Introduction

When using the Nginx web server, server blocks (similar to the virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain off of a single server.

In this guide, we’ll discuss how to configure server blocks in Nginx on an Ubuntu 14.04 server.

Prerequisites

We’re going to be using a non-root user with sudo privileges throughout this tutorial. If you do not have a user like this configured, you can make one by following steps 1-4 in our Ubuntu 14.04 initial server setup guide.

You will also need to have Nginx installed on your server. If you want an entire LEMP (Linux, Nginx, MySQL, and PHP) stack on your server, you can follow our guide on setting up a LEMP stack in Ubuntu 14.04. If you only need Nginx, you can install it by typing:

sudo apt-get update
sudo apt-get install nginx

When you have fulfilled these requirements, you can continue on with this guide.

For demonstration purposes, we’re going to set up two domains with our Nginx server. The domain names we’ll use in this guide are example.com and test.com.

You can find a guide on how to set up domain names with DigitalOcean here. If you do not have two spare domain names to play with, use dummy names for now and we’ll show you later how to configure your local computer to test your configuration.
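
As a rough preview of where this guide is headed, the finished configuration will contain one server block per domain, along the lines of the following minimal sketch (the document root paths are simplified placeholders):

# One server block answers for each domain and serves files from its own
# document root; real configurations typically include more directives.
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com/html;
}

server {
    listen 80;
    server_name test.com www.test.com;
    root /var/www/test.com/html;
}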

Step One — Set Up New Document Root Directories

Nginx on Ubuntu 14.04 has one server block enabled by default. It is configured to serve documents out of a directory at:

/usr/share/nginx/html

We won’t use the default document root, since it is easier to work with things in the /var/www directory. Ubuntu’s Nginx package does not use /var/www as its document root by default because of a Debian policy about packages utilizing /var/www.
Continue reading Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 14.04 LTS
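
The first step in the full tutorial is creating those document root directories, which amounts to something like the following:

# Create a document root for each site under /var/www
# (example.com and test.com are the guide's placeholder domains).
sudo mkdir -p /var/www/example.com/html
sudo mkdir -p /var/www/test.com/html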

Install Linux, nginx, MySQL, PHP (LEMP) stack on Ubuntu 14.04

Introduction

The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. The name is an acronym describing a Linux operating system with an Nginx web server. The backend data is stored in MySQL, and the dynamic processing is handled by PHP.

In this guide, we will demonstrate how to install a LEMP stack on an Ubuntu 14.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.
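
As a rough sketch of the installation commands (the full guide walks through each component and its configuration in turn), on Ubuntu 14.04 the relevant packages are nginx, mysql-server, php5-fpm, and php5-mysql:

sudo apt-get update
# Install the web server, database server, and PHP processor; each component
# is configured separately in the full tutorial.
sudo apt-get install nginx mysql-server php5-fpm php5-mysql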

Note: The LEMP Stack can be installed automatically on your Droplet by adding this script to its User Data when launching it. Check out this tutorial to learn more about Droplet User Data.

Prerequisites

Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing steps 1-4 in our Ubuntu 14.04 initial server setup.

Once you have your account available, sign in to your server with that username. You are now ready to begin the steps outlined in this guide.
Continue reading Install Linux, nginx, MySQL, PHP (LEMP) stack on Ubuntu 14.04

Troubleshoot Common HTTP Error Codes

Introduction

When you access a web server or application, every HTTP request that the server receives is answered with an HTTP status code. HTTP status codes are three-digit codes, grouped into five different classes. The class of a status code can be quickly identified by its first digit:

  • 1xx: Informational
  • 2xx: Success
  • 3xx: Redirection
  • 4xx: Client Error
  • 5xx: Server Error

This guide focuses on identifying and troubleshooting the most commonly encountered HTTP error codes, i.e. 4xx and 5xx status codes, from a system administrator’s perspective. There are many situations that could cause a web server to respond to a request with a particular error code; we will cover common potential causes and solutions.

Client and Server Error Overview

Client errors, or HTTP status codes from 400 to 499, are the result of HTTP requests sent by a user client (i.e. a web browser or other HTTP client). Even though these types of errors are client-related, it is often useful to know which error code a user is encountering to determine if the potential issue can be fixed by server configuration.

Server errors, or HTTP status codes from 500 to 599, are returned by a web server when it is aware that an error has occurred or is otherwise not able to process the request.

General Troubleshooting Tips

  • When using a web browser to test a web server, refresh the browser after making server changes
  • Check server logs for more details about how the server is handling the requests. For example, web servers such as Apache or Nginx produce two files called access.log and error.log that can be scanned for relevant information (see the example command after this list)
  • Keep in mind that HTTP status code definitions are part of a standard that is implemented by the application that is serving requests. This means that the actual status code that is returned depends on how the server software handles a particular error; this guide should generally point you in the right direction
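
For example, with Nginx on Ubuntu the default log locations are under /var/log/nginx, and the logs can be followed while reproducing the error:

# Watch the access and error logs while reproducing the problem;
# adjust the paths if your server logs somewhere else.
sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log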

Now that you have a high-level understanding of HTTP status codes, we will look at the commonly encountered errors. Continue reading Troubleshoot Common HTTP Error Codes

Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04

Introduction

In this tutorial, we will go over the installation of the ELK stack on Ubuntu 14.04—that is, Elasticsearch 1.4.4, Logstash 1.5.0, and Kibana 4. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 4 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools rely on Elasticsearch to store and search the log data.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.

Our Goal

The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.
Continue reading Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04