These instructions are for the Amazon Linux AMI, but the configuration options themselves are platform agnostic.
Update: The original scope of this article was the t1.micro instance type. Amazon has since introduced the t2 family of instances, which work in a very similar way. The article has been updated with information about the new types, and the focus is now the even cheaper t2.micro instance type; the rest of the content is still relevant as-is.
The t2 family and the old t1.micro instance work in a special way with their virtual CPU. They're designed to run in a mostly idle state, with support for occasional bursts of processing power. During these bursts, CPU power gets boosted until the allotment is exhausted. The old t1.micro type would then get throttled down by an undetermined amount for an undetermined time, while the new t2 types simply stay at their baseline level. The basic idea seems like a really good fit for a small web server. Whether you'll need to micromanage this is entirely up to the actual sites/applications running on the instance.
For t2.micro, see: T2 Instances in AWS Documentation
For t1.micro, see: T1 Micro Instances in AWS Documentation
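If you want to keep an eye on how much burst capacity a t2 instance has left, CloudWatch exposes a CPUCreditBalance metric for it. Here's a minimal sketch using the AWS CLI (preinstalled on the Amazon Linux AMI); the instance ID is a placeholder and the one-hour time range is just an example:
# Average CPU credit balance over the last hour, in 5-minute datapoints.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
  --period 300 \
  --statistics Average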
Working around the processing power bottleneck depends a lot on the code you'll be running on the server. If you keep getting hit with the throttling, your next steps (after all of the other things mentioned in this article) might be to…
- …upgrade the instance type. Moving from t2.micro to t2.small doubles the price but could be worth it. Then there are t2.medium and m3.medium, which multiply the price once again but come with a nice amount of extra CPU power and memory compared to the micro instance. The upgrade itself is easily done by shutting down the instance, changing its type, then starting it up again, assuming you can handle a couple of minutes of downtime (see the CLI sketch after this list).
- …move your database to RDS. This option should be on the table if the problem ends up being resource contention between Nginx + PHP-FPM and the MySQL server. Optimizing your SQL queries should take priority over doing this, but that's not always easy if you're not the author of the software you're using.
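For reference, here's a rough sketch of doing that instance type change with the AWS CLI instead of the console; the instance ID and target type are placeholders, and the AWS CLI needs to be set up with suitable credentials:
# Stop the instance, change its type, then start it again (a few minutes of downtime).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.small\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0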
You can run a fairly big site on a micro instance, but you'll have to consider the limitations. A high-traffic WordPress-based blog is entirely possible, as long as you keep in mind that a micro instance has a limited amount of memory and processing power. You just have to set proper limits for your server and enable caching. You'd be surprised how much stress testing a micro instance can handle if everything's cached properly.
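If you want to test that claim yourself once everything is set up, ApacheBench from the httpd-tools package is a quick, if blunt, way to do it. Treat the numbers as a rough indication only, and ideally run the test from a separate machine; the domain and request counts here are placeholders:
sudo yum install httpd-tools
ab -n 1000 -c 50 http://domain.com/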
More thoughts about the low-end instances can be found in another article:
Cheapest EC2 instances: How they compare
Why Nginx?
Why Nginx instead of Apache? Nginx is getting increasingly popular because it's lighter on resources and smarter about managing its processes. Things you'd use mod_rewrite for (fancy URLs and redirects), mod_headers for (setting caching headers) and mod_deflate for (compressing content) are all easily done with Nginx. Before getting all excited and moving everything you have to be served by Nginx, do make sure you don't rely on any special Apache functionality you can't replicate with Nginx.
Step 1: Initial setup
This article doesn't cover the basics of registering an AWS account, starting an EC2 instance and connecting to it over SSH. We'll start from a newly launched instance of the 64-bit Amazon Linux AMI.
Start working on a new instance by making sure everything's up to date with:
sudo yum update
I also prefer to install yum-plugin-changelog to see what's new when I run updates manually. To do that, just run:
sudo yum install yum-plugin-changelog
Then, whenever you're updating, run yum with:
sudo yum update --changelog
Step 2: Swap file
We only have 1 GB of memory to work with and no swap to extend it. That alone is a major cause of server crashes and hangups. Simply creating a swap file is a well-known, effective micro instance tweak.
Create and activate a 1 GB swap file with the following:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo mkswap /swapfile
sudo chmod 0600 /swapfile
sudo swapon /swapfile
We generate the file with dd, make a swap filesystem with mkswap, set proper permissions with chmod, and mount it with swapon.
Finally, add the following line to /etc/fstab to make it persist over reboots:
/swapfile swap swap defaults 0 0
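You can check that the swap is active with:
swapon -s
free -m
swapon -s lists the active swap areas, and free -m shows the overall memory and swap situation in megabytes.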
Step 3: Nginx and PHP-FPM
As of this writing, the php-fpm package from Amazon's repo offers PHP 5.3. You can go with that and upgrade when it gets bumped to a newer version, or start with the php54-fpm package. PHP 5.4 brings some memory and performance optimizations, which are always a plus for this project. We'll be using the php54-fpm package here.
Install Nginx and PHP-FPM with:
sudo yum install nginx php54-fpm
Step 3.1: PHP-FPM configuration
Add the following to the end of /etc/php-fpm.d/www.conf:
[global]
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

[www]
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0664
user = nginx
group = nginx
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 200
php_admin_value[memory_limit] = 64M
Here's a summary of the configuration:
- The emergency_restart_* and process_control_timeout options are there to set useful auto-restart behavior for the rare case it's needed (source).
- Basic listen, user and group options in the www pool are then set to make PHP-FPM work. A Unix socket is used instead of a TCP connection, but more on that a couple of paragraphs down.
- The process manager is configured with the pm.* options. Here, we're setting a sensible configuration for an average case.
- PHP's memory_limit is lowered from the 128 MB that has been set in the default configuration.
This configuration confines PHP a bit to make it work with the available memory, while still allowing it to use the newly created swap file if it needs to.
A Unix socket is now used to communicate with PHP-FPM. This is a slightly faster option since it bypasses the TCP overhead, but it may not be the best option once your traffic grows high enough. If you reach that point and get Nginx-to-PHP communication errors, or just expect a fairly high visitor count, comment out or remove the listen = /var/run/php-fpm/php-fpm.sock line, as the configuration will then fall back to the preceding default of listen = 127.0.0.1:9000. You'll also need to change the relevant Nginx configuration line below from fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; to fastcgi_pass 127.0.0.1:9000;.
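In other words, if you end up making that switch, the two changed spots would look something like this sketch:
; /etc/php-fpm.d/www.conf: comment out our socket line so the earlier TCP default applies
;listen = /var/run/php-fpm/php-fpm.sock

# /etc/nginx/conf.d/web.conf: point Nginx at the same TCP address
fastcgi_pass 127.0.0.1:9000;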
You'll also need to create a directory for PHP session files if you're going to be using sessions. Following the default configuration, you can do this with:
sudo mkdir /var/lib/php/session
sudo chmod 1777 /var/lib/php/session
Step 3.2: Nginx configuration
Then create your Nginx configuration. Here's an example with some typical options, which you can create as e.g. /etc/nginx/conf.d/web.conf:
server_tokens off;
tcp_nopush on;

gzip on;
gzip_types text/css application/x-javascript;

fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=CACHE:100m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    server_name domain.com;
    root /www/domain.com;
    index index.php index.html;

    access_log /var/log/nginx/domain.com_access.log;
    error_log /var/log/nginx/domain.com_error.log;

    set $no_cache 0;
    if ($query_string != "") {
        set $no_cache 1;
    }
    if ($request_uri ~ "/admin/") {
        set $no_cache 1;
    }

    location ~* \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_cache CACHE;
        fastcgi_cache_methods GET HEAD;
        fastcgi_cache_valid 200 1m;
        fastcgi_cache_bypass $no_cache;
        fastcgi_no_cache $no_cache;
    }

    location ~* \.(css|js|jpg|png|gif)$ {
        expires 1w;
    }
}

server {
    listen 80;
    server_name www.domain.com;
    return 301 http://domain.com$request_uri;
}
The above configuration includes some additional, optional lines (the fastcgi_cache_* directives and the $no_cache logic). I'll explain these later, in Step 3.3: Optional Nginx configuration.
Nginx behaves really well out of the box, so I won't go into the deeper worker process options here; a micro instance only has one virtual CPU available anyway.
We're using /www as the location of our web content. Before launching the server, you can and should at least create the directory structure with:
sudo mkdir -p /www/domain.com
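The directory will be owned by root at this point. If you plan to upload your content as the default ec2-user, one option (an assumption about your workflow, not a requirement) is to hand the directory over to that user; Nginx only needs read access, which the default permissions already provide:
sudo chown -R ec2-user:ec2-user /www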
Here's a summary of the configuration:
- server_tokens off isn't in any way necessary, but just a small step to mask the precise version of Nginx we're using. This'll make the server report itself as Server: nginx instead of the default format of Server: nginx/1.4.7.
- tcp_nopush on helps to serve files to visitors more efficiently. This requires a sendfile on setting, which is already enabled in the default configuration.
- gzip on and gzip_types enable some useful compression features. This can reduce bandwidth usage, which adds up on a high-traffic site. The extra processing power needed to compress files is mostly irrelevant nowadays, and it shouldn't even affect the micro instance. A thing to note in the gzip configuration is that Nginx sets the MIME type of .js files by default as application/x-javascript instead of Apache's default text/javascript. Nginx also compresses text/html by default, so that's not needed on this list. You can add more MIME types to the list whenever needed.
- The first server block defines the default virtual host. This is what you're going to get when you try to access the server by its IP, or just by an unknown domain name. This configuration simply returns an HTTP status of 444 No Response to reject all such requests. You could also define a root and place an informative index.html file in it, or redirect to a proper domain by copying the example from the third server block.
- The second server block defines our virtual host. We define the domain name, separate log files, enable PHP and set up moderate caching for common static files.
- The third server block makes the server redirect any request for www.domain.com to domain.com. You can also switch these the other way around if you prefer to use the www. version.
The location block for launching PHP is slightly different from examples found all over the internet:
- I added try_files $uri =404 to fix a common issue with PATH_INFO. There are quite a few ways to fix it, and you might want to use another one if your chosen software doesn't play well with this one. You can read more about it here.
- The include line uses fastcgi.conf instead of the more commonly seen fastcgi_params. The latter is an older version of the same thing and is missing the line fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name when compared to fastcgi.conf. That definition is critical for PHP to work, and lots of tutorials still include the old fastcgi_params and define SCRIPT_FILENAME separately.
Step 3.3: Optional Nginx configuration
The configuration above includes an optional part, the FastCGI caching lines and the $no_cache logic, which I'll explain here.
FastCGI caching
The worst slowdown your server is going to have is most probably the execution of heavy or slow PHP scripts. You might have a CMS that accesses a MySQL database and processes page content. While doing this once isn't a hard thing to do, it causes huge issues when the server has to do it concurrently with all PHP-FPM threads. For a high-traffic site or just a spike in traffic, additional visitors get rejected because the configuration doesn't allow for more threads.
Fortunately, FastCGI in Nginx includes great caching functionality, so you don't have to install and integrate Varnish or memcached. As the context of the configuration options might reveal, this caching only applies to dynamic PHP content. Other requests don't go through unnecessary caching, and Nginx does a great job of serving them. You might want your dynamic pages to always be fresh and skip caching for that reason, but that's exactly why we're only caching them for one minute. It helps immensely in a high-traffic scenario, allowing Nginx to serve PHP content to visitors without having to reject a good chunk of requests.
If you allow visitors to log in for personalized content or absolutely require real-time dynamic content, don't enable FastCGI caching.
The configuration sets a common style of FastCGI caching, but with a shorter caching time and two exceptions:
- The query string isn't empty or missing. If a PHP page request has arguments, the script might need to do live processing based on that data. You might not need this clause: if you're running a CMS without pretty URLs (like domain.com/index.php?page=products instead of domain.com/products.html), you may want to leave it out of your configuration, although I'd recommend looking up a way to clean up the URLs if that's the case.
- The request is for an admin directory. This is just an example, as your admin directory might not be at domain.com/admin/. This clause also enables you to exclude other URLs that shouldn't be cached. Change this one according to your needs, or just leave it out if it's not needed at all.
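As a hypothetical example of such an extra exclusion, if you're running the WordPress blog mentioned earlier, a clause alongside the existing ones could look like this:
# Skip the cache for the WordPress admin area and login page.
if ($request_uri ~ "/wp-(admin|login)") {
    set $no_cache 1;
}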
There's a chance that you might also need to try using this:
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
This is because FastCGI caching doesn't cache pages that set Cache-Control: no-cache or similar, or ones that set cookies. This line, however, might break things. If you put it to use, make sure you test that the drawbacks don't affect you. Getting your PHP code or application to not send no-cache headers is also a good option to explore.
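A handy way to check what the cache is actually doing is to expose Nginx's $upstream_cache_status variable as a response header while testing. Add this inside the PHP location block, reload Nginx, and request a page a couple of times (e.g. with curl -I http://domain.com/):
# For testing only: report the cache status (MISS, HIT, BYPASS, ...) in a response header.
add_header X-Cache $upstream_cache_status;
The first request should report MISS, the following ones HIT, and requests matching the no-cache conditions BYPASS. Remove the line again once you're satisfied.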
Step 3.4: Start the server
The last thing to do is to set these to run at startup, and start them up:
sudo chkconfig nginx on
sudo chkconfig php-fpm on
sudo service nginx start
sudo service php-fpm start
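At this point it's worth confirming that Nginx and PHP-FPM actually talk to each other. One quick way, assuming the /www/domain.com root from the example configuration, is a temporary phpinfo() file; remove it afterwards, since it leaks configuration details:
# Create a temporary test file in the site root.
echo '<?php phpinfo();' | sudo tee /www/domain.com/info.php
# Request it through Nginx; the Host header makes the request hit the right server block.
curl -s -H 'Host: domain.com' http://localhost/info.php | grep 'PHP Version'
# Clean up.
sudo rm /www/domain.com/info.php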
Step 4: MySQL server
Install MySQL server and the related PHP extension with:
sudo yum install mysql-server php54-mysql
Just like we did before with the web server, start things up with:
sudo chkconfig mysqld on
sudo service mysqld start
Finally, secure the database server by setting up a root password and accepting the other steps:
sudo mysql_secure_installation
Creating the swap file earlier was probably the best thing we could do for the MySQL server, and for letting it, Nginx and PHP live together on the same micro instance. Your mix of software, be it a blog, CMS or something else, ultimately determines whether a micro instance is enough. If it just won't hold up and you can't optimize the code or queries, you can always try the options listed at the start of the article.
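If the MySQL server still ends up fighting the rest of the stack for memory, trimming its buffers is a reasonable step before the bigger moves. These values are only a hypothetical starting point for a 1 GB instance; add them under the [mysqld] section of /etc/my.cnf, tune them to your actual workload, then restart MySQL with sudo service mysqld restart:
[mysqld]
# Conservative memory settings for a micro instance (hypothetical starting point).
innodb_buffer_pool_size = 64M
key_buffer_size = 16M
max_connections = 50
tmp_table_size = 16M
max_heap_table_size = 16M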
Don't stop there
Also read the follow-up articles: