PS Optimization

Understanding PS Memory Usage

Every DreamHost Private Server (PS) uses roughly 100MB of memory as a baseline, before any user processes are running. That's because a number of system processes have to run in order for your PS to work (sshd, proftpd, etc.). It's important to keep this in mind when allocating memory for your PS: that initial 100MB is largely unusable for your site processes. Beyond that baseline, most of the memory used by your PS will consist of Apache and PHP processes (or, in the case of Rails applications, Passenger and Rails processes). Whenever someone views your website, the PHP file for the page they requested has to be parsed and executed by a PHP CGI process. This generally shows up as a "php5.cgi" or "php.cgi" process in a process list. At the same time, an Apache process runs to serve the resulting HTML output from the script to the browser. These processes tend to show up as longer strings in process lists, but will contain the word "apache".

The number of these processes is proportional to the amount of traffic you're getting. If you have a large influx of traffic, additional PHP and Apache processes will spawn to handle it. In fact, unless you're careful to set things up to prevent it, Apache and PHP will obligingly continue spawning processes as demand warrants until your PS is completely out of memory. This often leads to a situation where websites on a PS stop working due to memory saturation during peak hours and then happily start working again once traffic dies down to more manageable levels.

Another very important thing to keep in mind is that all PSs are configured to scale the number of Apache instances allowed to spawn with the amount of memory you allocate. In most cases this works fine, as long as your site isn't getting hammered with abusive accesses. It also means that if your site is getting a significant amount of legitimate traffic, increasing the memory allocation simply allows more of that traffic to be served, including visitors who would previously have gotten extremely slow load times or errors. So until you hit the amount of memory needed to serve all of your viewers optimally, your memory usage may scale upwards as you increase your memory limit. If you understand how Apache configurations work and need custom settings or limits, you can contact DreamHost tech support to see if they can adjust your Apache settings. This is an advanced request, though, and they may not do what you ask unless you have a good rationale and know what you're talking about. That said, the default behavior works fine in the vast majority of cases.

Now that you have a basic idea of where memory usage is likely to be going, the next step is to look at what's actually happening on your PS.

Seeing What's Going On

The memory usage graph in your web panel is helpful for seeing usage trends, but it doesn't give a good picture of what's actually happening in real time. To get more detail, you'll need to SSH into your PS; in fact, you might want to open two connections in separate windows. Once you're in, the primary tools you'll be using are top -c, free -m, and ps aux. The top command displays the currently active processes, the percentage of CPU each is using, how much memory, which user is running it, and so on. Once top is running, you can press shift-m to sort processes by memory usage rather than CPU usage. In your other window, you can run free -m. That displays the current memory usage on your PS and should look something like this:

               total       used       free     shared    buffers     cached
  Mem:          4049       3941        107          0        123       1639
  -/+ buffers/cache:       2178       1870
  Swap:         6165         42       6122

The -m option tells the command to display the memory usage in megabytes, so the numbers you see are megabytes used and free. In the above example, the total available memory is roughly 4GB, 2.1GB are used and 1.8GB are free. The total you see should correspond to the amount of memory you allocated to your PS in your web panel.
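
If you want to pull that free figure out in a script, you can filter the same output with awk. This is just a sketch; the column layout of free varies a bit between versions, so check it against your own output first. Run against the example above, it would print "1870 MB free":

  free -m | awk '/buffers\/cache/ {print $NF " MB free"}'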

The most useful information will come from top and ps. The ps command supplies similar information to top, but simply takes a snapshot of the active processes and their usage and dumps it to the screen. This is useful if you only want to see particular processes. For instance, to see only the running Apache processes, you could run this:

  ps aux | grep apache

That will "pipe" (or pass) the output from ps to the grep command, which filters it line by line looking for the string "apache". Any line containing that string will be displayed. Another useful variation is:

  ps aux | grep php

to see all of the running PHP processes.
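
If you want a quick total rather than a raw listing, you can have awk add up the resident memory of the matching processes (the RSS column of ps aux output, reported in kilobytes). A sketch; the brackets in '[p]hp' are a common trick to keep grep from matching its own process:

  ps aux | grep '[p]hp' | awk '{count++; sum+=$6} END {printf "%d processes using %.1f MB\n", count, sum/1024}'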

Interpreting Top

While examining your top output, it's important to know what information to look at. The memory allocation you set in your web panel relates to physical memory being used (as opposed to virtual memory), which corresponds to the RES column in top's output. Below is what some top output sorted by memory (shift-m) might look like (the actual processes on your PS will likely differ).

[Image: Ps optimization top example.jpg (sample top output sorted by memory)]

In the above example, you'll see a series of php5.cgi processes running. The RES column shows how much memory each is taking up; in this case, between 13 and 15MB apiece. On a busy PS, you'll also likely have quite a few Apache processes running, averaging around 9 to 11MB of memory each. Apache and PHP processes multiply as your sites serve more requests, so memory usage can quite easily skyrocket if you start getting hammered with a lot of traffic.
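
If you'd rather take a one-off snapshot than watch top, ps can produce a similar listing sorted by resident memory. A sketch (the --sort option is specific to the GNU/procps ps found on Linux systems like your PS):

  ps aux --sort=-rss | head -15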

Recovering from Memory Saturation

Memory saturation is when your PS is using up its entire allotment of memory. When this happens, your sites will likely stop responding, and beyond that, other essential processes on your PS, such as the SSH server, FTP server, or streaming media server, may stop responding as well. It's entirely possible for your PS to get into a state where you can't even log in due to memory saturation. You can usually tell if this is happening in a couple of ways. First, check the Private Servers -> Manage Resources area of your web panel: if the resource usage graph shows your memory spiking high above what you have allocated, this is likely what's happening. If you can still log in to your PS via SSH, try running free -m to see how much free memory you have available. The closer that number is to zero, the worse off you are; at zero you've reached total memory saturation, and exactly what happens then is unpredictable.

Once you've determined this is what you're experiencing, the fastest way out is to temporarily increase the memory allocation for your PS until you have some breathing room. At that point you can either leave the allocation there until you've optimized things and gotten your memory usage under control, or start disabling active sites until memory drops into a range you're comfortable with, giving you room to optimize without spending more money on memory. To disable a site, you can simply rename its web directory. For example, for a domain named example.com, you could rename its web directory from example.com to example.com_disabled. Apache will then start serving 404 Not Found error pages for that domain. Serving those still requires memory, but it keeps new PHP processes from starting and will likely reduce traffic enough that your server stops choking.
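
As a concrete sketch, assuming a user named "bob" hosting example.com out of the usual web directory in his home folder, the rename would look like this (swap the two arguments to re-enable the site):

  mv /home/bob/example.com /home/bob/example.com_disabled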

Simulating High Traffic

If your memory usage spikes at certain times of day, it can be hard to figure out what's happening, since things are often running well by the time you're able to check. In that case, it's important to be able to simulate the conditions causing the problem. A great tool for this is the httperf command, a tool for UNIX-like operating systems that lets you measure web server performance. It's a fairly complex tool that can do a lot, but the simplest usage looks something like this:

  httperf --hog --server=www.example.com --num-conns=1000 --rate=20 --timeout=5

That command will hit the supplied domain with 1000 connections at a rate of 20 connections per second. Connections that get no response within 5 seconds are dropped. The --hog parameter simply allows it to hog network ports on your system, which allows more outgoing connections. You'll need to adjust the rate and connection count depending on the amount of traffic you're trying to simulate.

So, to see what's going on, you'll likely want three terminal windows open. One will be SSH'd into your PS with top -c running sorted by memory (shift-m). Another will be SSH'd into your PS where you can repeatedly run free -m (or just run watch free -m) while httperf is running to monitor memory usage. And the third will be running httperf from your local machine (you do NOT want to run this from your PS itself).

This is great for profiling your sites and checking to see if caching is working. You can run this for each of your domains and see which ones affect your memory usage the most so you know where to focus your attention when working on optimizing things.

Another tool that can do something similar, though with fewer features, is the Apache Benchmark tool (ab). Usage looks like this:

  ab -n 1000 -c 20 http://www.example.com/

That will make 1000 requests to http://www.example.com/ with at most 20 running concurrently (note that ab requires a trailing slash or explicit path in the URL). Between the two tools, you should be able to get a good idea of what's going on with your sites under higher traffic.

You can find links to get more information on httperf and Apache Benchmark in the #External Links section below.

Optimizing Your PS

Now that you have a basic idea of what's going on with your PS, you can start working on optimizing things to (hopefully) reduce your overall memory usage.

Blocking Abusive Access

A lot of the time, problems are caused by search engine indexing bots or abusive visitors hammering your sites. When search engine bots hit a site, they usually hit it pretty hard. If you haven't already, you should definitely set up a Robots.txt file. Beyond that, it's possible you're being hit by someone crawling your site and systematically downloading every page, image, and file there is; this is especially likely if your site contains a lot of media (video, images, audio, etc.). Finding abusive accesses like this can often be more of an art than a science, but you can get some data by analyzing the access.log file for each of your websites. There is a wealth of information on this topic in the Finding Causes of Heavy Usage page, but the skinny is to navigate to the directory containing the access.log file for your site. Say you have a user named "bob" and a website at example.com. You'd find the file in the following directory on your PS:

  /home/bob/logs/example.com/http/

Once you're in that directory you can try running the following command:

  awk '{print $1}' access.log | sort | uniq -c | sort -n

That will print out a list of unique IPs and the number of times each has hit your site today, sorted by hit count. If you have extremely high traffic, your log can get quite large; large enough, in fact, that the above command may not run successfully on your PS. In those cases, you'll need to download the log file and run the command locally to get a full listing. In the meantime, you can use a variation that limits itself to a number of lines from the end of the file:

  tail -10000 access.log | awk '{print $1}' | sort | uniq -c | sort -n

That command will use the last 10,000 lines of your access.log file. For really large log files, you can try incrementing that number until the command fails; with logs 500+ MB in size, you should be able to go as high as 1,000,000 or so before it starts dying. Once you have the listing, you can further investigate the IPs with high hit counts by running the host command on them:

  $ host 209.85.171.100
  100.171.85.209.in-addr.arpa domain name pointer cg-in-f100.google.com.

The above shows this being done for one of google.com's IPs. The important part is on the right side, where you can tell the IP belongs to Google. In many cases, you can spot search engine indexing bots because the resolved hostname contains "crawler" or "bot". It's up to you to determine which IPs are abusive and which are legitimate. Once you find IPs you suspect are abusive, you can block them from your site by adding the lines mentioned in the Finding Causes of Heavy Usage article.
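
The exact lines to add are covered in that article, but as a rough sketch, a deny rule in a site's top-level .htaccess file (/home/bob/example.com/.htaccess in the running example) looks something like this under Apache; the IP here is just a placeholder:

  Order Allow,Deny
  Allow from all
  Deny from 209.85.171.100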

You'll need to repeat this process for each of your sites.

Preventing Image Hotlinking

Hotlinking is when a site other than your own embeds an image that is actually hosted on your server by directly referencing the image's URL on your site. This can cause massive problems for you if images you host get hotlinked on a popular site; even a relatively low-traffic site can bring the server it's on to its knees if that happens. To prevent this, review the Preventing hotlinking article.
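
That article has the full details, but the general shape of the fix is a set of mod_rewrite rules in the site's .htaccess that refuse image requests whose referrer isn't your own site. A rough sketch, assuming the same "bob"/example.com setup as above (swap in your own domain):

  RewriteEngine on
  RewriteCond %{HTTP_REFERER} !^$
  RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
  RewriteRule \.(gif|jpe?g|png)$ - [F]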

Caching Your Site

As you've probably seen by now, PHP processes can take up a significant amount of memory. If your site is getting hammered with traffic, the fewer hits that have to spawn a PHP CGI process, the better. This can be accomplished by using the caching functionality of whatever PHP software you're running; if your software doesn't have this capability, you should probably look into finding some that does. Essentially, caching renders a PHP script once and saves the HTML output to a file. The next time the page is accessed, the server serves that static HTML file rather than processing the PHP script again. Serving a static file requires far less memory and is generally much faster than processing and serving a PHP script. So if you're having memory trouble on your PS, it's essential that you run software with caching functionality and that you enable it.
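
A crude way to check whether caching is actually kicking in is to time repeated requests to the same page from your local machine; once a page has been cached, response times should drop noticeably. A sketch using curl:

  curl -s -o /dev/null -w '%{time_total}s\n' http://www.example.com/
  curl -s -o /dev/null -w '%{time_total}s\n' http://www.example.com/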

Enabling FastCGI and XCache

Another big optimization you can make to reduce memory usage is turning on FastCGI and XCache for the domains running PHP-based sites. Without FastCGI enabled, an entire PHP binary process has to be started up for each PHP page that's viewed in order to parse and process it. This can add up dramatically, as pictured in the #Interpreting Top section above. Enabling FastCGI for all PHP switches your site over to using mod_fcgid (see Fcgid php). That means when a PHP script is hit by a browser, the PHP interpreter process starts up, processes the script, and then stays in memory waiting for another request. When a new request comes in, it gets sent to this already-running interpreter rather than spawning a new one. If you have a lot of sites or a lot of traffic, this can greatly reduce your overall memory usage.

Enabling this also allows you to enable XCache. XCache caches compiled PHP code in memory so scripts don't have to be recompiled on every request. If configured well, this can potentially increase the rate of page generation by up to 5x. For more information on configuring it once it's enabled, see the XCache wiki page; for general background, the introduction page of the official XCache wiki is a good place to start.

To actually enable these options, you need to go to the Domains -> Manage Domains area of your web panel and edit the hosting options for each domain. The options you need to check are pictured below (note that only the first option is available to start and each additional option becomes available as you activate the prior one).

[Image: Ps php optimization options.jpg (PHP hosting options to enable for each domain)]

Unfortunately, there isn't a way to globally enable these options for all your sites at once. There is an option on your PS configuration page to enable PHP cache (pictured below), but that will only enable XCache for sites using mod_php (which isn't the default for any sites and really isn't recommended). You'll likely just end up ignoring it and leaving it "inactive".

[Image: Ps xcache global option.jpg (global PHP cache option on the PS configuration page)]

If all domains are set to use FastCGI or CGI for their PHP mode, you can safely deactivate mod_php in the PS configuration to save a good deal of memory. All modern PHP applications will run fine exclusively on FastCGI.
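
Once FastCGI and XCache are enabled for a domain, you can verify that XCache is actually loaded for web requests by dropping a temporary phpinfo() page into the site and looking for an xcache section. A sketch, again assuming the "bob"/example.com setup (be sure to delete the file afterwards, since phpinfo output is sensitive):

  echo '<?php phpinfo();' > /home/bob/example.com/xcache-test.php
  curl -s http://www.example.com/xcache-test.php | grep -i xcache
  rm /home/bob/example.com/xcache-test.php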

WordPress-specific Optimizations

If you're running a lot of WordPress blogs on your PS, you should definitely take a look at the WordPress Optimization and Fine Tuning Your WordPress Install articles; the effort can make a huge impact. The skinny: run the most recent version of WordPress. Keep only essential plugins installed, keep them up-to-date, and completely remove unused plugins rather than just deactivating them. Make sure the theme you're using isn't generating errors in your error.log file (located in the same directory as the access.log), and take any steps needed to optimize it for better load time (see Optimizing Page Load Time). Finally, make sure you have the Super Cache plugin installed and configured correctly (and that the wp-cache plugin is uninstalled!).
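
While working through that list, it can help to watch the error log directly so you see theme or plugin errors as they happen (same hypothetical user and domain as before):

  tail -f /home/bob/logs/example.com/http/error.log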

Please keep in mind that running Super Cache alongside XCache can actually drive memory usage up. It's best to pick one or the other and use it solely for your caching needs.

Deactivating Web Servers

If you do not need any DreamHost-supplied web server running at all (Apache, lighttpd, nginx, etc.), you can disable it from the panel at Configure Server.

External Links