
I'm setting up my first VPS and it seems to be working well. I installed Nginx, PHP-FPM (over a Unix socket), APC, Varnish and MySQL on an Ubuntu 12.04 server with OnApp, and everything works and is damn fast, at least on my end.

At the moment I have a VPS with 1 core (a Xeon X5660, if I recall correctly) at 1.2GHz and 768MB RAM, everything limited by OnApp. Running an ab test, I got this:

ab -c 10 -n 1000 http://198.136.50.39/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 198.136.50.39 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:
Server Hostname:        198.136.50.39
Server Port:            80

Document Path:          /
Document Length:        6482 bytes

Concurrency Level:      10
Time taken for tests:   41.695 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6952000 bytes
HTML transferred:       6482000 bytes
Requests per second:    23.98 [#/sec] (mean)
Time per request:       416.946 [ms] (mean)
Time per request:       41.695 [ms] (mean, across all concurrent requests)
Transfer rate:          162.83 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      167  203   57.3    182    614
Processing:   173  212   82.9    189   2015
Waiting:      169  206   59.8    185    726
Total:        345  415  126.5    373   2419

Percentage of the requests served within a certain time (ms)
  50%    373
  66%    388
  75%    410
  80%    430
  90%    504
  95%    708
  98%    866
  99%    931
 100%   2419 (longest request)

While the test ran I was watching the VPS's stats with htop; it never used more than 230MB of RAM in the entire test, and CPU stayed at 2~4% usage, which is pretty cool, I guess. But the requests per second seemed kind of low. What do you think? Does it look OK for the setup I have and I'm just being paranoid, or is it bad? With default settings I ran loadimpact.com (25 users, the default free test) and got a load time of about 130ms... after I started messing with the settings and ran the test again, it jumped to 250ms, so I guess I'm doing something wrong.

I started trying to optimize MySQL for it, using tutorials I found on the internet for low-end boxes plus https://tools.percona.com/. Percona gave me some really big numbers, so I did a mix of both.

I also tuned PHP-FPM and Nginx, reading their wikis and tutorials all over the internet. I will use this VPS for a WordPress website with about 5k daily visitors and 13~15k daily pageviews. W3 Total Cache is set up to do database and object caching with APC, and minify/page caching with disk enhanced... but before I migrate the site to this server and go live with it, I want to optimize everything and be sure it will be fast.

I also use MaxCDN (not active on the VPS at the moment) and will use CloudFlare as a DNS server. Can anyone help me optimize it, please?

My MySQL config currently looks like this:

[mysqld_safe]
open_files_limit = 8192

[mysqld]
skip_external_locking
skip_slave_start
bind-address = 127.0.0.1
key_buffer = 64M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
max_allowed_packet = 16M
max_connect_errors = 10
thread_stack = 192K
myisam-recover = BACKUP
max_connections = 400
table_cache = 1024
thread_cache_size = 286
interactive_timeout = 25
wait_timeout = 1000
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 32M
max_write_lock_count = 1
expire_logs_days = 10
max_binlog_size = 100M
innodb_flush_method = O_DIRECT
innodb_buffer_pool_size = 10M
skip_name_resolve
sql_mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
tmp_table_size = 16M
max_heap_table_size = 16M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer = 16M

My Nginx config looks like this:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    server_tokens off;

    open_file_cache max=1000 inactive=300s;
    open_file_cache_valid 360s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    client_body_buffer_size 8K;
    client_header_buffer_size 1k;
    client_max_body_size 2m;
    large_client_header_buffers 2 1k;
    client_body_timeout 10;
    client_header_timeout 10;
    send_timeout 10;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 9;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}

My site's Nginx config looks like this:

server {
    listen 8080;
    server_name www.ubuntubrsc.com ubuntubrsc.com;
    root /var/www;
    index index.php index.html index.htm;
    include /var/www/nginx.conf;
    error_log /var/log/nginx/blog.error_log;

    if ($host ~* ^[^.]+\.[^.]+$) {
        rewrite ^(.*)$ http://www.$host$1 permanent;
    }

    location / {
        try_files $uri $uri/ /index.php;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~* ^/wp-content/uploads/.*.php$ {
        deny all;
        access_log off;
        log_not_found off;
    }

    rewrite /wp-admin$ $scheme://$host$uri/ permanent;

    location ~ \.(css|js|htc)$ {
        root /var/www/ubuntu-online/;
        expires 31536000s;
        add_header Pragma "public";
        add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
    }

    location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
        root /var/www/ubuntu-online/;
        expires 3600s;
        add_header Pragma "public";
        add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
    }

    location ~ \.(gif|gz|gzip|ico|jpg|jpeg|jpe|swf)$ {
        root /var/www/ubuntu-online/;
        expires 31536000s;
        add_header Pragma "public";
        add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
    }

    error_page 404 = @wordpress;
    log_not_found off;

    location @wordpress {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_NAME /index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        if (-f $request_filename) {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }
}

The cache blocks for ubuntu-online are there because I don't know whether W3 Total Cache's settings apply to all folders inside /var/www/, so I added those blocks with "root /var/www/ubuntu-online/" just to be sure. Should I delete them?

In php.ini I changed a few things to increase security, like open_basedir and others, and also enabled a PHP internal cache by editing two lines, but I can't remember what they were.

Also, these are the APC settings:

[APC]
apc.enabled = 1
apc.cache_by_default = 1
apc.stat = 1
apc.shm_segments = 1
apc.shm_size = 64
apc.ttl = 7200

And finally, my php-fpm pool:

listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm.max_children = 9
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200
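For what it's worth, a common way to sanity-check pm.max_children is to divide the RAM you can spare for PHP by the resident size of one php-fpm child (check it with ps or htop). A minimal sketch, where the 270MB-for-PHP and 30MB-per-child figures are hypothetical examples, not numbers from this thread:

```python
def suggest_max_children(free_ram_mb, per_child_rss_mb):
    """Cap PHP-FPM pm.max_children so all children fit in free RAM.

    per_child_rss_mb is whatever ps/htop reports as a typical php-fpm
    child resident size on your box (hypothetical value below).
    """
    return max(1, free_ram_mb // per_child_rss_mb)

# Example: ~270MB left over for PHP on a 768MB box, ~30MB per child.
print(suggest_max_children(270, 30))  # → 9
```

That lands on 9 children for these example numbers, which happens to match the pool above; measure your own per-child footprint before trusting it.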

Also, how do I know Varnish is working with WordPress? I know I have to make some specific configs for Varnish and WP to play well together, and I did follow https://github.com/nicolargo/varnish-nginx-wordpress, but how do I know it's working?
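One quick check, assuming stock Varnish behavior: Varnish normally adds an X-Varnish header to responses it handles, and on a cache hit the Age header is usually greater than zero (and X-Varnish carries two transaction IDs). Run `curl -I` against the site twice and inspect the headers; a hedged sketch of that check, with made-up header values:

```python
def looks_served_by_varnish(headers):
    """Return (via_varnish, cache_hit) guessed from response headers.

    Heuristic only: relies on default Varnish behavior of adding
    X-Varnish to every response and two transaction IDs / Age > 0
    on cache hits.
    """
    h = {k.lower(): v for k, v in headers.items()}
    via = "x-varnish" in h
    try:
        age = int(h.get("age", "0"))
    except ValueError:
        age = 0
    hit = via and (age > 0 or len(h.get("x-varnish", "").split()) == 2)
    return via, hit

# Hypothetical header values for illustration:
print(looks_served_by_varnish({"X-Varnish": "314 159", "Age": "42"}))  # → (True, True)
print(looks_served_by_varnish({"Server": "nginx"}))                    # → (False, False)
```

If the second request to the same URL never shows Age > 0, WordPress cookies are probably preventing caching.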

Thanks in advance, everyone (:

  • ab is a great tool for simple tests, but I don't think that's your case. Your home page also pulls in CSS files, JavaScript, images, Ajax queries, etc., and your test doesn't handle cache headers... So ab does not correctly simulate users, and your benchmark is not relevant. Commented May 25, 2012 at 20:58
  • @Yohann So the test doesn't represent a real-world situation? What about the settings... are they OK? Commented May 26, 2012 at 0:55

1 Answer


You have a very good setup for your load. 20 requests per second to PHP scripts is enough for 200k+ daily pageviews.

Things to tune: innodb_buffer_pool_size = 10M is a pretty small value. All your active InnoDB data should fit into the buffer pool, while leaving enough room for the transaction logs.
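As a rough sizing sketch (the 25% headroom factor is an assumption for growth and internal structures, not a number from this thread): take your InnoDB data size, add headroom, and never go below a small floor.

```python
def suggest_buffer_pool_mb(data_mb, headroom=1.25, floor_mb=10):
    """Suggest an innodb_buffer_pool_size in MB.

    Assumption: the pool should hold all active InnoDB data plus
    some headroom; headroom=1.25 is an illustrative default.
    """
    return max(floor_mb, int(data_mb * headroom))

# An 8MB database fits comfortably in a 10MB pool:
print(suggest_buffer_pool_mb(8))  # → 10
```

So for a tiny database the 10M value is actually fine; the setting only becomes a problem as the data grows past the pool.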

worker_connections 1024; - you can run out of these pretty fast. I suggest checking nginx connections under real load using the stub_status module and tuning this during the first few days your service is running.

You may also increase PHP concurrency (pm.max_children = 9) to make use of the extra RAM and CPU. Do it if you see a full TCP backlog (ss -nl).

You can hit the max_children limit if you have a high request rate and/or your scripts are slow enough. Increasing the maximum worker threads/processes will increase load average if your scripts are CPU-bound.

Let me show you some basic, approximate math. Suppose your scripts take 100ms each and use 5ms of CPU time (the other 95ms is disk/network I/O wait), on average.

If you get more than 9*1000/100 = 90 requests per second, your TCP backlog will start growing and new requests will wait some time before processing starts.

Your scripts will consume 90*5/1000 = 45% of a single CPU core's time. Not much, is it?

If you increase max_children to 15, you can handle 150 requests per second without slowing down, but scripts may consume 75% of a single core's CPU time.

That's fine until the load gets too high. If you don't have enough CPU or RAM for the chosen concurrency, your load average will climb, meaning scripts slow down due to CPU congestion. A server with a load average around 2-4 times the CPU core count is usually still responsive enough: request processing gets somewhat slower, but you can handle a higher request rate. If you don't have enough RAM, the server will start swapping, hammering the disks and starving the CPU.

So: too few max_children and you won't handle a high request rate; too many and your server will hang under high load.
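The arithmetic above can be collapsed into two tiny functions (same assumed numbers as in the example: 100ms total latency, 5ms CPU time per request):

```python
def capacity_rps(max_children, avg_latency_ms):
    """Max sustainable requests/sec before the backlog grows:
    each worker completes 1000/latency requests per second."""
    return max_children * 1000.0 / avg_latency_ms

def cpu_share(rps, cpu_ms_per_request):
    """Fraction of one CPU core consumed at a given request rate."""
    return rps * cpu_ms_per_request / 1000.0

# 9 children, 100ms/request → 90 rps ceiling, 45% of one core.
print(capacity_rps(9, 100), cpu_share(90, 5))    # → 90.0 0.45
# 15 children → 150 rps ceiling, 75% of one core.
print(capacity_rps(15, 100), cpu_share(150, 5))  # → 150.0 0.75
```

Plugging in your own measured latency and CPU time per request gives the crossover point where raising max_children stops helping.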

  • Thanks for your answer @DukeLion (: I set the InnoDB pool size to 10MB because I saw somewhere that the pool size should be about 110% of your DB's size. Since my DB is about 8MB, I thought 10MB would be a good number. About pm.max_children... what do I gain if I raise the value? Commented May 27, 2012 at 20:57
  • Yeah, with a database that small it's enough. Commented May 29, 2012 at 6:51
