A quick overview of Symfony performance with proper configuration
What is the expected page load time of a Standard Edition Symfony application? How does it change in a production environment? How much time can be shaved off while not sacrificing whole components?
At foodpanda, we are prototyping the base for a new REST API, and the following tests are executed against a skeleton that is common to all requests: a secured Symfony route (with the help of FOSOAuthServerBundle) whose authentication provider makes a Redis query to verify an access token. The controller builds a JsonResponse by loading a result set from MySQL via DQL, because booting the ORM unfortunately adds a good few milliseconds and that cost should be part of the measurement. A few of the tricks are on the side of nginx and php-fpm, our web server setup; Apache won't be considered.
```php
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\JsonResponse;

class BenchmarkController extends Controller
{
    public function jsonAction()
    {
        return new JsonResponse($this->getData());
    }

    private function getData()
    {
        // Hydrate the result set through the ORM via DQL
        return $this->getDoctrine()->getManager()
            ->createQuery('SELECT o, u, i FROM FoodpandaApiBundle:Order o JOIN o.user u JOIN o.items i')
            ->getArrayResult();
    }
}
```
dev vs. prod
This might sound dumb, but don’t run the application with development settings. Let’s assume I’m a clueless developer, I just set up this application on my dev machine and I’m running a first benchmark.
The results are not pretty: a 395ms median request, 427ms on average. What's wrong? Xdebug. Now that I've learned my lesson and unloaded the extension, the results are a bit more comforting: 165ms, both median and average.
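Unloading Xdebug usually just means commenting out the line that loads it; the exact file path below is an assumption and varies by distribution:

```ini
; e.g. /etc/php5/conf.d/xdebug.ini — comment out to unload the extension
;zend_extension=xdebug.so
```

Restart php-fpm (or your FastCGI process) afterwards and confirm with `php -v` that Xdebug no longer appears.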
I then realize I've been benchmarking against the dev environment, which checks files for changes and updates the cache if necessary. The prod environment clocks in at 101ms, both median and average.
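Switching to the prod environment for a fair benchmark is a matter of clearing and warming its cache first, so no request pays the compilation cost. These are the standard Symfony 2-era console commands:

```shell
# Rebuild the prod cache before benchmarking
php app/console cache:clear --env=prod --no-debug
php app/console cache:warmup --env=prod
```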
Let's get closer to a production environment. Symfony relies heavily on Composer, which comes with a --no-dev option. The difference? Zero. This actually surprised me; I have, however, registered boosts of up to 10ms by generating the class map instead of relying on PSR name resolution.
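For completeness, these are the two Composer invocations being compared; both flags are standard Composer options:

```shell
# Install without dev dependencies (made no measurable difference here)
composer install --no-dev

# Generate a class map so autoloading skips PSR name resolution
# (this is where the ~10ms boost came from)
composer dump-autoload --optimize
```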
PHP and nginx
Leaving the realm of the application itself, there's a lot that can be done to get little boosts here and there. The first thing you'll find when looking for PHP performance tips is enabling OPcache. In PHP 5.5 it's bundled by default; in previous versions, APC or other extensions are available. Don't incorrectly assume that it's already configured just because you're running a default installation of PHP 5.5 or later.
In php.ini, set opcache.enable=1 and possibly opcache.validate_timestamps=0 (with timestamp validation disabled, you will have to restart PHP FastCGI after any changes to files).
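As a php.ini sketch — the two directives are the ones from the text; anything further you tune, such as memory limits or file counts, is deployment-specific:

```ini
; php.ini — enable OPcache
opcache.enable=1
; skip checking file mtimes on every request;
; requires a PHP FastCGI restart after each deploy
opcache.validate_timestamps=0
```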
Most package-manager builds of PHP are compiled with --enable-opcache, but sometimes the extension itself is not loaded. Check with php -v; you should see something along the lines of:
$ php -v
PHP 5.5.18 (cli) (built: Nov 3 2014 14:15:40)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2014, by Zend Technologies
The result is pretty impressive: 30ms for one request.
HHVM for the final stretch
OPcache looks cool, but what about HHVM, the new player on the field? Unfortunately, I cannot reproduce our results while writing this article, because HHVM is currently cryptically broken on Mac OS X. However, if you'll take my word for it, on comparable hardware running Ubuntu the same page loaded in somewhere between 13 and 15ms, including the two requests to storage.
I still feel a bit uneasy about running HHVM in production, but the latest stable version seems very promising. I'd love to hear from anyone who has experience running their applications on it under stress.
All of the numbers were taken from 500 sampled requests run on one thread with JMeter. Raising concurrency doesn't increase the times by much until you run out of either nginx worker processes or CPU cores; even at 20 concurrent requests, the times land somewhere in the 100–200ms range. Pushing concurrency further would also mean tuning the storage backends PHP connects to, which wasn't the goal of this demonstration.
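For reference, the nginx worker ceiling mentioned above is controlled by directives like these — an illustrative fragment, not our production configuration:

```nginx
# nginx.conf — one worker per core, so throughput is bounded by CPUs
worker_processes auto;

events {
    # concurrent connections each worker may handle
    worker_connections 1024;
}
```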
In the wild
We host on Amazon, and the first thing we'll notice after deployment is that everything is slower than on my laptop. Network latency is unfortunately the tax for scaling and hosting in the cloud. We also take the approach of many small servers, which simply don't have the specs of the computer on which I'm writing this.
I don't know whether the final results will be fast enough for us in the end, but either way, this small exercise in measuring the relative impact of basic tweaks anyone can make was a fun one.