I’ve followed the TechEmpower benchmarks, and every now and then I check out benchmarks of various projects (usually PHP) to see what the relative state of things is. Inevitably, someone points out that “these aren’t testing anything ‘real world’ – they’re useless!”. Usually it’s from someone whose favorite framework has ‘lost’. I used to think along the same lines; namely, that “hello world” benchmarks don’t measure anything useful. I don’t hold quite the same position anymore, and I’ll explain why.
The purpose of a framework is to provide convenience, structure, guidance, and hopefully some ‘best practices’ for working with the language and problem set you’re involved with. The convenience and structure come in the form of helper libraries designed to work together in a certain way. As code, these have an execution cost. What a basic “hello world” benchmark measures is at least some of that overhead.
What those benchmark results are telling you is “this is about the fastest this framework’s request cycle can be invoked while doing essentially nothing”. If a request cycle to do ‘hello world’ is, say, 12ms on hardware X, it will *never* be any faster than 12ms. Every single request you put through that framework will be 12ms *or slower*. Adding in cache lookups, database calls, disk access, computation, etc. – those are things your application will need to do regardless of what supporting framework you’re building on (or none at all), but the baseline fastest performance framework X will ever achieve is 12ms.
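The arithmetic here can be sketched in a few lines – a toy model, using the illustrative 12ms figure from above, not a measurement of any real framework:

```python
# Toy model of a framework's baseline overhead. The 12 ms value is
# the hypothetical "hello world" figure from the text, not real data.
FRAMEWORK_OVERHEAD_MS = 12  # cost of an empty request cycle

def response_time_ms(app_work_ms):
    """Total response time: framework overhead plus whatever the
    application itself does (cache lookups, database calls, etc.)."""
    return FRAMEWORK_OVERHEAD_MS + app_work_ms

# Even an app that does nothing can't beat the baseline:
print(response_time_ms(0))   # 12
print(response_time_ms(45))  # 57 -- real work only adds to the floor
```

The point of the model is simply that the hello-world number is a floor: application work is additive on top of it, never subtractive.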
These benchmarks are largely about establishing that baseline expectation of performance. I’d say that they’re not always necessarily presented that way, but this is largely the fault of the readers. I used to get a lot more caught up in “but framework X is ‘better'” discussions, because I was still reading them as a qualitative judgement.
But why does a baseline matter? A standard response to slow frameworks is “they save developer time, and hardware is cheap, just get more hardware”. Well… it’s not always that simple. Unless you’re developing from day one to be scalable (abstracted data store instead of file system, centralized sessions vs. on disk, etc.), you’ll have some retooling to do. Arguably this is a cost you’ll have to pay anyway, but if you’re using a framework with a very low baseline, you may not hit that wall for some time. Secondly, ‘more hardware’ doesn’t really make anything go faster – it just allows you to handle more things at the same speed. More hardware will never make anything *faster*.
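The throughput-vs-latency distinction can be made concrete with a toy capacity model (all numbers are hypothetical, reusing the 12ms baseline from earlier):

```python
def capacity_rps(servers, workers_per_server, latency_ms):
    """Rough requests-per-second a pool can sustain, assuming each
    worker completes one request every latency_ms. Hypothetical model,
    ignoring queueing, contention, and network effects."""
    return servers * workers_per_server * (1000.0 / latency_ms)

one_box = capacity_rps(1, 10, 12)   # ~833 requests/sec
two_boxes = capacity_rps(2, 10, 12) # ~1666 requests/sec

# Doubling the hardware doubles throughput...
print(one_box, two_boxes)
# ...but each individual request still takes 12 ms. The latency a
# single user experiences is unchanged by adding servers.
```

In other words, scaling out moves the throughput ceiling, while the per-request floor set by the framework stays exactly where it was.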
“Yeah yeah yeah, but so what?” Google uses site speed in its ranking algorithm. What the magic formula is, no one outside Google will ever know for sure, but sites that are slower than your competitors’ *may* have a slight disadvantage. Additionally, as mobile usage grows, more systems are SOA/REST based – much of your traffic will be responding to smaller calls for blobs of data. Each request may not be huge, but they’ll need to respond quickly to give a good experience on mobile devices. 200ms response times will likely hurt you, even in the short term, as users just move to other apps, especially in the consumer space. Business app users might be a bit more forgiving if they have to use your system for business reasons, sort of like how legions of people were stuck using IE6 for one legacy HR app. They’ll use it, but they’ll know there are better experiences out there.
To repeat from above, throwing more hardware at the problem will never make things *faster*, so if you’ve got a slow site that needs to be measurably faster, you’ve possibly got some rearchitecting to do. Throw some caching in, and you may get somewhat better results, but at some point, major code modifications may be in order, and the framework that got you as far as it did may have to be abandoned for something more performant (hand-rolled libraries, a different language, whatever).
Of course, there’s always a maintainability aspect – I don’t recommend PHP devs throw everything away and recode their websites in C. While that might be the most performant option, it might take years, vs. some other framework or even a different language. I’ve incorporated Java web stacks into my tool belt, and have some projects in Java as well as some PHP ones. I benchmarked a simple ‘hello world’ in Laravel 4, ZF2, and Java just this morning. On the same hardware, the Java stack was about 3–4 times faster (yes, APC cache was on). Does this mean that all Java apps are 4 times faster than PHP apps? Of course not – it’s one data point about baselines, nothing more. This was on PHP 5.4.34 – I’m interested in trying out PHP 7 soon to see what the improvements will be overall.