University rankings are not particularly useful because they mostly measure outputs and hardly the inputs. And when they do measure inputs, having more of them counts as better. This means that rankings reflect size and resources, not how well resources are used or how much students improve from when they started their studies. It also implies that any university without a medical school or an engineering school starts with a disadvantage. Internationally, university rankings have become very important, to the point that, for example, France is now reversing the splitting of its large universities into field-specific institutions. The new monster universities, once again covering all fields, will rank much better thanks mostly to their sheer size.
To counteract all this, you need to measure the efficiency of universities. Thomas Bolli does this for 273 universities across the world by estimating a production possibilities frontier. Unfortunately, the sole measured input is full-time equivalents (FTE) of staff, while the outputs are FTE of undergraduate and graduate students, and citation counts. But it is a start. Universities in Switzerland and Israel appear to be very efficient (and indeed they are small and generate a good amount of research), while those in the UK seem particularly inefficient. That should fan some flames in the debate on university financing there.
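To give an idea of how such an efficiency frontier works, here is a minimal sketch of a standard constant-returns-to-scale DEA (data envelopment analysis) calculation with one input and several outputs, mirroring the setup described above. This is not Bolli's actual code or data; the universities and all numbers are made up for illustration, and DEA is only one common way to estimate such a frontier.

```python
# Hypothetical illustration of frontier efficiency (CCR-style DEA) with a
# single input (staff FTE) and multiple outputs (student FTEs, citations).
# All universities and figures below are invented for the example.
from scipy.optimize import linprog

# (staff_fte, undergrad_fte, grad_fte, citations) per hypothetical university
universities = {
    "A": (100, 1200, 300, 5000),
    "B": (200, 1800, 400, 6000),
    "C": (150, 1500, 500, 9000),
}

def dea_efficiency(name):
    """CCR efficiency: the best weighted sum of outputs per unit of input,
    with weights chosen so that no university scores above 1 under the
    same weights. A score of 1 means the university is on the frontier."""
    # Output-per-input ratios for every university
    ratios = {u: [y / x for y in ys] for u, (x, *ys) in universities.items()}
    # maximize sum(u_k * r_ik)  s.t.  sum(u_k * r_jk) <= 1 for all j, u >= 0
    c = [-r for r in ratios[name]]            # linprog minimizes, so negate
    A = [ratios[u] for u in universities]     # one constraint per university
    b = [1.0] * len(universities)
    res = linprog(c, A_ub=A, b_ub=b, method="highs")
    return -res.fun

for u in universities:
    print(u, round(dea_efficiency(u), 3))
```

In this made-up example, A and C sit on the frontier (score 1) while B is dominated: it uses twice A's staff for only 50% more undergraduates, so it scores 0.75. A real exercise like Bolli's would of course add more outputs and handle quality differences, which a raw citation count does not.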