
Why university rankings suck – A first-person perspective

Switzerland’s universities hold excellent positions in various international research and education rankings. Time to celebrate? Not really, thinks Richard Oberdieck, our new science blogger.

This week, the new QS World University Ranking appeared, and yet another round of «look how great my former/current/future university is» began on social media and elsewhere. Almost like clockwork, everybody seems to be anxiously waiting for the moment when some «objective» metric somewhere in the world tells them how much their education is worth. Because clearly, if I graduated from a Top 10 university, I am also a Top 10 person, right?


As a graduate of ETH Zurich (ranked 9th in the QS ranking) and a current PhD student at Imperial College London (ranked 8th), I should be bursting with pride about my position, my university, my degrees. That is, unless you ask yourself three simple questions, which will lead you to the conclusion that university rankings suck.

First, let us start with the premise of university rankings: that there is an objective way of quantifying the quality of a university. The first problem with this approach is: what does «quality of a university» even mean? What is the «currency» of a university? I would argue: knowledge. Its generation (research) and its development and understanding (teaching). Others might very well say: personal development. Still others might say: monetary profit. And so on. But let us stick with knowledge for a moment. The idea of a university ranking directly implies that there is a way to quantitatively describe knowledge. But how? By the number of research papers multiplied by the impact factor of the journal, plus the number of teaching assistants per student? Everybody who has ever read a scientific paper knows that good journals contain bad articles and vice versa. Just check out this article by David Colquhoun, where he shows how broken the entire publishing process is. The same goes for teaching assistants. At ETH Zurich, the best teaching assistant I ever had taught the most crowded class I ever attended (because we were allowed to switch TAs), with 35 other students. He was still much better than a demotivated, bored TA with only 3 students.

This brings me to my second point: the numbers. The rankings are based on numbers which can be easily manipulated. For instance, one criterion is, as mentioned above, the number of students per TA. Let me use the examples of ETH Zurich and Imperial College to show why this metric (or any metric, for that matter) is a terrible idea. At ETH, we had homework and TA sessions every week. As a result, there were normally between 20 and 30 students per TA, but the sessions were generally very productive (one hour, discussion of the homework, and so on), and the TAs were always available for questions. At Imperial, by contrast, we had TA sessions only every 3 weeks, but with just 6 students. Clearly, this looks great in a ranking (1 TA per 6 students beats 1 TA per 30 students), but I still think that seeing my TA once a week did much more for my knowledge development than seeing my TA once every three weeks. The metric does not account for this. And even if it did, there are infinite ways of gaming the numbers. As Winston Churchill famously did not say (Goebbels spread the rumour to make him look bad): «Do not believe a statistic you did not fake yourself.»
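The arithmetic behind this can be made explicit. Here is a toy calculation, with made-up but plausible numbers (a 12-week term, one-hour sessions, and the session frequencies described above are all assumptions for illustration), showing how the students-per-TA ratio can point the opposite way from actual contact time:

```python
# Toy numbers only: assume a 12-week term and 1-hour TA sessions.
TERM_WEEKS = 12
SESSION_HOURS = 1

def contact_hours(sessions_per_term: int, students_per_ta: int):
    """Return (total session hours per student, ranking-style ratio)."""
    return sessions_per_term * SESSION_HOURS, students_per_ta

# ETH-style setup: weekly sessions, ~30 students per TA.
eth_hours, eth_ratio = contact_hours(sessions_per_term=12, students_per_ta=30)
# Imperial-style setup: a session every 3 weeks, 6 students per TA.
imp_hours, imp_ratio = contact_hours(sessions_per_term=4, students_per_ta=6)

print(f"ETH:      {eth_ratio} students/TA, {eth_hours} contact hours per term")
print(f"Imperial: {imp_ratio} students/TA, {imp_hours} contact hours per term")
# The ratio favours Imperial (6 < 30), yet each ETH student sits in
# three times as many session hours (12 vs 4).
```

The point is not which setup is better, but that a single ratio hides the dimension (session frequency) that mattered most in practice.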

Lastly, let me talk about the impact of these rankings, and why universities are not companies. Within companies, the goal is pretty straightforward: make as much money as you can. The more money a company (legally) makes, the better a company it is. For a university, this is not so easy. First, virtually no university is profitable (in the short run). Universities do not produce a product; they produce knowledge and expertise. The impact of a good university can be seen, for example, in its students. But this impact becomes apparent only years, maybe decades, later. Only then can a John Nash say that his time at MIT or Princeton helped him develop his genius. The same holds for research. Most great ideas and publications did not meet with instant fame and success. Even the great Albert Einstein had to wait until Arthur Eddington confirmed his general theory of relativity before his run of fame and success began. Until then, he was merely a quirky guy from Switzerland with crazy ideas about space and time. But when people, and more importantly the universities themselves, start acting like companies and chasing short-term goals, the long-term achievements of universities fade and are replaced by questions such as «how many papers did you publish?», «did you get any grant money for the university?», «is the teaching good according to these metrics and standards?» Then the quality of universities will decrease and the focus will shift.

The quality of your education should become apparent in how good you are at what you do, not whether a computer program says so.


Richard Oberdieck completed his PhD in Chemical Engineering and now works on advanced modelling and optimization problems for DONG Energy/Ørsted, the world leader in offshore wind energy. As a former scientist and now an employee in industry, he is passionate about the bridge between academia and industry.

The posts on the Reatch blog reflect the personal opinions of their authors and do not necessarily correspond to those of Reatch or its members.
