QS ranks the world's top universities using large opinion surveys. Its rankings are probably the most widely quoted, and the company lists the majority of Australian universities.
The QS rankings depend heavily on academic reputation – similar to the Times Higher Education rankings.
- Unlike the Shanghai and Webometrics rankings, QS relies on opinions.
- Many thousands of "experts" are asked to rate universities across the world.
- Each university is ranked by summing weighted scores across 5 components (details below).
How, other than through knowledge of research articles, conference presentations, historical prestige and so on, is an academic in, say, Belgium likely to be aware of a university in Australia?
QS Ranking Components
1. Academic reputation (40%)
Academic reputation is scored using a global survey. QS asks academics to rate universities in their field.
The value of this approach has been widely called into question because few academics are experts on university performance or are likely to spend much time monitoring it. Responses may be casual, uninformed, anecdotal or based on friendships and other links between universities.
2. Employer reputation (10%)
The employer reputation component is similar to academic reputation. QS asks employers to identify universities that produce the best graduates.
This employer reputation indicator borders on farcical, because employers have little reliable basis for identifying which universities are better.
Even in a controlled study (e.g. you hire 10 candidates from 3 different universities and compare "quality"), it would be extremely difficult to work out which university is better. The many thousands of employers surveyed don't have the luxury of this kind of sampling. So, while some employers may have relevant experience and insight, we believe the majority rely on anecdotes, guesswork and superficial impressions.
3. Student-to-faculty ratio (20%)
The ratio quantifies how many students are enrolled for every academic staff member. It is intended to measure teaching quality: a lower ratio suggests each student might receive more individual attention.
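The calculation itself is simple. This sketch uses hypothetical headline counts (the numbers are illustrative, not data for any real university):

```python
# Hypothetical illustration of a student-to-faculty ratio.
students = 30000  # enrolled students (hypothetical figure)
faculty = 2000    # academic staff, full-time equivalent (hypothetical figure)

ratio = students / faculty
print(f"{ratio:.0f} students per academic staff member")  # 15 students per academic staff member
```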
The student-to-faculty ratio is generally considered a poor measure of teaching quality. The ratios correlate poorly with national course survey results in the UK and Australia: there is no strong relationship between student-to-faculty ratios and course satisfaction rates among final-year students and recent graduates.
4. Citations per faculty (20%)
Citations per faculty measures research output. A citation occurs when one piece of research is referenced within another. More citations indicate more influential research.
It is great for the academic staff of a university to have many citations. However, it doesn't help students in any direct sense.
5. International faculty ratio (5%) and student ratio (5%)
The final ranking component measures how "international" a university's staff and students are.
The percentages of staff and students who are foreign are a partial indicator of university quality, but not a reliable one. For example, some of the most international universities achieve this by setting low fees for foreign students, while many prestigious universities accept relatively few international students at the undergraduate level.
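The five components above combine into an overall score via the stated weights. The sketch below applies those weights to hypothetical component scores (the scores and the dictionary key names are illustrative, not real QS data):

```python
# QS-style weighted overall score, using the weights quoted in this article.
# Component scores are hypothetical values on a 0-100 scale.

WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "student_to_faculty": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(scores: dict) -> float:
    """Sum each component score multiplied by its weight."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

example = {  # hypothetical university
    "academic_reputation": 90.0,
    "employer_reputation": 80.0,
    "student_to_faculty": 70.0,
    "citations_per_faculty": 60.0,
    "international_faculty": 50.0,
    "international_students": 40.0,
}
print(overall_score(example))  # 36 + 8 + 14 + 12 + 2.5 + 2 = 74.5
```

The heavy 40% weight on academic reputation means the survey-driven component dominates this sum, which is the crux of the criticism above.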
What the Rankings Show
The QS methodology means the final ordering reflects university prestige.
- Highly prestigious (well known and highly regarded) universities do well in opinion surveys – by definition.
- They also attract large amounts of funding to spend on research and staff.
QS ranks several Australian universities among the world's top 100. The overall order is quite different from rankings based on course ratings and graduate outcomes.
I think people should simply ignore this ranking; it has too many flaws. Even the citations-per-faculty indicator is flawed, since it favours smaller departments full of deadwood professors. Just look at the subject rankings and how ridiculous they are.