At the Academy of Management conference in Chicago last August — one of the world’s largest and most important annual gatherings of business school academics — one theme was prominent in numerous presentations: balancing rigour and relevance in research.
Among the hundreds of projects discussed in the space of a week, a number of impressive studies stood out. But many others appeared more than a little esoteric and theoretical. Research with practical applications — let alone a focus on addressing the most important issues facing society, such as climate change, poverty and inequality — was less evident.
That points to a broader debate about the role of business schools, their relationship to the world beyond universities, and the nature of academic governance and incentives around research.
This FT research insights report is an attempt to explore that discussion. It looks at potential new ways to measure, showcase and stimulate greater focus on research which — while still rigorous — is also focused both on relevance to the most pressing problems facing the planet, and has resonance with the world of practice.
To some academics, any attempt to hold them accountable through measurement of their outputs — let alone to influence the direction of their research — is a threat to their independence in principle and doomed to fail as a way to drive better outcomes in practice.
But, as George Feiger, a veteran business school dean who also worked as a management consultant and banker, puts it: “The real issue is a truly dramatic lopsidedness in focus in business schools, where academics measure their success — and their institutions evaluate them — almost entirely according to publications in a proliferating array of journals that nobody reads.”
The FT, like other organisations seeking to assess academic research, has long used as its yardstick “high impact” peer-reviewed academic articles published in leading journals — even though the approach has limitations. Publication in journals highly regarded by academics is a reasonable proxy for recent articles judged valuable by other academic experts: the peer reviewers and editorial advisory boards of the relevant journals.
The “FT50” is a list of journals selected in consultation with business schools. Other influential lists include those of the University of Texas at Dallas and the Chartered Association of Business Schools’ Academic Journal Guide.
These metrics are closely watched by those seeking ways to assess academic quality. They include librarians considering which journals to stock, and external funders and scholars themselves when considering recruitment and promotion — benchmarking against other institutions and reviewing draft papers and grant applications.
But the “impact factor” of the journal is an average measure calculated from the mean impact of past articles, rather than a guarantee that each new paper it publishes is of the same significance. As academic signatories of the Declaration on Research Assessment have long argued, it is deeply reductive.
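The averaging problem is easy to see in the standard calculation. In its conventional two-year form, a journal’s impact factor is simply the citations received in a given year to items published in the previous two years, divided by the number of citable items in those years — a mean that says nothing about the spread of individual papers. A minimal sketch:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor: mean citations per recent article.

    Being an average, it is no guarantee of the significance of any
    single paper the journal publishes.
    """
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 600 citations this year to 200 articles published over the
# previous two years gives an impact factor of 3.0
```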
Inertia within existing academic fields, the power of personal networks and the views of established scholars and their work may lead to rejection of papers submitted by scholars who challenge “conventional wisdom”. Original research may be dismissed ahead of a paradigm shift.
Any specialist list of high-impact journals may exclude articles and publications in emerging fields or interdisciplinary work that has traditionally sat beyond the core domains of business schools, such as artificial intelligence, sustainability or neuroscience. One academic at Chicago’s Booth School of Business declares: “We don’t look at the metrics, we read the papers!” And those with expertise are best able to make such assessments of their peers’ work. But metrics will inevitably continue to have a role for those under time pressure or without the same level of insight.
The calculation of journal impact is usually based on citations — direct references to an article in other publications. Yet it typically takes several years for a representative volume to emerge. That limits the value of these figures in a regularly updated ranking like the FT’s, which seeks to capture quality recent academic output.
In addition, the level of citations varies between different academic disciplines, even within business. Other problems include “gaming” through self-citation by authors in their own subsequent papers and by others seeking to curry favour with them. Some journals seek to inflate their impact by encouraging authors to cite work the journal has previously published.
To get round these problems, this report aims to explore the impact of individual academics’ recent work (over the past three years) and, by extension, their business schools. It focuses on three characteristics that add nuance to traditional citations: rigour, resonance and relevance.
Rigour is a measure of quality. Academic research should be grounded in credible work that draws on past scholarship, for which peer review remains the gold standard. So the analysis in this ranking takes as a benchmark the continued necessity for research to have been published in the FT50 list of top journals.
Resonance considers how far academics’ work has been disseminated more broadly. Of all academic departments, business schools especially ought to engage with practice, disseminating the best ideas and engaging with policymakers and decision-takers in the public, private and non-profit worlds. We explore proxies for how far scholars’ work is read or acted upon by those beyond universities.
Relevance expresses how far research aligns with the most pressing needs of society, at a time when so many concerns — from climate change to poverty and inequality — are affecting individuals, organisations and political systems across the planet.
This approach is not intended to undermine academic independence or to dismiss the importance of theoretical work distant from any practical application. Nor can it fully demonstrate the ultimate “impact” of research on society.
Many ideas once in vogue have been shown to be flawed and even counter-productive. Valuable concepts that might first have been developed in academia may be significantly changed before adoption. Implementation is often uncertain and slow, and practitioners may be reluctant to credit innovations to their original authors.
Tracking, measuring and comparing ultimate “real-world” impact seems largely unattainable. Using intermediate measures of resonance offers more promise. This ranking seeks to provide a starting point for discussion. We take advantage of the explosion of online data, analytics including large language models, and new organisations seeking to track academic impact — which demonstrate the demand for new forms of assessment of scholarship at scale.
In considering rigour, we draw on OpenAlex, an open access tool that permits analysis of academic outputs through multiple metrics. We use its “contextualised citations” which are “field-weighted” to normalise the different levels of citations in different academic fields.
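Field weighting of this kind can be sketched, under simplifying assumptions, as the ratio of a paper’s citations to the average for papers in the same field and year (OpenAlex’s actual methodology differs in detail):

```python
def field_weighted_citations(paper_citations: int,
                             field_mean_citations: float) -> float:
    """Normalise raw citations against the field's mean, so that a score
    of 1.0 means 'cited exactly as often as the average paper in this
    field' — making low-citation and high-citation disciplines comparable.
    """
    return paper_citations / field_mean_citations

# A finance paper with 30 citations in a field averaging 10 scores 3.0,
# on a par with a marketing paper with 9 citations in a field averaging 3.
```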
We draw on the academic assessment company Scite, deploying its “positive citations” approach to analyse the context in which scholarship is cited. It judges how far references to past research are both supportive and central to the argument of a new academic paper.
And on relevance, we look at how far the content of academic articles aligns with the United Nations Sustainable Development Goals (SDGs), drawn up by governments and civil society groups to prioritise societal concerns to be addressed by 2030.
For our main ranking, we use OpenAlex analysis, drawing on key SDG terms identified by the Aurora Universities. Elsewhere in this report, we compare that with alternative frameworks developed by Clarivate, a commercial provider of academic data, and two academic groups: Rotterdam School of Management and St Joseph’s University.
None of these approaches is perfect. The algorithms periodically throw up papers that appear to have little direct alignment to the SDGs and the weightings they assign are debatable. But they provide a tentative indicator in aggregate of how far business school academics are focusing their scholarship on important objectives.
In analysing resonance, we use SSRN, a platform on which academics make their research accessible for free by uploading draft papers (most subsequently published, albeit sometimes in modified form). To mitigate the risks of gaming by bots or individuals artificially inflating the scores, we filter the data to consider only downloads by users identified as working in governments or companies.
We use Overton, which tracks academic papers cited in government consultation documents and think-tank reports. That provides third-party recognition of valuable insights from academia that may influence policy.
In another article, we use Lens.org, which tracks citations of academic work in patents, to see how far companies consider academic research powerful enough to justify citing it in their intellectual property protection filings.
Ironically, only a dozen business school projects gained such recognition — too small a sample from which to construct a meaningful ranking. The scholars cited were rarely even aware that their work had been referenced, let alone had they worked directly with the companies that made the filings.
Similarly, few business school scholars rely on external grants for their research compared with those in other disciplines. They can lean on internal funds often generated from high tuition fees, which may give them easier access to resources but also reduces their external accountability. We explore the work of those who do declare external funders.
But the greatest form of resonance in disseminating business school research for impact should be through students, most of whom will leave academia to work in business or government. We therefore explore ways to assess academics’ production of teaching materials.
One source is teaching cases, which are not used equally by every teacher or business school but remain a foundational resource for most. We worked with the Case Centre, one of the “big three” publishers and distributors of cases around the world, to track which authors and business schools produced the most popular cases — a proxy for quality.
Another benchmark comes from Open Syllabus, which scrapes from the web the details of university courses including reading lists, signalling how far business school authors’ textbooks and journal articles are assigned by teachers elsewhere.
We share the individual rankings of business schools across all these different data points, as well as providing a tentative weighting to combine them into an overall aggregate score. Elsewhere in this report, we explore some of the individual articles that the different approaches surface.
Readers may disagree over the importance of the individual papers highlighted, the metrics, or the way they are combined. At the same time, they provide a starting point for a wider debate on measuring academic rigour, resonance and relevance in business schools. They raise wider questions on how academic authors, business schools and publishers should classify and share their outputs with consistent data. That will assist those seeking to conduct evaluations, and may also help society at large.
Methodology
This research ranking is a one-off analysis separate from the research calculations in the FT’s annual business school rankings, which credit academics for papers published in FT50 journals in the past three years.
To track academic rigour, it considers articles published in FT50 journals and analyses their wider impact. It uses OpenAlex to identify how far they are cited in other academic journals over the past three years, weighted relative to the discipline of the article. It uses Scite’s analysis to assess the share of those citations that are positive.
To explore resonance beyond academia, it uses SSRN’s tracking of the number of downloads made by non-academics in government and companies of pre-prints of articles available for free in the past three years. It also uses analysis by Overton to track citations of articles in government consultation documents and other reports for policymakers.
To assess the influence of academic research on learning, it uses the impact index produced by the Case Centre to identify the most popular recent teaching cases for business. It also uses the presence of business school authors’ textbooks and articles on reading lists in universities around the world, tracked by Open Syllabus.
Each of these six measures has been weighted at 14 per cent. A further 10 per cent is allocated to OpenAlex’s classification of the extent to which the content of FT50 articles aligns with the UN’s Sustainable Development Goals, as a proxy for the societal importance of academic research topics.
It allocates academics’ work to the business schools to which they are affiliated. A final metric at 6 per cent ranks schools based on the number of FT50 articles per academic faculty member.
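The stated weights — six measures at 14 per cent, SDG alignment at 10 per cent and articles per faculty member at 6 per cent — sum to 100 per cent. A minimal sketch of how such an aggregate score might be combined, assuming each metric has already been normalised to a 0–100 scale (the metric names here are illustrative labels, not the FT’s own field names):

```python
# Weights as stated in the methodology; the six 14% measures cover
# citations, positive citations, SSRN downloads, policy citations,
# teaching cases and syllabus presence.
WEIGHTS = {
    "field_weighted_citations": 0.14,
    "positive_citations": 0.14,
    "ssrn_downloads": 0.14,
    "policy_citations": 0.14,
    "teaching_cases": 0.14,
    "syllabus_presence": 0.14,
    "sdg_alignment": 0.10,
    "articles_per_faculty": 0.06,
}

def aggregate_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-metric scores (assumed normalised to 0-100)."""
    return sum(WEIGHTS[metric] * scores.get(metric, 0.0)
               for metric in WEIGHTS)
```

A school scoring 100 on every metric would score 100 overall, since the weights sum to one; missing metrics here simply score zero, one of several defensible choices.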
Research for this table also tracked citations of academic papers in patent applications and the number of external grants authors received. These metrics were excluded because the numbers identified for the former were too limited, and there is no data on the extent of funding. The data is shared in articles in the associated magazine.