Karl Marx is the most influential scholar ever, according to a discipline-corrected ranking system. Credit: Roger Viollet Collection/Getty

Is theoretical physicist Ed Witten more influential in his field than the biologist Solomon Snyder is among life scientists? And how do their records of scholarly impact measure up against those of past greats such as Karl Marx among historians and economists, or Sigmund Freud among psychologists?

Performance metrics based on measures such as citation rates are heavily biased by field, so most measurement experts shy away from interdisciplinary comparisons. The average biochemist, for example, will always outscore the average mathematician, because biochemistry attracts more citations.

But researchers at Indiana University Bloomington think that they have worked out the best way of correcting this disciplinary bias. And they are publishing their scores online, for the first time letting academics compare rankings across all fields.

Their provisional (and constantly updated) ranking of nearly 35,000 researchers relies on queries made through Google Scholar to normalize the popular metric known as the h-index (a scientist with an h-index of 20 has published at least 20 papers with at least 20 citations each, so the measure reflects both the quantity and the citation impact of a researcher's output). As of 5 November, the most influential scholar was Karl Marx, ranked as a historian, ahead of Sigmund Freud in psychology. Number three was Edward Witten, a physicist at the Institute for Advanced Study in Princeton, New Jersey. The ranking appears on the website Scholarometer, developed by Filippo Menczer, an informatician at Indiana University Bloomington, and his colleagues Jasleen Kaur and Filippo Radicchi. (A current top-ten list is shown above, put together by graduate student Mohsen JafariAsbagh.)
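The h-index itself is straightforward to compute from a list of per-paper citation counts. A minimal sketch in Python (illustrative only, not Scholarometer's own code):

```python
# Minimal sketch of the h-index: the largest h such that an author
# has at least h papers with at least h citations each.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```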

Universal metrics

Physicist Edward Witten comes out as the most influential scientist, according to Scholarometer. Credit: Randall Hagadorn, Institute for Advanced Study

“We think there is a hunger for this. Our colleagues use Google Scholar all the time, and yet it only shows the h-index,” says Menczer. “We are constantly asking ‘how do we evaluate people in a discipline we don’t understand?’”

In October, Menczer's team published a paper¹ arguing that the best statistical way to remove disciplinary bias is to divide a researcher’s h-index by the average h-index of their scholarly field.

Using this correction, Marx scores more than 22 times the average h-index of scholars in history (but only 11 times that of the average economist). Witten scores more than 13 times the average for physicists, and so on. The effect is to ensure that those in, say, the top 5% of their discipline also appear in the top 5% of all scholars.
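In code, the correction amounts to a single division. A sketch with invented numbers (the field h-indices below are hypothetical, chosen only so that the ratios match the figures quoted above):

```python
from statistics import mean

# Hypothetical h-indices for scholars in each field; real averages
# would come from Scholarometer's database of Google Scholar queries.
field_h = {
    "history": [2, 3, 4, 5, 6],       # mean = 4.0
    "physics": [10, 12, 15, 18, 20],  # mean = 15.0
}

def normalized_h(h: int, field: str) -> float:
    """A researcher's h-index divided by the field's average h-index."""
    return h / mean(field_h[field])

# On this common scale, scholars from different fields can be compared.
print(normalized_h(88, "history"))   # -> 22.0
print(normalized_h(195, "physics"))  # -> 13.0
```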

The idea is not new. Metrics experts have devised numerous methods to correct for such biases, often normalizing by age, journal or scholarly field. Normalized measures are available from commercial information firms such as Thomson Reuters.

First time for everything

But Scholarometer pushes boundaries in two ways. Most importantly, its normalized scores are freely accessible, unlike those of most other services. Thomson Reuters analyses are based on proprietary databases and cannot be made public. Another site, Publish or Perish, does return a variety of age- and field-normalized metrics from public queries to Google Scholar, but only for one individual at a time. The problem is that Google Scholar blocks automated programs that hit it with multiple queries, making it impossible to collate scores.

The Indiana team’s solution is an automated program that does not query Google Scholar itself, but instead scrapes the results of individual Google Scholar queries placed through a Scholarometer browser extension. Over several years, the team has built up a dynamic public database, with h-indices constantly revised as new Google Scholar queries come in. Menczer says that an age-corrected h-index, which would allow comparison of scholars at different career stages, may follow.
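The server-side flow might look something like the sketch below, assuming the extension simply posts the citation counts it has scraped. The function and field names here are invented for illustration, and the sketch reuses the h_index helper shown earlier:

```python
# Hypothetical server-side flow: the browser extension posts the
# citation counts scraped from a user's Google Scholar query, and the
# stored record for that scholar is refreshed with a new h-index.
def handle_query_result(db: dict, scholar: str, citations: list[int]) -> None:
    db[scholar] = {
        "h_index": h_index(citations),  # recomputed on every new query
        "n_papers": len(citations),
    }

db: dict = {}
handle_query_result(db, "E. Witten", [310, 250, 180, 120, 95])
print(db["E. Witten"]["h_index"])  # -> 5 (all five papers clear rank 5)
```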

The normalization problem is also much trickier than it seems — how do you decide what constitutes a field? A stem-cell researcher may think it unfair for their score to be corrected by the average of all biologists, for example. The Scholarometer team puts its faith in crowd-sourcing, placing researchers in multiple fields on the basis of the discipline tags that users attach to their Google Scholar queries. Marx, for example, is tagged as a historian, economist and philosopher, with his highest score in history.
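A sketch of how multi-field tagging could feed into the score, again with invented field averages (chosen so that Marx's 22-fold history score and 11-fold economics score are reproduced):

```python
# Hypothetical field averages; a scholar tagged with several fields is
# scored against each field's average, and the highest score is reported.
field_avg_h = {"history": 4.0, "economics": 8.0, "philosophy": 5.5}

def best_field_score(h: int, tags: list[str]) -> tuple[str, float]:
    scores = {tag: h / field_avg_h[tag] for tag in tags}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Marx carries three tags; his highest normalized score is in history.
print(best_field_score(88, ["history", "economics", "philosophy"]))
# -> ('history', 22.0)
```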

Scholarometer's success depends on the accuracy of Google Scholar, which is far from comprehensive or consistent. “A user-based tool like Scholarometer can hardly deliver consistent results for fair comparison and field-normalization,” says Werner Marx, who studies scholarly metrics at the Max Planck Institute for Solid State Research in Stuttgart, Germany. And the corrected h-index is only one measurement: experts recommend using a basket of metrics, together with peer review, to compare researchers.

“I tend not to put a whole lot of weight on these numbers and I’ve never heard of the h-index,” says James Ihle, a biochemist at St. Jude Children's Research Hospital in Memphis, Tennessee — who at one stage placed fourth overall in the Scholarometer ranking. If you, as an evaluator, have to rely solely on corrected h-indices to compare academics, says Ihle, “then you’re dumb, and you don’t understand what you are doing”.

But the point, says Menczer, is to publicly correct the bias of popular metrics. “It allows people to think beyond their discipline.”
