Collect and analyse your Altmetric data

17 11 2020

Last week I reported that I had finally delved into the world of R Shiny to create an app that calculates relative citation-based ranks for researchers.

I’m almost embarrassed to say that Shiny was so addictive that I ended up making another app.

This new app takes any list of user-supplied digital object identifiers (DOIs) and fetches their Altmetric data for you.

Why might you be interested in a paper’s Altmetric data? Citations are only one measure of an article’s impact on the research community, whereas Altmetrics tend to indicate the penetration of the article’s findings to a much broader audience.

Altmetric is probably the leading way to gauge the ‘impact’ (attention) an article has commanded across all online sources, including news articles, tweets, Facebook entries, blogs, Wikipedia mentions and others.

And for those of us interested in influencing policy with our work, Altmetrics also collate citations arising from policy documents.
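The app itself is built in R Shiny, but the core idea of fetching Altmetric data per DOI can be sketched in a few lines against Altmetric's free public endpoint (`api.altmetric.com/v1/doi/<doi>`). This is only an illustrative sketch, not the app's actual code, and the JSON field names used below (`score`, `cited_by_msm_count`, `cited_by_tweeters_count`, `cited_by_policies_count`) are assumptions about the response format:

```python
# Illustrative sketch only: query the public Altmetric API for one DOI.
# The endpoint shape and the JSON field names are assumptions, not
# taken from the app described in the post.
import json
import urllib.error
import urllib.request

BASE_URL = "https://api.altmetric.com/v1/doi/"

def altmetric_url(doi):
    """Build the request URL for a single DOI."""
    return BASE_URL + doi.strip()

def summarise(record):
    """Pull a few attention fields out of an Altmetric JSON record."""
    return {
        "doi": record.get("doi"),
        "score": record.get("score"),  # overall attention score
        "news": record.get("cited_by_msm_count", 0),
        "tweets": record.get("cited_by_tweeters_count", 0),
        "policy": record.get("cited_by_policies_count", 0),
    }

def fetch(doi):
    """Fetch and summarise one DOI; returns None if it isn't tracked."""
    try:
        with urllib.request.urlopen(altmetric_url(doi), timeout=10) as resp:
            return summarise(json.load(resp))
    except urllib.error.HTTPError:
        return None  # Altmetric returns 404 for untracked DOIs
```

Looping `fetch` over a list of DOIs (with a polite delay between requests) gives a table of attention metrics, including the policy-document citations mentioned above.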





The ε-index app: a fairer way to rank researchers with citation data

9 11 2020

Back in April I blogged about an idea I had to provide a more discipline-, gender-, and career stage-balanced way of ranking researchers using citation data.

Most of you are of course aware of the ubiquitous h-index, and its experience-corrected variant, the m-quotient (h-index ÷ years publishing), but I expect that you haven’t heard of the battery of other citation-based indices on offer that attempt to correct various flaws in the h-index. While many of them are major improvements, almost no one uses them.
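For readers unfamiliar with the two indices named above, here is a minimal sketch of how they are computed. The citation counts in the usage note are made-up illustrative numbers, and this is a generic implementation of the standard definitions, not code from the ε-index paper:

```python
# Minimal sketch of the h-index and its career-length-corrected
# variant, the m-quotient (h-index divided by years publishing).

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_quotient(citations, years_publishing):
    """h-index corrected for career length."""
    return h_index(citations) / years_publishing
```

For example, a researcher with papers cited 10, 8, 5, 4, and 3 times has an h-index of 4; after 8 years of publishing, their m-quotient would be 0.5.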

Why aren’t they used? Most likely because they aren’t easy to calculate, or because they require trawling through open-access and/or subscription-based databases to gather the information needed to calculate them.

Hence, the h-index still rules, despite its many flaws, like under-emphasising a researcher’s entire body of work, gender biases, and weighting towards people who have been at it longer. The h-index is also provided free of charge by Google Scholar, so it’s the easiest metric to default to.

So, how does one correct for at least some of these biases while still being able to calculate an index quickly? I think we have the answer.

Since that blog post back in April, a team of seven scientists and I, spanning eight different science disciplines (archaeology, chemistry, ecology, evolution & development, geology, microbiology, ophthalmology, and palaeontology), refined the technique I reported then, and we have submitted a paper describing how the metric we call the ‘ε-index’ (epsilon index) performs.







