Medical specialties, including nephrology, evolve through the review and analysis of research studies, a process achieved only by exposing studies to comparison and peer review. This implies, among other things, searching for, reading, and selecting relevant articles among an ever-increasing number of publications, a task that is usually complex and laborious.
In addition, research studies may affect medical practice only if they are read or reviewed by other professionals. Of the nearly 2 million manuscripts published annually, up to 50% are read only by the authors, the journal editor, and the reviewers, and most of them are never cited.1
Alongside these challenges, the enormous growth of digital media now allows us to search for an article, save it, discuss it, share it, or cite it.2
The tools classically used to evaluate the impact of an article are based mainly on its citations and cannot measure its importance in the new digital scenario, where the interconnection and expansion of free, open-access medical education resources, better known by the English acronym FOAMed (Free Open Access Medical Education), have allowed the emergence of new article-level metrics, erroneously named alternative metrics (AMs) or Altmetrics.3
The dissemination of scientific content on social networks may increase, and better define, the impact of a work in the scientific community.4 Recently, a retrospective study showed that an active promotion strategy on Twitter increased the probability of being cited 11-fold (75% vs. 7%); in addition, tweeted articles were cited sooner.1 On the other hand, earlier randomized studies could not confirm this association, giving the impression that social network users possibly felt no need to access the full article and therefore learned of the article without accessing the main source online.5
Until recently, the quality and relevance of a published scientific article were measured by applying traditional measurement systems (metrics) that count the number of citations or views. The result of this analysis was used to assess the research capacity of an author (e.g., the H-index) or the relevance of a scientific journal, the impact factor (IF) being one of the most widely used. These indices are usually surrogates (substitute measures) for the quality of the work analyzed, and their use generates some controversy.6 In both cases, the value of these metrics is limited because they depend on counting citations in subsequent publications.7 This is a slow process that can take months or even years, leaving researchers waiting a long time before readers discover the importance or relevance of an article or clinical trial.
In addition, an article's citations are not always a reflection of its quality, since some articles are heavily cited as a result of a rebuttal process rather than because of their quality; this limitation also applies to the new metrics.
During the last decade, we have seen a dizzying increase in biomedical publications, which has challenged the existing classic systems for evaluating and selecting scientific work, such as peer review. Moreover, once an article or clinical trial is published, it may take months or even years to learn its importance or relevance.
The overwhelming spread of social networks and the growth in the number of open-access journals have meant that the reading and discussion of scientific papers now extend beyond narrow academic circles and reach the general public.8 Social networks offer an alternative route for the wider dissemination, exchange, and discussion of scientific articles. These substantial changes have forced the search for data sources that are alternative and complementary to the traditional metrics (TM), as set out in Altmetrics: A Manifesto (2010).9 In this sense, the term article-level metrics (ALMs) has been proposed as a tool for evaluating the impact of a scientific study in a different way.10
Given their rapid spread, it is easy to see how Altmetrics or AMs have become the terms mistakenly used to refer to what are really article-level metrics.
AMs measure, in real time, the impact that an individual article has on various domains based on activity rates.11,12 They collect public data through source code and automated algorithms from different platforms and web pages (blogs, journal clubs) that allow users to access the article (PubMed, Figshare), save it (Mendeley, CiteULike), discuss and share it (Twitter, Facebook), recommend it (Faculty Opinions), or cite it (Crossref, Scopus).
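As an illustration of this kind of data collection, the sketch below queries the free public endpoint documented by Altmetric (https://api.altmetric.com/v1/doi/<doi>) for the per-source counts of a single article. The endpoint, the field names, and the example DOI are assumptions based on the provider's public documentation and may change over time; this is a minimal sketch, not a definitive implementation.

```python
# Minimal sketch: pull per-source attention counts for one article, by
# DOI, from Altmetric's free public API. Field names are assumptions
# based on the public documentation and may change.
import json
import urllib.request

def fetch_altmetric_counts(doi: str) -> dict:
    """Return a few per-source counts for one article, by DOI."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    # Counts absent for a given article simply default to 0.
    return {
        "score": data.get("score", 0),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "facebook": data.get("cited_by_fbwalls_count", 0),
        "news": data.get("cited_by_msm_count", 0),
        "blogs": data.get("cited_by_feeds_count", 0),
    }

# Example call with a placeholder DOI (replace with a real one):
print(fetch_altmetric_counts("10.1038/nature12373"))
```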
The numbers of views, citations, likes, and retweets make it possible to calculate the academic and social impact of the article, the detailed composition of that impact, and, ultimately, the research activity of an author13,14 (Fig. 1).
Fig. 1. The Altmetric donut: an example of article-level metrics that counts the number of citations and views ("traditional metrics") as well as the number of shares via social networks or blogs. The size of each colour in the donut changes in proportion to the interactions received from each source.
Source: https://www.altmetric.com/about-our-data/the-donut-and-score/.
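Conceptually, the composite score at the centre of the donut is a weighted sum of these per-source counts. The following sketch illustrates the idea with invented weights; the actual weighting used by any given provider is its own choice and will differ.

```python
# Illustrative only: a weighted sum over per-source mention counts,
# the basic idea behind composite attention scores such as the donut.
# These weights are invented for the example, not a provider's real ones.
ILLUSTRATIVE_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "tweets": 1.0,
    "facebook": 0.25,
}

def composite_score(counts: dict) -> float:
    """Weighted sum of mentions across sources."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(source, 0.0) * n
               for source, n in counts.items())

# An article with 2 news stories, 1 blog post and 40 tweets:
print(composite_score({"news": 2, "blogs": 1, "tweets": 40}))  # 61.0
```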
Several companies have developed programs and offer their services to different scientific journals; among the most important are Altmetrics.com, Plum Analytics, and Impact Story. These are counting tools able to search the Internet for data with which to assess the social impact of academic research. Their results are presented in clever visual designs attached to scientific articles or publications and provide immediate feedback, allowing the reader to see at a glance which sources are accessing the article. It should be noted that Altmetrics.com has based its business primarily on providing metrics and indicators to academic publishers. Likewise, PlumX has found a way to develop its activity in institutions, offering a “dashboard” for tracking their social impact. This commercial practice inserted these scores into the academic publishing environment long before they were proven to be reliable and meaningful research indicators, causing considerable misunderstanding about their meaning and importance.
Another opportunity offered by the analysis of articles through alternative metrics comes from platforms such as Semantic Scholar (Semantic Scholar | About Us).15 It is a free academic search engine that uses machine learning (artificial intelligence) to analyze publications, including figures, tables, semantic associations, and the metrics of scientific articles, in order to rank the results that its algorithm considers most relevant.
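By way of example, Semantic Scholar exposes a public Graph API that returns, for each paper, metadata such as its citation count. The sketch below searches it for a topic and prints the results; the endpoint path, parameters, and field names are assumptions based on the public documentation at the time of writing and may change.

```python
# Hedged sketch: search Semantic Scholar's public Graph API for papers
# on a topic and print each paper's citation count and title. Endpoint
# and field names follow the public documentation and may change.
import json
import urllib.parse
import urllib.request

def search_papers(query: str, limit: int = 5) -> list:
    """Return up to `limit` papers matching `query`."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,citationCount",
    })
    url = f"https://api.semanticscholar.org/graph/v1/paper/search?{params}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response).get("data", [])

for paper in search_papers("altmetrics nephrology"):
    print(paper.get("citationCount", 0), "-", paper.get("title"))
```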
A further application is the possibility of analyzing which fields interest the general public or particular scientific groups, making it possible to guide editorial lines in response to that demand.
These novel tools have been increasingly adopted by institutions that want to help their researchers strengthen funding applications and find interesting stories to tell about their work.
Most sponsors, whether public or private, provide funds based on the quality of an idea and the applicant's track record, of which the academic résumé is often an important part. AMs can help review committees analyze the scope and influence that a candidate's academic work would have both inside and outside the academic sphere, perhaps reducing reliance on the assumption that research published in the most “prestigious” journals necessarily has a greater impact.16
Is it right to reward a researcher who publishes many non-original articles on the same topic, albeit in international journals, or is it better to reward one with a few original, curious articles offering novel perspectives, published in regional journals?17
AMs can be particularly beneficial for emerging researchers, especially those who may not have had the opportunity to accumulate a sufficient number of publications to be competitive on traditional indicators, or those researchers whose fields of interest limit their ability to publish in high impact factor journals.16
Sponsors and corporate organizations themselves could consult the results of their programs and adjust their research strategies accordingly.18
Finally, these tools could help researchers not only track the results of their research, but also plan ahead and actively promote their work to funding sources so that it achieves the greatest possible impact.
To this end, authors should develop their own strategies to promote and disseminate their work as widely as possible, assessing which journal to publish in, whether it has an AM program, and whether it is linked to the main social media. In this regard, researchers should ask themselves some of the following questions: Does the journal have a presence on Twitter or Facebook, or a blog? Does it have any other presence in free-access media or social networks, such as LinkedIn, Wikipedia, or YouTube?19
Dissemination of one's own work is of vital importance in order to obtain research funds. Could the story of Prof. Mojica, discoverer of the CRISPR-Cas9 technique and ignored for years by the public funding system, have been different if he had known how to communicate his discovery at the social level?20
One of the main doubts about these new tools is whether highly cited articles correlate with their AM scores.
Evidence from systematic reviews on this topic appears to be scant; in fact, there are only 3 to date. The first focused on medical research outcomes and reported significant associations between Altmetrics and traditional citations, without linking the impact measures.21 The second aimed to evaluate methodological quality in cutaneous psoriasis research, determining that a journal's IF could predict the number of tweets, while the year of publication and the number of Mendeley readers predicted the number of citations in Google Scholar.7 However, the authors concluded that there does not seem to be a connection between scientific quality, social network activity, and the use of the article.22
In a more recent systematic review looking at the association between Altmetric scores and bibliometrics, the results were mixed. No significant correlations were reported in 12 of 44 studies (27%), with weak or moderate correlations in 30% and positive associations in the remaining 43%. In particular, there was a positive association between Mendeley readers (reads and downloads) and subsequent citations.23
These data are similar to the findings of one of the impact studies, which found significant correlations between the number of downloads of an article and the subsequent number of citations (r = 0.52; P < .001), a correlation that was even stronger in the intervention group (r = 0.67; P < .001).24
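For readers unfamiliar with the statistic being reported, the sketch below computes a Pearson correlation of the same form, r between per-article downloads and later citations, using entirely synthetic numbers invented for the example.

```python
# Worked illustration with synthetic data (not from the cited studies):
# Pearson's r between download counts and subsequent citation counts.
from scipy.stats import pearsonr

downloads = [120, 340, 95, 410, 260, 180, 520, 75]  # synthetic counts
citations = [4, 11, 2, 15, 8, 5, 18, 3]             # synthetic counts

r, p_value = pearsonr(downloads, citations)
print(f"r = {r:.2f}; P = {p_value:.4f}")
```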
However, in another 2 studies, no significant associations were found between TMs and social networks,25 or between the Altmetric score, the journal's IF, reads on ResearchGate (a repository of scientific articles), and the number of article downloads.26
Although the literature contains studies with discordant results, it is expected that improvement in the correlation of these algorithms will be only a matter of time.27,28
Comparisons between TMs and AMs are controversial, as it is often difficult to assess the credibility of those making the criticisms and the validity of their comments. AMs attempt to address these limitations by examining and weighting the authors' contributions in each citation; but, obviously, the degree of dissemination on social networks does not replace the critical capacity of readers.29
Another limitation in the use of AMs for scientific articles is the lack of transparency about the source code and APIs (application programming interfaces: code that allows two software programs to communicate with each other) used to track citations in the different sources. Furthermore, it is difficult to create a replicable system of algorithms and source code, because the speed with which web platforms evolve is astounding.
The sources from which the data on a scientific article are obtained are highly heterogeneous and hardly comparable (e.g., is a “like” on Facebook the same as a citation in a blog?). Finally, AMs are susceptible to manipulation: a good marketing campaign can inflate the impact of a publication, even applying Search Engine Optimization (SEO) techniques to improve its positioning in search engines.30
In an attempt to solve these problems, a series of initiatives have been developed, such as the “Metric Tide” report in the United Kingdom, the “Metrics Toolkit”, and the “Expert Group on Indicators” created by the European Commission, with the purpose of providing evidence-based information on how each metric is calculated and where and how it should (and should not) be applied.31–33
The exponential increase in scientific publications and the multiplication of tools for disseminating their content generate uncertainty about how to measure the true scientific impact of manuscripts, since measurement still rests on traditional systems that have certain limitations in the current context. Article-level metrics based on presence and prevalence in a digital and social context can complement “traditional” bibliometric indicators (well established, known, and accepted by most researchers), and they are here to stay. Despite their imperfections, as in many other fields revolutionized by new technologies, they will offer opportunities to those willing to take on the new challenge, and they will help guide us toward a better understanding of what our colleagues and patients are reading and listening to. In addition, analysis of the true scientific and social impact of a research topic would help to establish adequate strategies for responding optimally to the needs of society.
Please cite this article as: Montomoli M, Taco Sanchez O, D’Marco L, Gorriz Teruel JL. Impacto de un artículo en la era social: ¿es lo mismo tuitear que citar? Nefrologia. 2022;42:125–129.