**3. Problems associated with collecting and interpreting altmetrics**

A semantic analysis of contributions in social media is largely lacking, which is a major reason why evaluating altmetrics counts is so difficult. References are mostly counted on the basis of identifiers such as the DOI; however, which references should be evaluated as positive and which as negative cannot be determined, so that a "performance paradox" develops [18]. This paradox also exists in a similar form in classical bibliometrics and must be considered a problem inherent in any quantitative metric in use [19].
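The identifier-based counting described above can be sketched as follows. This is a purely illustrative example: the mention records, field names, and normalization rules are assumptions for the sketch, not Altmetric's actual pipeline. Note that the text of each mention never enters the count, so a positive and a negative reference contribute identically, which is exactly the blind spot behind the "performance paradox":

```python
from collections import Counter

def normalize_doi(raw: str) -> str:
    """Reduce common DOI variants to one canonical form (illustrative rules only)."""
    doi = raw.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

# Hypothetical mentions harvested from social media. The 'text' field is
# ignored by identifier-based counting, so praise and criticism add the
# same +1 to the score.
mentions = [
    {"doi": "10.1000/xyz123", "text": "Great study, well designed!"},
    {"doi": "https://doi.org/10.1000/XYZ123", "text": "Serious methodological flaws here."},
    {"doi": "doi:10.1000/abc999", "text": "Interesting read."},
]

counts = Counter(normalize_doi(m["doi"]) for m in mentions)
print(counts)  # Counter({'10.1000/xyz123': 2, '10.1000/abc999': 1})
```

The two opposing mentions of `10.1000/xyz123` collapse into a single count of 2; any sentiment distinction would require an additional semantic analysis step that current altmetrics aggregation does not perform.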

Altmetrics: State of the Art and a Look into the Future http://dx.doi.org/10.5772/intechopen.76874

Furthermore, the coverage of scientific publications is relatively low, and its distribution varies heavily both across disciplines and across platforms. Haustein et al. found that 21.5% of all scientific publications indexed in Web of Science in 2012 were mentioned in at least one tweet, while fewer than 5% of these publications were mentioned in any other social medium [20]. By comparison, 67% of the publications were cited at least once in Web of Science. A feasibility study conducted by the BMBF shows strong variation in coverage on altmetric.com between the scientific disciplines: publications from the field of medicine are represented considerably more often than, for example, publications from the engineering sciences [7]. Differences in coverage appear to benefit the humanities in particular. While these are scarcely covered in established databases such as Web of Science, their coverage is considerably better in the field of altmetrics, according to a study conducted by Hammarfelt: over 61% of the investigated publications in this field have at least one reader on Mendeley, and more than 20% have already been discussed on Twitter [21].

In general, the data basis underlying altmetrics is often problematic: reproducing the data is almost impossible because data providers change, modify their data stock, or disappear completely [4]. For example, platforms such as Weibo or LinkedIn, which are included in the sources covered by altmetric.com, are no longer analyzed because these providers no longer grant access. Quality control, such as checking the validity of accounts or cleaning up duplicates, rarely takes place on social media platforms, which complicates the aggregation and filtering of data for altmetrics service providers [22].

**Figure 1.** Example of the representation of the Altmetric Donut and its composition.

Furthermore, Fraumann et al. ascertained that duplicates can be found in several types of sources on altmetric.com, which makes the credibility of the attention score uncertain [23]. This attention score is currently used by many scientific publishers and institutions as a marketing tool in the form of the "Altmetric Donut" (see **Figure 1**). The Altmetric Donut is implemented on the websites of the journals *Nature* and *Science*, among others, and in the repositories of the universities of Cambridge and Zürich. The attention score is computed by an algorithm that adds up the differently weighted attention that a scientific output receives across diverse sources. In science, this trend is regarded with skepticism; the Altmetric Donut is viewed as a successful gimmick that is meaningless for science [9]. In general, simply adding up counts into a single metric is "impossible and undesirable" [12]. Benchmarks such as the attention score therefore do not represent the impact of scientific performance, but are suited solely to filtering out those articles that have sparked interest in social media [24].

**4. Requirements of altmetrics**

To date, the European Commission ascribes high significance to altmetrics, particularly against the backdrop of open science. This is also reflected in its establishment of an associated expert group, whose efforts have so far led to a compilation of twelve recommendations within the open science context. At the supranational political level of the European Union, the importance of guidelines for the conscientious application of metrics is thus emphasized. In the following, these guidelines are interlaced with the demands of the Leiden Manifesto for research metrics.

The Leiden Manifesto emphasizes complementarity as a central principle and basis of any evaluation practice. According to it, for the existing qualitative practices, the aim
