

Comunicar Journal 41: Black holes of Communication (Vol. 21 - 2013)

Altmetrics: New indicators for scientific communication in Web 2.0

https://doi.org/10.3916/C41-2013-05

Daniel Torres-Salinas

Álvaro Cabezas-Clavijo

Evaristo Jiménez-Contreras

Abstract

In this paper we review the so-called altmetrics, or alternative metrics. The concept arises from the development of new indicators, based on Web 2.0, for the evaluation of research and academic activity. The underlying assumption is that variables such as mentions in blogs, the number of tweets, or the number of researchers bookmarking a paper may be legitimate indicators for measuring the use and impact of scientific publications. These indicators are currently a focus of attention for the bibliometric community and are being discussed and debated. We describe the main platforms and indicators, and we analyse as a sample the Spanish research output in Communication Studies, comparing traditional indicators such as citations with these new metrics. The results show that the most cited papers are also the ones with the highest impact according to the altmetrics. We conclude by pointing out the main shortcomings of these metrics and the role they may play in measuring research impact on 2.0 platforms.

Keywords

Science, scientific communication, information, communication, Internet, social networks, quantitative methods, social web, Web 2.0


1. Introduction

Altmetrics is a very recent term that can be defined as the creation and study of new indicators, based on Web 2.0, for the analysis of academic activity (Priem & al., 2010). The underlying premise is that, for example, mentions in blogs, numbers of retweets or saves of articles in reference management systems may be valid measures of the use of scientific publications. Measuring the visibility of science on the Internet is not, however, a new phenomenon. Altmetrics has its origins in the webometrics of the nineties, the quantitative study of the characteristics of the web (Thelwall & al., 2005). Webometrics derived from the application of bibliometric methods to online sites and encompasses various disciplines, including communication. Despite the web playing an increasingly important role in social and economic relations, this discipline has not been able to overcome certain limitations inherent in its methods and information sources. It has, however, contributed a perspective complementary to traditional citation analysis through the study of links, mailing-list communications and the structure of the academic web. Shortly afterwards, the consolidation of scientific communication through electronic journals and media such as repositories opened the door to new indicators.

The so-called «usage bibliometrics» (Kurtz & Bollen, 2010), based on downloads of scientific materials, reveals that indicators of the use of publications measure a different dimension from bibliometric indicators (Bollen & al., 2009) and display behaviour patterns different from citation (Schloegl & Gorraiz, 2010). With a view to measuring scientific impact, these indicators offer complementary information. Without doubt, the idea that traditional bibliometric measures, and the sources on which they base their calculations, are insufficient permeates the scientific community. This has led to the emergence of new indicators, such as the SJR (González-Pereira & al., 2010) or the Eigenfactor (Bergstrom, West & Wiseman, 2008), which are based on the idea behind Google’s PageRank algorithm; there is a clear symbiotic relationship between web-based and bibliometric methods. This move is motivated by the dissatisfaction of many scientists with bibliometric methods, in particular the highly criticised Impact Factor (Seglen, 1997; Rossner, Van Epps & Hill, 2007), and has been exacerbated by the appearance of new databases such as Scopus and Google Scholar. The latter’s power and coverage, but also its normalisation problems, illustrate both the wealth of academic information on the web and the difficulty of adequately understanding and analysing this information (Torres-Salinas, Ruiz-Pérez & Delgado, 2009; Delgado & Cabezas-Clavijo, 2012).

It is in this context, with the arrival of Web 2.0 and scientists’ gradual adoption of these platforms as tools for the diffusion and receipt of scientific information (Cabezas-Clavijo, Torres-Salinas & Delgado, 2009), and with part of the scientific community relatively receptive, that scientometrics 2.0 (Priem & Hemminger, 2010), or altmetrics (Priem & al., 2010), began to be discussed. Although, in a wider sense, any unconventional measure for the evaluation of science can be considered an alternative indicator, sensu stricto it would be more accurate to speak of indicators derived from 2.0 tools; that is to say, measures generated from the interactions of social web users (primarily but not exclusively scientists) with researcher-produced material. One of the principal strengths of altmetrics lies in its provision of information at the article level (Neylon & Wu, 2009), which enables the impact of papers to be assessed beyond the bounds of their publication sources. Various studies have shown that altmetrics can also be applied at other levels of aggregation, such as journals (Nielsen, 2007) or universities (Orduña & Ontalba, 2012). Additionally, altmetrics offer a new perspective, given the almost real-time information they provide on research impact. This monitoring, in the form of review by peer collectives or post-publication peer review (Mandavilli, 2011), undoubtedly introduces new forms of scrutiny by the scientific community.

Taking into account the impact of Web 2.0 and its now central position within communication research, this paper undertakes a review of altmetrics, focusing on quantitative studies of these indicators. Firstly, the main platforms and indicators are explained, followed by a comparative evaluation of a selection of communication papers showing the number of citations received and their 2.0 indicators. Next, the principal empirical studies are reviewed, centring on the correlations between bibliometric and alternative indicators. To conclude, the main limitations of altmetrics are highlighted, alongside a reflection on the role altmetrics may play in understanding the impact of research on Web 2.0 platforms.

2. Principal platforms and altmetric indicators

The migration online of bibliographic reference management and bookmarking systems, where researchers regularly manage their personal libraries and references, has generated a series of original indicators: for example, the number of times a study has been bookmarked as a favourite, or the number of times it has been added to a bibliographic collection. Such indicators point to the reader interest aroused by scientific papers and the use made of them (Haustein & Siebenlist, 2011). Some authors, such as Taraborelli (2008), note that these indicators also represent a form of quick review, reflecting the degree to which papers are accepted by the scientific community. Among the most usual platforms for extracting these types of indicators are CiteULike, Connotea and Mendeley (Li, Thelwall & Giustini, 2011). Of these, Mendeley currently arouses the most interest: according to its web page statistics, more than 2 million users have uploaded a total of 350 million documents, figures that have made an article’s number of Mendeley readers one of the most accepted metrics within altmetrics for evaluating an article’s impact.

Other usual measurements are the mentions papers receive on the many existing social networks, a reflection of the diffusion and dissemination of publications (Torres-Salinas & Delgado, 2009). Normally, general social networks such as Facebook or Twitter are used to calculate indicators, by analysing the number of «likes», the number of times an article is shared, or the tweets and retweets received. Alternative metrics also include the blog citations received by scientific articles, especially in scientific blogs such as those included in the Nature Blogs or Research Blogging networks (Fausto & al., 2012), as well as the citations received by articles, journals and authors in the popular Wikipedia (Nielsen, 2007). These measurements are quantitative approximations of the interest aroused within the scientific community, and also among the general public, which transcend or complement the impact measured by traditional citation indexes. Finally, it is worth mentioning that news promotion systems such as Menéame or Reddit, or subject-specialised platforms such as Documenea, can also offer indicators of research impact among a non-specialised public (Torres-Salinas & Guallar, 2009).

As can be seen in table 1, there exists a large number of indicators of distinct nature, origin and degree of normalisation. This means that the first difficulty faced when compiling information for a specific publication, and subsequently calculating its altmetrics, is the high cost in time and effort. To address this problem, a series of tools have emerged to help monitor impact. Generally, once one or more documents are entered, these platforms use a unique identifier such as the DOI or the PubMed ID to return the grouped metrics. Some of these tools are Altmetric.com, Plum Analytics, ScienceCard, CitedIn and ImpactStory. For scientific papers, statistics are normally presented from Facebook (clicks, shares, likes or comments), Mendeley (readers, number of groups), Delicious, Connotea and CiteULike (bookmarks) and Twitter (tweets and influential tweets). In their favour, these tools enable the easy retrieval of statistics for collections of papers; however, they sometimes present contradictory results and recover the statistics only partially.
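
As an illustration of how such aggregators are queried, the following minimal sketch retrieves the grouped metrics for a single article from the public Altmetric.com REST endpoint, using its DOI as the unique identifier. This is a hedged example: it assumes the v1 endpoint and JSON field names as publicly documented, which may change over time.

```python
import requests

def fetch_altmetrics(doi):
    """Retrieve grouped 2.0 metrics for one article from Altmetric.com by DOI."""
    resp = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=10)
    if resp.status_code == 404:
        return None  # no altmetric activity recorded for this DOI
    resp.raise_for_status()
    data = resp.json()
    readers = data.get("readers", {})  # reader counts may be returned as strings
    return {
        "tweets": data.get("cited_by_tweeters_count", 0),
        "mendeley_readers": int(readers.get("mendeley", 0) or 0),
        "citeulike_bookmarks": int(readers.get("citeulike", 0) or 0),
    }

# Example: one of the papers cited in this article (Shuai, Pepe & Bollen, 2012).
print(fetch_altmetrics("10.1371/journal.pone.0047523"))
```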

3. Altmetrics versus bibliometrics: examples in the field of communication

In order to illustrate these tools and their derived indicators, data was compiled for the 30 journal papers in the field of communication most cited in Web of Science for the years 2010, 2011 and 2012 (the ten most cited for each year). This sample was compared with a random control group of another 30 papers, comprising uncited articles from the same journals and years, the objective being to verify whether a connection exists between the most cited articles and those with superior alternative indicators. Once both samples of articles were downloaded from Web of Science (n=60; date: 04/02/2013), the altmetric information was compiled using ImpactStory and Altmetric.com as sources. The following indicators were calculated for each article: mentions of the paper on Twitter, readers who saved it in Mendeley, and the number of times it was bookmarked in CiteULike (table 2). The high occurrence of zeros, even among the most cited articles and particularly for the CiteULike indicators, is notable; this reveals one of the limitations of these statistics, namely the scant capacity of some of these tools to reflect scientific activity.

The frequently cited articles were tweeted on more occasions than the studies in the control sample (table 3). According to the first source (ImpactStory), the cited articles were tweeted on average once, while the control sample received no tweets; according to Altmetric.com, these figures rise to 2.5 and 0.8 respectively. However, owing to the large number of papers never tweeted, the median is zero in all cases. Turning to CiteULike, the social bookmarking tool for scientists, the articles most cited between 2010 and 2012 were saved an average of 1.5 times (1.3 according to Altmetric.com), against 0.1 for the control sample, although only between 23% and 30% of the studies show non-zero values. The most representative data, however, comes from Mendeley, where the most cited studies were saved by an average of 18.6 readers (15.2 according to Altmetric.com), whilst the control sample shows an average of 4.6 readers (2.4 according to Altmetric.com). That is, the most cited papers are also saved more often by academics than uncited papers from the same journals. This is also the indicator with the best coverage: between 57% and 62% of the articles, depending on the source consulted, present non-zero values.
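
Comparisons of this kind are straightforward to compute. The sketch below assumes a hypothetical CSV file, communication_papers.csv, with one row per article, a group column distinguishing the cited and control samples, and one column per indicator; the file name and columns are illustrative, not the authors’ actual dataset.

```python
import pandas as pd

# Hypothetical dataset: one row per article, with its sample ("cited"/"control")
# and the counts gathered from ImpactStory or Altmetric.com.
df = pd.read_csv("communication_papers.csv")  # columns: group, tweets, citeulike, mendeley

indicators = ["tweets", "citeulike", "mendeley"]

# Mean and median per indicator and sample (cf. table 3).
print(df.groupby("group")[indicators].agg(["mean", "median"]))

# Coverage: share of articles with a non-zero value for each indicator,
# e.g. roughly 0.57-0.62 for Mendeley readers in the paper's sample.
print(df.groupby("group")[indicators].apply(lambda g: (g > 0).mean()))
```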

4. Relationships between bibliometric indicators and altmetrics

An interesting underlying question, in view of the data presented and the various studies undertaken to date, is the relationship between classic bibliometric indicators and the new metrics. Such studies are of interest because they reveal whether altmetrics correlate with papers’ citations or whether, on the contrary, they reflect a new dimension of impact. In the sample of 60 communication studies, the correlation coefficients between citations in Web of Science and the altmetrics are low and of little significance (table 4). The highest is the Pearson correlation coefficient between citations and the number of Mendeley readers, which barely reaches 0.52.
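
Coefficients of this kind can be reproduced with standard tools. The sketch below computes the citations-Mendeley correlation with SciPy, again assuming the hypothetical communication_papers.csv file introduced above, here with an additional wos_citations column.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("communication_papers.csv")  # hypothetical columns: wos_citations, mendeley

# Pearson correlation between Web of Science citations and Mendeley readers;
# the paper reports r = 0.52 for this pair, the highest value in table 4.
r, p = pearsonr(df["wos_citations"], df["mendeley"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```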

These results are in accordance with those obtained in other scientific papers (table 4). Cabezas-Clavijo & Torres-Salinas (2010) showed that, for articles published in the journal PLoS ONE, there is no connection between citations and the comments or blog links received. A similar situation occurs if the Impact Factor or the Eigenfactor score is used instead of citations (Fausto & al., 2012). With regard to the correlation between citations and Twitter, Eysenbach (2011) observed very poor correlations in a global sample of 286 articles. The highest correlations between bibliometric indicators and altmetrics arise, above all, when the former are compared with the number of readers in Mendeley; this was demonstrated by Li, Thelwall & Giustini (2011) using citations received in Google Scholar as the indicator: the correlation with Mendeley reaches 0.60 for a collection of papers published in «Science» and «Nature». In more specific fields of knowledge, such as bibliometrics, the correlation between readers in Mendeley and citations in Scopus reaches 0.45 (Bar-Ilan & al., 2012), a figure similar to that obtained in this paper.

Therefore, the scientific literature to date has not convincingly demonstrated a correlation between any of the altmetrics and the number of citations. However, evidence does exist of a certain association between articles that are highly cited or frequently downloaded and those that are highly tweeted. For example, Eysenbach (2011), on isolating 55 highly cited articles from his sample, showed that in 75% of cases they were also highly tweeted, reaching a correlation coefficient of 0.69, the highest calculated to date. In addition, Shuai, Pepe & Bollen (2012), working with a sample of pre-prints deposited in ArXiv, observed greater download levels for papers promptly disseminated on Twitter. In the present case, the most cited sample (table 3) also had higher rates of activity on social networks.

The results presented in table 4 suggest that altmetrics measure a dimension of scientific impact that is still to be determined. As stated by Priem, Piwowar & Hemminger (2012), additional research is needed into the validity and precise significance of these metrics, as, for example, in the case of Mendeley readers (Bar-Ilan, 2012). It seems apparent that altmetrics capture a different dimension, which could be entirely complementary to citation, given that the various platforms have audiences more diverse than the merely academic. If the phenomenon is observed from the other perspective, that of the papers with the greatest altmetric impact, the studies most widely diffused across social networks in 2012 were not always related to strictly scientific interests, but to cross-cutting topics that better reflected the interests of the general public. For example, some of the scientific articles arousing the greatest interest on social networks in 2012 were related to very topical issues, such as the Fukushima nuclear accident; to cross-cutting topics, such as the effect of coffee consumption on health; or to interests closely linked to the profile of a social network user, such as an analysis of classic Nintendo games (Noorden, 2012). It is therefore not surprising that altmetrics are beginning to be equated with the social impact of research.

5. By way of conclusion: current problems for altmetrics

Without doubt, altmetrics offer a different outlook when it comes to measuring the visibility, in the widest sense, of scientific and academic papers. These new indicators should be welcomed as complementary to traditional metrics. However, being very new and only recently applied in scientific contexts, altmetrics still have certain limitations that must be taken into account. Among these is their place within the so-called liquid culture, as opposed to solid culture (Area & Ribeiro, 2012). This situation is clearly shown by the evanescent nature of their sources: whereas citation indexes such as Web of Science are stable and have trajectories of decades, the same cannot be said of the 2.0 world (Torres-Salinas & Cabezas-Clavijo, 2013). In general, the platforms that archive papers, and ultimately generate indicators, tend to have very short life cycles and can disappear, as happened with Connotea, which closed in March 2013. Platforms can also eliminate certain functions, as occurred with Yahoo’s removal of the Search by Site command, which shook the foundations of all cybermetrics (Aguillo, 2012). This makes it difficult at present to choose a reference tool with guaranteed medium-term continuity. Many uncertainties also remain as to the reproducibility and final significance of results, especially concerning their scientific relevance, which in turn hinders the incorporation of these tools into the evaluative toolkit.

Additionally, the proliferation of sources and of users indexing articles aggravates the traditional bibliometric problems of normalisation (Haustein & Siebenlist, 2011). In the 2.0 environment, an article can be found indexed or mentioned in multiple ways: by a normalised identifier, by a URL copied from a web page, by part of the title, and so on. This makes the compilation of direct mentions, as opposed to indirect reviews of an article, a laborious matter. For example, if an article has been reviewed in a blog, should the diffusion of that blog entry, or its comments, be added to the article’s original impact? Finally, the empirical study undertaken here also confirmed the scant concordance between ImpactStory and Altmetric.com, which provide differing statistics, and only for items with normalised identifiers (DOIs or another type of identifier). Not only is compilation difficult; in most instances, the data gathered from many platforms also produce very low numbers. To this must be added the global difficulty these tools face in obtaining freely available data from some of the 2.0 services (Howard, 2012). Although Adie & Roe (2013) have calculated that more than 2.8 million articles have had at least one altmetric indicator calculated since 2011, the magnitudes provided remain lower than those of citation in the majority of cases (see, for example, the numbers provided in the case studies of Bar-Ilan & al., 2012, or Priem, Piwowar & Hemminger, 2012).
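
As a small illustration of this normalisation problem, the sketch below reduces several of the forms in which an article may be mentioned on the social web to a single canonical DOI. The helper name and the regular expression are illustrative assumptions, not a method used by the tools discussed above.

```python
import re

# Rough pattern for a DOI embedded in free text or a URL (illustrative only).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def normalise_mention(text):
    """Extract and canonicalise a DOI from a free-form mention, if present."""
    match = DOI_PATTERN.search(text)
    return match.group(0).rstrip(".,;)").lower() if match else None

mentions = [
    "https://doi.org/10.3916/C41-2013-05",
    "Great paper! dx.doi.org/10.1371/journal.pone.0047523",
    "Saw this on a blog, no identifier given",
]
print([normalise_mention(m) for m in mentions])
# ['10.3916/c41-2013-05', '10.1371/journal.pone.0047523', None]
```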

If these indicators are indeed to be used, beyond mere experiments and academic studies, in the evaluation of scientific activity, there is no doubt that many theoretical (significance), methodological (valid sources) and technical (normalisation) problems still have to be resolved. These indicators are clearly suited to measuring the social impact of science and, above all, the immediate impact or visibility of publications, something citation cannot capture. The new metrics have a very short time-to-impact, with an initial burst of activity capturing the visibility of papers at the very moment of publication (Priem & Hemminger, 2010). This facet complements the classic indicators, and even expert review, which altmetrics should not aspire to substitute, a limitation and a function noted by most scientists (Nature Materials, 2012). Additionally, altmetrics can play an identifiable role in fields where bibliometrics is most lacking, as may be the case in the humanities (Sula, 2012). It can be stated that new forms of scientific communication require new forms of measurement. For the moment, the only definite conclusion seems to be that altmetrics are here to stay: to enrich the possibilities and dimensions of impact analysis in all fields of scientific research, and to illuminate from a new perspective the relationship between science and society.

References

Adie, E. & Roe, W. (2013). Altmetric: Enriching Scholarly Content with Article-level Discussion and Metrics. Learned Publishing, 26(1), 11-17. (DOI:10.1087/20130103).

Aguillo, I. (2012). La necesaria evolución de la cibermetría. Anuario ThinkEPI, 6, 119-122. (www.thinkepi.net/la-necesaria-evolucion-de-la-cibermetria) (02-03-2013).

Area-Moreira, M. & Ribeiro-Pessoa, M.T. (2012). De lo sólido a lo líquido: Las nuevas alfabetizaciones ante los cambios culturales de la Web 2.0. Comunicar, 38, 13-20. (DOI:10.3916/C38-2012-02-01).

Bar-Ilan, J. (2012). JASIST@Mendeley. ACM Web Science Conference 2012 Workshop.

Bar-Ilan, J., Haustein, S. & al. (2012). Beyond Citations: Scholars’ Visibility on the Social Web. (http://arxiv.org/ftp/arxiv/papers/1205/1205.5611.pdf) (03-02-2013).

Bergstrom, C.T., West, J.D. & Wiseman, M.A. (2008). The Eigenfactor Metrics. The Journal of Neuroscience, 28(45), 11433-11434. (DOI:10.1523/JNEUROSCI.0003-08.2008).

Bollen, J., Van de Sompel, H., Hagberg, A. & Chute, R. (2009). A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE, 4(6), e6022. (DOI:10.1371/journal.pone.0006022).

Cabezas-Clavijo, A. & Torres-Salinas, D. (2010). Indicadores de uso y participación en las revistas científicas 2.0: el caso de PLoS One. El Profesional de la Información, 19(4), 431-434. (DOI:10.3145/epi.2010.jul.14).

Cabezas-Clavijo, A., Torres-Salinas, D. & Delgado López-Cózar, E. (2009). Ciencia 2.0: Catálogo de herramientas e implicaciones para la actividad investigadora. El Profesional de la Información, 18(1), 72-79. (DOI:10.3145/epi.2009.ene.10).

Delgado-López-Cózar, E. & Cabezas-Clavijo, Á. (2012). Google Scholar Metrics: an unreliable tool for assessing scientific journals. El Profesional de la Información, 21(4), 419-427. (DOI:10.3145/epi.2012.jul.15).

Eysenbach, G. (2011). Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact. Journal of Medical Internet Research, 13(4), e123. (DOI:10.2196/jmir.2012).

Fausto, S., Machado, F. & al. (2012). Research Blogging: Indexing and Registering the Change in Science 2.0. PLoS ONE, 7(12), e50109. (DOI:10.1371/journal.pone.0050109).

González-Pereira, B., Guerrero-Bote, V. P. & Moya-Anegón, F. (2010). A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), 379–391. (DOI:10.1016/j.joi.2010.03.002).

Haustein, S. & Siebenlist, T. (2011). Applying Social Bookmarking Data to Evaluate Journal Usage. Journal of Informetrics, 5(3), 446-457. (DOI:10.1016/j.joi.2011.04.002).

Howard, J. (2012). Scholars Seek Better Ways to Track Impact Online. Chronicle of Higher Education (http://chronicle.com/article/As-Scholarship-Goes-Digital/130482/) (02-03-2013).

Kurtz, M.J. & Bollen, J. (2010). Usage bibliometrics. Annual Review of Information Science and Technology, 44 (1), 1-64. (DOI: 10.1002/aris.2010.1440440108).

Li, X., Thelwall, M. & Giustini, D. (2011). Validating Online Reference Managers for Scholarly Impact Measurement. Scientometrics, 91(2), 461-471. (DOI:10.1007/s11192-011-0580-x).

Mandavilli, A. (2011). Trial by Twitter. Nature, 469, 286-287.

Nature Materials. (2012). Alternative Metrics. Nature Materials, 11, 907-908.

Neylon, C. & Wu, S. (2009). Article-level Metrics and the Evolution of Scientific Impact. PLoS Biology, 7(11), e1000242. (DOI:10.1371/journal.pbio.1000242).

Nielsen, F. (2007). Scientific Citations in Wikipedia. First Monday, 12(8-6). (http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1997/1872) (03-02-2013).

Noorden, R.V. (2012). What Were the Top Papers of 2012 on Social Media? Nature News Blog. (http://blogs.nature.com/news/2012/12/what-were-the-top-papers-of-2012-on-social-media.html) (03-02-2013).

Orduña-Malea, E. & Ontalba-Ruipérez, J.A. (2012). Selective Linking from Social Platforms to University Websites: A Case Study of the Spanish Academic System. Scientometrics. (DOI:10.1007/s11192-012-0851-1).

Priem, J. & Hemminger, B.M. (2010). Scientometrics 2.0: Toward New Metrics of Scholarly Impact on the Social Web. First Monday, 15(7-5). (http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2874/2570) (03-02-2013).

Priem, J., Parra, C., Piwowar, H., Groth, P. & Waagmeester, A. (2012). Uncovering Impacts: A Case Study in Using Altmetrics Tools. Workshop on the Semantic Publishing SePublica 2012 at the 9th Extended Semantic Web Conference. (http://sepublica.mywikipaper.org/sepublica2012.pdf#page=46) (07-02-2013).

Priem, J., Piwowar, H. & Hemminger, B.M. (2012). Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact. ACM Web Science Conference 2012 Workshop (http://arxiv.org/abs/1203.4745) (03-02-2013).

Priem, J., Taraborelli, D., Groth, P. & Neylon, C. (2010). Altmetrics: A Manifesto. (http://altmetrics.org/manifesto/) (02-03-2013).

Rossner, M., Van Epps, H. & Hill, E. (2007). Show me the Data. Journal of Cell Biology, 179(6), 1091-1092.

Schloegl, C. & Gorraiz, J. (2010). Comparison of Citation and Usage Indicators: The Case of Oncology Journals. Scientometrics, 82(3), 567-580. (DOI: 10.1007/s11192-010-0172-1).

Seglen, P. (1997). Why the Impact Factor of Journals Should not be Used for Evaluating Research. British Medical Journal, 314(7079), 498–502.

Shuai, X., Pepe, A. & Bollen, J. (2012). How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations. PLoS ONE, 7(11), e47523. (DOI:10.1371/journal.pone.0047523).

Sula, C.A. (2012). Visualizing Social Connections in the Humanities: Beyond Bibliometrics. Bulletin of the American Society for Information Science and Technology, 38(4), 31-35. (DOI:10.1002/bult.2012.1720380409).

Taraborelli, D. (2008). Soft Peer Review: Social Software and Distributed Scientific Evaluation. Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP ’08) (http://eprints.ucl.ac.uk/8279/) (03-02-2013).

Thelwall, M., Vaughan, L. & Björneborn, L. (2005). Webometrics. Annual Review of Information Science and Technology, 39(1), 81-135. (DOI: 10.1002/aris.1440390110).

Torres-Salinas, D. & Cabezas-Clavijo, Á. (2013). Altmetrics: no todo lo que se puede contar, cuenta. Anuario ThinkEPI, 7. (www.thinkepi.net/altmetrics-no-todo-lo-que-se-puede-contar-cuenta) (03-02-2013).

Torres-Salinas, D. & Delgado-López-Cózar, E. (2009). Estrategia para mejorar la difusión de los resultados de investigación con la Web 2.0. El Profesional de la Información, 18(5), 534-539. (DOI:10.3145/epi.2009.sep.07).

Torres-Salinas, D. & Guallar, J. (2009). Evaluación de DocuMenea, sistema de promoción social de noticias de biblioteconomía y documentación. El Profesional de la Información, 18(2), 171-179. (DOI:10.3145/epi.2009.mar.07).

Torres-Salinas, D., Ruiz-Pérez, R. & Delgado-López-Cózar, E. (2009). Google Scholar como herramienta para la evaluación científica. El Profesional de la Información, 18(5), 501–510. (DOI:10.3145/epi.2009.sep.03).