

Comunicar Journal 44: Moocs in Education (Vol. 22 - 2015)

Usability and satisfaction in multimedia annotation tools for MOOCs

https://doi.org/10.3916/C44-2015-06

Juan-José Monedero-Moya

Daniel Cebrián-Robles

Philip Desenne

Abstract

The worldwide boom in digital video may be one of the reasons behind the exponential growth of MOOCs. The evaluation of a MOOC requires a great degree of multimedia and collaborative interaction. Given that videos are one of the main elements in these courses, it would be interesting to work on innovations that would allow users to interact with multimedia and collaborative activities within the videos. This paper is part of a collaboration project whose main objective is «to design and develop multimedia annotation tools to improve user interaction with contents». This paper will discuss the assessment of two tools: the Collaborative Annotation Tool (CaTool) and Open Video Annotation (OVA). The latter was developed by the aforementioned project and integrated into the edX MOOC. The project spanned two academic years (2012-2014) and the assessment tools were tested on different groups in the Faculty of Education, with responses from a total of 180 students. Data obtained from both tools were compared using average contrasts. Results showed significant differences in favour of the second tool (OVA). The project concluded with a useful video annotation tool, whose design was endorsed by users, together with a quick and user-friendly instrument for evaluating any software or MOOC. A comprehensive review of video annotation tools was also carried out at the end of the project.

Keywords

Usability, satisfaction, design tools, evaluation software, multimedia annotations, educational software, MOOC, university education


1. Introduction

The development of digital video has given users greater accessibility; it has made its way into our homes and lives, turning consumer services such as YouTube into a sociological phenomenon. YouTube viewings currently account for an average of 6 million video hours per month1. Clearly much has changed since the Lumière brothers invented cinema (Díaz-Arias, 2009: 64). This development has opened the way for technologies that allow users to share and collaborate (Computer Supported Collaborative Learning: CSCL). Such technologies include collaborative video annotation (Yang, Zhang, Su & Tsai, 2011), which has led to the emergence of innovative social projects where video annotation tools are used collectively (Angehrn, Luccini & Maxwell, 2009). The digitization of video (Bartolomé, 2003), along with hypermedia (García-Valcárcel, 2008), opened up new interactive possibilities in education and has represented a breakthrough for learning and teaching by leaving behind the passive reading of videos (Colasante, 2011).

There is a long history of experimental studies on how to apply video in education (Ferrés, 1992; Cebrián, 1994; Bartolomé, 1997; Cabero, 2004; Area Moreira, 2005; Aguaded & Sánchez, 2008; Salinas, 2013). In the field of teacher training, there are examples related to the concept of microteaching, which has been questioned for its reductionist approach to initial teacher training. Nevertheless, it represented an effort to arrive at a reasonably rigorous conception of teaching. Leaving aside the theoretical starting point of this paper, there are some recent studies and developments of video annotation tools that, supported by other conceptions of teaching (Schön, 1998; Giroux, 2001), have shown efficacy in meta-evaluations of initial training (Hattie, 2009). The application contexts of these studies are many and varied, and address processes such as reflection, shared evaluation and the collective analysis of classroom situations. They have therefore proven to be effective tools for teachers and teacher trainees to collectively analyse everyday teaching practice (Rich & Hannafin, 2009a; Hosack, 2010; Rich & Trip, 2011; Picci, Calvani & Bonaiuti, 2012; Etscheidt & Curran, 2012; Ingram, 2014).

In relation to initial training and the development of reflective skills, Orland-Barak & Rachamim (2009) carried out an interesting review and study comparing different models of reflection supported by video. Rich & Hannafin (2009b) conducted another significant review of technological solutions and of the potential of video annotation tools for teaching. They compared these tools against the following criteria: how to use, note style, collaboration, safety, online-offline, format, resource import vs. export, learning curve and cost (free/hiring research teams). An even more extensive review can be found in Rich & Trip (2011), shown in table 1, completed here with the solutions presented at the latest international workshop on multimedia annotations, ‘iAnnote14’2.

2. Integrating collaborative annotation tools in MOOCs

Video and other related emerging technologies (big data analysis, ontologies, the semantic web, geolocation, multimedia notes, rubric-based assessment, federation technologies, etc.) quickly gained prominence in MOOCs, shaping the core structure of these courses. The appealing and widespread use of videos may have played a role in the boom of MOOCs, prompting a search for new interactive ways of reading videos and content in general. It is only recently that MOOCs have incorporated earlier experiences and developments in collaborative multimedia annotation, allowing for a more interactive, multimedia learning process and the sharing of users’ views on these platforms. This has also opened the way for a new model of learning community within the MOOC, one that can manage a significant flow of meanings extracted from reading content and from annotations in different codes, namely video, text, image and sound notes, as well as hyperlinks and eRubrics (Cebrián-de-la-Serna & Bergman, 2014; Cebrián-de-la-Serna & Monedero Moya, 2014).

These notes can be made in different formats and codes displaying content, such as annotations on videos, texts, images, maps, charts, etc., as well as annotations created by users. These possibilities open up a whole new line of technological development and research on the dynamic narrative of messages, given the speed with which MOOC platforms and courses are being implemented worldwide. We therefore need to innovate in the design and content of video tools on the basis of their new interactive possibilities, so as not to repeat mistakes from the past, when, in the early stages of a new technology, the narrative models of preceding technologies were adopted without exploring the interactive potential of the new formats. Something similar happened during the transition from radio messages to television messages, as pointed out by Guo, Kim & Rubin (2014), who studied the video sessions of four edX courses. They examined the different formats used and concluded that traditional lecture recordings cannot simply be transferred to MOOCs, because students do not pay enough attention to them. As a consequence, they suggested a list of recommendations that can be summarised as follows: more interactive and easy-to-edit videos, shorter videos (around 6 minutes), and easy-to-share notes. The development of educational software and the possibilities offered by free software have generated a community of developers who share their experience. The fact that these products get feedback from users also constitutes a model of software production, as communities of practice emerge around tools, services and specific platforms such as GitHub3.

The symbiotic relationship between developers and communities of practice has allowed MOOCs to evolve from structured approaches (xMOOCs) to communicative and collaborative approaches (cMOOCs) in their platforms and courses. Both approaches, however, require new interactive features in the videos. An example of such features is the project presented here, led by the HarvardX team for integration into the edX MOOC, whose objectives are twofold: on the one hand, to design high-capacity multimedia annotation tools to create multimedia meaning and share it with users; on the other, to support competence assessment, self-assessment and peer assessment through eRubrics. In order to introduce such high-impact changes quickly, we must rely on assessment strategies that allow end-users to evaluate tools while they are being developed. These assessment instruments must be quick and easy to apply, so as to collect data that can guide production (both technical and content production) even before the beta version appears. This is why our GTEA group maintains a line of work on the design, testing and evaluation of educational software, which aims to find a balance between educational and technological innovation, i.e. between generating new environments and ensuring usability and user satisfaction. The ultimate aim is for new interactive methodologies, such as multimedia annotation tools for MOOCs, to be validated by end-users. To do so, we need a parallel line of research and evaluation instruments that are reliable and valid for decision-making when designing educational software. We must take into account all possible elements of software evaluation from the users’ perspective (satisfaction, usability, cost, portability, productivity, accessibility, safety, etc.), in order to examine ease of use (usability) regardless of context, personal differences or device (tablets, mobile phones, computers, etc.).

This paper uses the following definition of usability: ‘the extent to which a product can be used by certain users to achieve specific goals with effectiveness, efficiency and satisfaction in a particular context of use’ (Bevan, 1997). Satisfaction is often treated as a construct within usability studies and instruments, although we believe it is rather the other way round: the ease of use of a tool or service is one element of overall user satisfaction. Satisfaction with technological tools and services can even be considered a sub-category within user satisfaction studies, as shown by studies on students’ satisfaction with university life (Blázquez, Chamizo, Cano & Gutiérrez, 2013). This is a live debate, given the massive presence of technological services and resources, and the digitisation that most communication, teaching, research and administration processes within universities have recently undergone. Both usability and user satisfaction are measured through questionnaires completed by users. There are usability questionnaires for websites and systems (Bangor, Kortum & Miller, 2008; 2009; Kirakowski & Corbett, 1988; Molich, Ede, Kaasgaard & Karyukin, 2004; Sauro, 2011), satisfaction questionnaires, and questionnaires covering both usability and satisfaction (Bargas-Avila, Lötscher, Orsini & Opwis, 2009; McNamara & Kirakowski, 2011).

3. Methodology

The present project arose from the mutual interest shared by our team and HarvardX Annotation Management in creating tools that facilitate meaning-making processes based on collective multimedia annotations. The general aim of the project was to create a new multimedia annotation tool specifically designed to respond to the new features of technological progress (e.g. the semantic web, annotation ontologies, etc.), as well as to the social practices currently developed by users on the Internet (learning in communities of practice, using mobile devices, collaborative work, communication in social networks, creating eRubrics, etc.). The tool is currently integrated into the edX MOOC and has been in use since January 2014 in the courses offered by HarvardX4. The technological development started from scratch, although it built on the progress made in multimedia annotation within the Open Annotation Community Group, and took into account the aforementioned literature as well as other developments by Harvard University. The results presented here are part of a collaborative project and show users’ opinions on usability and satisfaction, gathered with an instrument designed to assess web tools; such data are often required to design and improve tools. The methodology used in this paper therefore contrasted end-users’ usability and satisfaction ratings for the Collaborative Annotation Tool (CaTool), created by Harvard University in 2012, against those for the new tool created by the Open Video Annotation (OVA) project.

For methodological purposes, the new video annotation features were considered the independent variable. The development had a dual purpose: to serve as a collective multimedia annotation service, and to integrate the new features into the edX MOOC. The present paper only reports the results of assessing the video annotation features added to the edX MOOC. The platform, however, hosted the full OVA feature set: video, text, sound and high-quality image annotation (the last two at an experimental stage).

The study was divided into two parts: a) A first stage during the 2012-13 academic year, in which the Collaborative Annotation Tool (CaTool) was trialled on groups from different subjects in the Faculty of Educational Sciences at the University of Malaga (Spain). The usability and user satisfaction instrument that we had already created for other tools was also tested during this stage. b) A second stage during the 2013-14 academic year, in which the usability and user satisfaction instrument designed during the first stage was improved and applied to two groups from the Degree in Education that shared the same teacher, methods and tasks; here we compared two different annotation tools: CaTool and a beta tool that only included the OVA video annotation feature. In the first stage (2012-13), CaTool was tested in class within the Education department and across different types of subjects within the degree programme (core subjects, elective subjects, internships, etc.). The tool was federated by our team, and its combination with other tools, such as the eRubric and federation technology, provided interesting features in practice (see figure 1). The state of the art regarding the design, creation and assessment of previous video annotation tools was also compiled at this stage.

In the second stage, during the second half of 2013, the new Open Video Annotation (OVA)5 tool was created (figure 2), responding to an interactive and communicative teaching model in the MOOC. The creation and design of this tool was guided by the HarvardX annotation manager, and it included the following features: a) Entries could be edited in multimedia formats (video, text, image, etc.). b) Multimedia annotations could be added within the resource itself (in the video, image, etc.). c) Annotations could be shared and discussed by a large number of users, so that when someone received a message containing a note, a single click would take them to that note within the resource. d) Tags could be edited in a database of ontological annotations. e) Optionally, each entry could be geolocated. f) Annotations could easily be shared on social networks. g) eRubrics could be created when editing annotations.
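To make the annotation model more tangible, the sketch below shows what a single collective video annotation record could look like, loosely following the Open Annotation data model on which the project builds. It is only an illustration: the field names, the media-fragment selector and the values are assumptions, not the actual OVA schema.

```python
# Illustrative sketch only: field names and values are assumptions, loosely
# based on the Open Annotation data model, not the actual OVA schema.
video_annotation = {
    "id": "urn:example:annotation/42",               # hypothetical identifier
    "creator": "student_017",                        # hypothetical user id
    "body": {
        "type": "TextualBody",
        "value": "The narrator introduces the central metaphor here.",
        "tags": ["metaphor", "introduction"],        # ontological tags (feature d)
    },
    "target": {
        "source": "https://example.org/lecture-video.mp4",
        "selector": {                                # seconds 120-155 of the video
            "type": "FragmentSelector",
            "value": "t=120,155",
        },
    },
    "geolocation": {"lat": 36.715, "lon": -4.478},   # optional geolocation (feature e)
    "shared_with": ["course-forum", "social-network"],  # sharing targets (feature f)
}

# Following a link to this annotation would seek the player to the selector's
# time range, so a reader lands directly on the annotated fragment (feature c).
```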

Figure 1: eRubric tool integrated into CaTool annotations.

Figure 2: Multimedia annotation tool.

During the 2013-14 academic year, CaTool and OVA were tested. The test involved the same teacher, methodology and class lab, and all the student groups (180 students in total) of the mandatory second-year technological resources course within the Degree in Education in the Faculty of Educational Sciences at the University of Malaga. Afterwards, the enhanced usability and user satisfaction instrument from stage 1 was applied. The first test was performed on CaTool, and the second, a month later, on a beta tool that only included the OVA video annotation feature and had some limitations (it could only be used with the Chrome browser).

4. Analysis and results

The participant sample consisted of all the students from the aforementioned mandatory course in the Faculty of Educational Sciences, who worked with these tools for the first time. Once they had performed the task set by the teacher, they were asked to answer a questionnaire on usability and user satisfaction. The questionnaire consisted of a series of descriptive questions (age, gender, user level, etc.), followed by 26 sentences to be rated on a Likert scale of 1 to 5. There were direct sentences (1=the worst; 5=the best) as well as indirect sentences (1=the best; 5=the worst). For usability there were 17 sentences, 5 direct and 12 indirect; for user satisfaction there were 9, 7 direct and 2 indirect. The order of the sentences in the questionnaire was randomised, to discourage answering without reading. There was an open question at the end for students to write free comments. The average response time was 4 minutes. The questionnaire was administered online using LimeSurvey, and the data were analysed with SPSS (version 20). For analysis purposes, we checked that answers had been thought through rather than the sentences being rated mechanically just to complete the questionnaire. We detected 16 responses that gave similar values across the blocks of direct and indirect sentences, and these were therefore discarded as invalid. We applied the transformation y=6-x to the values of the indirect sentences, so that direct and indirect items would not cancel each other out in the calculations.
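As a minimal illustration of the preprocessing just described (discarding straight-lined responses and reverse-coding the indirect sentences with y=6-x), the following Python/pandas sketch shows one possible implementation. The column names are hypothetical; only the item counts (12 direct and 14 indirect items) and the two steps themselves come from the description above, and the code is not the authors' actual SPSS procedure.

```python
import pandas as pd

# Hypothetical column names: 12 direct and 14 indirect Likert items (1-5).
DIRECT_ITEMS = [f"d{i}" for i in range(1, 13)]
INDIRECT_ITEMS = [f"i{i}" for i in range(1, 15)]

def preprocess(responses: pd.DataFrame) -> pd.DataFrame:
    df = responses.copy()

    # Discard answers that give the same value to every direct item and to
    # every indirect item (the 16 straight-lined responses in the study).
    straight_lined = (df[DIRECT_ITEMS].nunique(axis=1) == 1) & \
                     (df[INDIRECT_ITEMS].nunique(axis=1) == 1)
    df = df[~straight_lined]

    # Reverse-code indirect items with y = 6 - x so that a higher value always
    # means a better rating and the two blocks do not cancel out in totals.
    df[INDIRECT_ITEMS] = 6 - df[INDIRECT_ITEMS]

    # Total usability-and-satisfaction score per respondent.
    df["total_score"] = df[DIRECT_ITEMS + INDIRECT_ITEMS].sum(axis=1)
    return df
```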

Significant differences in favour of OVA were found between the overall questionnaire means. When analysing the questionnaire by blocks, significant differences were also found in the usability block, but not in the user satisfaction block (table 2).
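The text reports average contrasts without naming the exact statistical test, so the sketch below assumes an independent-samples comparison of total scores between the CaTool and OVA groups (Welch's t-test, chosen here purely for illustration).

```python
from scipy import stats

def contrast_means(scores_catool, scores_ova, alpha=0.05):
    """Illustrative mean contrast between the two tools' total scores."""
    # Welch's t-test (unequal variances); the paper does not specify which
    # contrast was used, so this choice is an assumption for illustration.
    t, p = stats.ttest_ind(scores_ova, scores_catool, equal_var=False)
    mean_catool = sum(scores_catool) / len(scores_catool)
    mean_ova = sum(scores_ova) / len(scores_ova)
    return {
        "mean_catool": mean_catool,
        "mean_ova": mean_ova,
        "t": t,
        "p": p,
        "significant_in_favour_of_ova": p < alpha and mean_ova > mean_catool,
    }
```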

The contrast of the usability and satisfaction instrument between the two tools shows significant differences in favour of OVA in the following items: ‘I found the application to be pleasant’, ‘I found the application exhausting to use’, ‘The application does not need explaining to be used’, ‘I needed help to access the application’, ‘I ran into technical problems’, ‘It requires expert help’, ‘The response time in the interaction is slow’.

Figure 3: Histograms of total scores on the two tools.

Figure 3 shows the histograms of the total scores for each tool. From a score of 105 upwards there are more ratings for OVA than for CaTool, while the opposite is true for scores under 105. Respondents’ comments support the questionnaire results: they consider these tools easy, useful and innovative. The negative aspects were mainly attributed to technical issues: Internet access, slow servers or browser limitations in the beta version.
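For readers who want to reproduce a comparison like the one in figure 3 with their own data, a minimal plotting sketch is given below. The 105-point threshold is the one mentioned in the text; the bin count and styling are arbitrary choices.

```python
import matplotlib.pyplot as plt

def plot_score_histograms(scores_catool, scores_ova):
    """Overlaid histograms of total scores for the two tools (cf. figure 3)."""
    plt.hist(scores_catool, bins=15, alpha=0.5, label="CaTool")
    plt.hist(scores_ova, bins=15, alpha=0.5, label="OVA")
    plt.axvline(105, linestyle="--", color="grey", label="105-point threshold")
    plt.xlabel("Total usability and satisfaction score")
    plt.ylabel("Number of respondents")
    plt.legend()
    plt.show()
```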

5. Discussion and conclusions

The potential of digitalising video, along with new teaching processes at universities, was foreseen a long time ago (Aguaded & Macías, 2008: 687), although nowadays we envisage possibilities that go even beyond past predictions. The socialisation and distribution of information, free access to premium content, networks and learning communities for sharing and generating new ways of learning, and the technological development of the Internet (augmented reality, mobile technology, wearables, network capacity, etc.) are forcing universities to respond to new challenges.

MOOC platforms are not immune to these changes and will soon incorporate experiences and developments in the area of collective multimedia annotation. Innovations find in these massive platforms an ideal setting for development, testing and experimentation in educational research, and we certainly consider this new environment an ideal setting for conducting new experiments, studies and educational projects such as the one put forward here. The present project has shown that collective multimedia annotations are generally highly rated by students when they are easy to use (as observed in the aforementioned differences in means) and when they display certain features that are popular among young people, for instance features related to mobility, social networks, collective interaction and the broadcasting of shared meanings, as could be observed in the best-rated features and in the open answers when the two tools were compared. These features were added to the new Open Video Annotation (OVA) tool, which aims to be in line with university students’ symbolic and communicative competence. Students should therefore be more critical and better prepared for what Castells (2012: 23-24) defines as mass self-communication. He considers this to be vital in symbolic construction, as it mainly depends on «the created frameworks, i.e. the fact that the transformation of the communication environment directly affects the way in which meaning is constructed».

We believe that collective multimedia annotation has many educational possibilities in university teaching. Some of these possibilities go beyond the existing format, reaching the aforementioned ‘created framework’ nowadays represented by MOOCs. Their application and research can be of interest in educational settings beyond those studied in this project, such as: a) blended learning models currently developed at universities, which use materials and resources to support teaching; b) learning objects with multimedia annotations and the semantic web (García-Barriocanal, Sicilia, Sánchez-Alonso & Lytras, 2011); c) supervision during the Practicum (Miller & Carney, 2009) with ePortfolios (electronic portfolios) filled with multimedia evidence of learning, where the meanings given to annotations can be shared; d) dissemination of scientific knowledge, as suggested by Vázquez-Cano (2013: 90), by combining the written format with the video-article and the scientific pill. Such a combination would give scientific production greater visibility, dissemination and exchange. All the above contexts and experiences are innovative and consistent with the practice that we wish to implement widely in universities, thus supporting strong leadership in the knowledge society.

Support and notes

The collaborative project was entitled Open Video Annotation Project (2012-2014) (http://goo.gl/51W37d) and was made possible through the joint funding of the following institutions: Talentia scholarships and the Gtea Group (http://gtea.uma.es), PAI SEJ-462 Andalusian Regional Government, the University of Malaga and the Center for Hellenic Studies –CHS– (Harvard University) (http://chs.harvard.edu) (09-07-2014).

1 YouTube Statistics (http://goo.gl/AlYrCL) (09-07-2014).

2 International Workshop on Multimedia Annotations ‘iAnnote14’, San Francisco, California (USA), April 3-6, 2014 http://iannotate.org (09-07-2014).

3 Open Source Platform http://github.com.

4 The first course using OVA was ‘Poetry in America: Whitman’, in edX Harvard University http://goo.gl/I9bupN (09-07-2014).

5 OVA Tool (http://openvideoannotation.org) (09-07-2014).

References

Aguaded, I. & Sánchez, J. (2008). Niños adolescentes tras el visor de la cámara: experiencias de alfabetización audiovisual. Estudios sobre el Mensaje Periodístico, 14, 293-308.

Aguaded, J. & Macías, Y. (2008). Televisión universitaria y servicio público. Comunicar, 31, XVI, 681-689. (http://doi.org/cd4fkw).

Angehrn, A., Luccini, A. & Maxwell, K. (2009). InnoTube: A Video-based Connection Tool Supporting Collaborative Innovation. Interactive Learning Environments, 17, 3, 205-220. (http://doi.org/bw48vv).

Area, M. (2005). Los criterios de calidad en el diseño y desarrollo de materiales didácticos para la www. Comunicación y Pedagogía, 204, 66-72.

Bangor, A., Kortum, P.T. & Miller, J.T. (2008). An Empirical Evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574-594.

Bangor, A., Kortum, P.T. & Miller, J.T. (2009). Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. Journal of Usability Studies. 4 (3), 114-123.

Bargas-Avila, J.A., Lötscher, J., Orsini, S. & Opwis, K. (2009). Intranet Satisfaction Questionnaire: Development and Validation of a Questionnaire to Measure User Satisfaction with the Intranet. Computers in Human Behavior, 25, 1241-1250. (http://doi.org/b39md8).

Bartolomé, A. (1997). Uso interactivo del vídeo. In J. Ferrés & P. Marques (Coord.), Comunicación educativa y nuevas tecnologías. Barcelona: Praxis. 320 (1-13).

Bartolomé, A. (2003). Vídeo digital. Comunicar, 21, 39-47. (http://goo.gl/MDcYOt) (29-04-2014).

Bevan, N. (1997). Quality and Usability: A New Framework. In Van Veenendaal, E. & McMullan, J. (Eds.), Achieving Software Product Quality. Netherlands: Tutein Nolthenius, 25-34.

Blázquez, J.J., Chamizo, J., Cano, E. & Gutiérrez, S. (2013). Calidad de vida universitaria: Identificación de los principales indicadores de satisfacción estudiantil. Revista de Educación, 362, 458-484. (http://doi.org/tp5).

Cabero, J. (2004). El diseño de vídeos didácticos. In J. Salinas, J. Cabero & I. Aguaded (Coords.), Tecnologías para la educación: diseño, producción y evaluación de medios para la formación docente (pp. 141-156). Madrid: Alianza.

Castells, M. (2012). Redes de indignación y esperanza. Madrid: Alianza.

Cebrián-de-la-Serna, M. & Bergman, M. (2014). Formative Assessment with eRubrics: an Approach to the State of the Art. Revista de Docencia Universitaria. 12, 1, 23-29. (http://goo.gl/A4cpaa).

Cebrián-de-la-Serna, M. & Monedero, J.J. (2014). Evolución en el diseño y funcionalidad de las rúbricas: desde las rúbricas «cuadradas» a las erúbricas federadas. Revista de Docencia Universitaria, 12, 1, 81-98. (http://goo.gl/xNhnqR).

Cebrián-de-la-Serna, M. (1994). Los vídeos didácticos: claves para su producción y evaluación. Pixel Bit. Sevilla, 1, 31-42. (http://goo.gl/w3Ayi6).

Colasante, M. (2011). Using Video Annotation to Reflect on and Evaluate Physical Education Pre-service Teaching Practice. Australasian Journal of Educational Technology, 27(1), 66-88. (http://goo.gl/f2HfZB).

Díaz-Arias, R. (2009). El vídeo en el ciberespacio: usos y lenguaje. Comunicar, 33, 17, 63-71. (http://doi.org/ftt5qr).

Etscheidt, S. & Curran, C. (2012). Promoting Reflection in Teacher Preparation Programs: A Multilevel Model. Teacher Education and Special Education, 35(1), 7-26. (http://doi.org/dk53x2).

Ferrés, J. (1992). Vídeo y educación. Barcelona: Paidós.

García-Barriocanal, E., Sicilia, M.A., Sánchez-Alonso, S. & Lytras, M. (2011). Semantic Annotation of Video Fragments as Learning Objects: A Case Study with YouTube Videos and the Gene Ontology. Interactive Learning Environments, 19, 1, 25-44. (http://doi.org/b2pkpf).

García-Valcárcel, A. (2008). El hipervídeo y su potencialidad pedagógica. Revista Latinoamericana de Tecnología Educativa (RELATEC), 7, 2, 69-79.

Giroux, H.A. (2001). Cultura, política y práctica educativa. Barcelona: Graó.

Guo, P., Kim, H. & Rubin, R. (2014). How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos. Proceedings of the First ACM Conference on Learning @ Scale (pp. 41-50). March 4-5, Atlanta, Georgia, USA. (http://doi.org/tp6).

Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. New York, NY: Routledge.

Hosack, B. (2010). VideoANT: Extending Online Video Annotation Beyond Content Delivery. TechTrends, 54, 3, 45-49.

Ingram, J. (2014). Supporting Student Teachers in Developing and Applying Professional Knowledge with Videoed Events. European Journal of Teacher Education, 37(1), 51-62. (http://doi.org/tp7).

Kirakowski, J. & Corbett, M. (1988). Measuring User Satisfaction. 4th Conference of the British Computer Society Human-Computer Interaction Specialist Group, 329-338.

McNamara, N. & Kirakowski, J. (2011). Measuring User-satisfaction with Electronic Consumer Products: The Consumer Products Questionnaire. International Journal of Human-Computer Studies, 69, 375-386. (http://doi.org/d5xzqn).

Miller, M. & Carney, J. (2009). Lost in Translation: Using Video Annotation Software to Examine How a Clinical Supervisor Interprets and Applies a State-mandated Teacher Assessment Instrument. The Teacher Educator, 44(4), 217-231, (http://doi.org/dhj2bv).

Molich, R., Ede, M.R., Kaasgaard, K. & Karyukin, B. (2004). Comparative Usability Evaluation. Behaviour & Information Technology, 23(1), 65-74.

Orland-Barak, L. & Rachamim, M. (2009). Simultaneous Reflections by Video in a Second-order Action Research-mentoring Model: Lessons for the Mentor and the Mentee. Reflective Practice, 10, 5, 601-613. (http://doi.org/db82mr).

Picci, P., Calvani, A. & Bonaiuti, G. (2012). The Use of Digital Video Annotation in Teacher Training: The Teachers’ Perspective. Procedia, Social and Behavioral Sciences, 69, 600-613. (http://doi.org/tp8).

Rich, P. & Trip, T. (2011). Ten Essential Questions Educators Should Ask When Using Video Annotation Tools. TechTrends, 55, 6, 16-24.

Rich, P. J., & Hannafin, M. (2009a). Scaffolded Video Self-analysis: Discrepancies between Preservice Teachers’ Perceived and Actual Instructional Decisions. Journal of Computing in Higher Education, 21(2), 128-145.

Rich, P.J. & Hannafin, M. (2009b). Video Annotation Tools. Technologies to Scaffold, Structure, and Transform Teacher Reflection. Journal of Teacher Education, 60, 1, 52-67. (http://doi.org/dzdv4n).

Salinas, J. (2013). Audio y vídeo Podcast para el aprendizaje de lenguas extranjeras en la formación docente inicial. IV Jornadas Internacionales de Campus Virtuales. 14-15 Febrero. Universidad de las Islas Baleares. (http://goo.gl/EHq2Jo) (29-04-2014).

Sauro, J. (2011). Measuring Usability with the System Usability Scale (SUS) (http://goo.gl/63krpp) (29-04-2014).

Schön, D.A. (1998). El profesional reflexivo: ¿cómo piensan los profesionales cuando actúan? Barcelona: Paidós.

Vázquez-Cano, E. (2013). El videoartículo: nuevo formato de divulgación en revistas científicas y su integración en MOOC. Comunicar, 41, XXI, 83-91. (http://doi.org/tnk).

Yang, S., Zhang, J., Su, A. & Tsai, J. (2011). A Collaborative Multimedia Annotation Tool for Enhancing Knowledge Sharing in CSCL. Interactive Learning Environments, 19, 1, 45-62. (http://doi.org/cdtxd7).