Technology-Enhanced Learning (TEL) is the research field that aims at “improving the quality and outcomes of learning, in all those varied circumstances where technology plays a significant supportive role” (Goodyear & Retalis, 2010, p. 7). The TEL research field is intrinsically interdisciplinary, nurtured by different technological and socio-cultural scientific areas (Balacheff et al., 2009).
Research in TEL covers a wide range of educational technologies, learning theories and pedagogical approaches that address the needs of different educational stakeholders (students, teachers, institutions, policy makers, etc.), thus giving rise to multiple research sub-fields: computer-supported collaborative learning, mobile learning, ubiquitous learning, and learning analytics, among others (Duval et al., 2017; Rubia-Avi & Guitert-Catasús, 2014). As a consequence of this interdisciplinarity and wide scope, radically different research worldviews and methodological approaches can be found in TEL research. In most cases, however, research data in TEL includes not only ethnographic observations, increasingly coming from digital sources, but also data coming from the supporting technologies that are the focus of TEL interventions.
This complexity in the data gathering and analysis techniques in TEL research is alleviated by the use of technological tools. The technological support for TEL research not only refers to increasingly powerful and open software packages for statistical analysis, but also to tools specifically aimed at qualitative analysis (Duca & Metzler, 2019; Hai-Jew, 2015). The role of technological support for TEL research becomes even more relevant in the case of mixed-methods research designs (Greene, 2007).
In such designs, TEL researchers need to incorporate analysis techniques that enable the effective and efficient triangulation of data coming from both qualitative and quantitative sources. In many cases, such analysis techniques would not be feasible without the support provided by technological tools (Hesse-Biber & Griffin, 2013; Hai-Jew, 2015).
However, this increasing reliance on technological tools for research calls for a deeper reflection on how research processes in TEL, especially those based on mixed-methods designs, are influenced by the choice and particular usage of these technological aids. Therefore, this paper explores the above-mentioned concerns by addressing the following research question: What are the advantages, disadvantages, and limitations of employing research-supporting technologies in a complex mixed-methods TEL research design?
In order to answer this research question, the paper explores a particular long-term mixed-methods TEL research design in which the authors are currently involved. This research design aims at understanding the adoption (or lack thereof) of changes in teaching practice implied by so-called Learning Design (LD). The LD research community focuses on the development of tools and methods aimed at supporting teachers in designing educational interventions with technology (Lockyer et al., 2009; Persico et al., 2013). Although LD research is at the core of TEL, the adoption of LD tools and methods by teachers is still very limited and remains a research challenge (Mor et al., 2013; Hernández-Leo et al., 2018). Moreover, research on LD adoption tends to be limited to short-term studies and is mainly focused on single-tool evaluations, where the issue of adoption is not thoroughly explored (Dagnino et al., 2018).
Understanding the adoption of the changes (or lack thereof) in teaching practice implied by LD goes beyond the analysis of a specific type of technological tool, as in (Katsamani & Retalis, 2013). LD adoption deeply affects the very role of teachers (Laurillard, 2012) in the complex path towards Information and Communications Technologies (ICT) integration in teaching practice which requires, among other factors: understanding and overcoming contextual, cognitive, and affective obstacles faced by teachers (Ertmer & Ottenbreit-Leftwich, 2013), and adapting Teacher Professional Development (TPD) approaches (Asensio-Pérez et al., 2017) to LD principles.
Exploring and understanding these context-dependent factors calls for an interpretive research stance (Orlikowski & Baroudi, 1991), which can be adequately supported by a mixed-methods research design following an explanatory strategy (Creswell, 2014). Indeed, mixed-methods research designs have previously been employed in similar research projects aimed at understanding teachers' decisions to adopt new technologies (Sugar et al., 2004), and featured in the ICT and Education research track of the 2020 international symposium on mixed methods in social research and education (https://bit.ly/3fDbtSf). For these reasons, the authors believe that a mixed-methods research design on this topic is a good example for exploring the role of research-supporting technologies. The research design carried out by the authors addresses two categories of stakeholders (teachers and experts in the field) and makes use of different research methods, namely: Systematic Literature Review, Case Study and Delphi Study.
The paper presents the methodological decisions made by the authors in the explored research design and focuses on the role played by supporting technologies in the associated research process, without depending on a single theoretical framework for the study. From the beginning of the research design, it was clear that the authors would need to handle a considerable amount of data and to approach different stakeholders. For these reasons, technologies were extensively used to support the various phases of this research, as will be illustrated in the paper.
Given the research question of this paper, it should be made clear that we do not aim to present here the results of the specific mixed-methods research; rather, we reflect on the implications of using technologies in the overall research design. In the following section we present the research design, as well as the technologies that were employed to support the research process. Then, lessons learned regarding the role of research-supporting technologies are discussed, concluding with reflections for those researchers who need to implement mixed-methods designs supported by technologies.
Mixed-methods research design and supporting technologies
As explained in the introduction, the authors are involved in a research process aimed at understanding the adoption of the changes (or lack thereof) in teaching practice implied by LD. Thus, the authors are progressing along a mixed-methods research design, making intensive use of supporting technologies for research data gathering and analysis. The following subsections introduce the main characteristics of the ongoing research design, highlighting how research-supporting tools were employed. This section sets the foundation and context for the discussion presented in the third section.
The overall research design
The issue of the adoption of LD methods and tools is complex and multifaceted, while research in the field often remains parcelled. The literature reports attempts to understand the factors limiting adoption, but these remain very focused on single methods and tools, taking into account teachers' specific experiences or expert opinions (Neumann et al., 2010).
Asensio-Pérez et al. (2017), in an effort to outline factors affecting adoption, identify three main points for analysis: 1) Characteristics of LD tools, e.g., tools should be flexible, support all the phases of the design process and support teachers as members of designer communities (Hernández-Leo et al., 2013); 2) Teachers’ mindset (Dimitriadis & Goodyear, 2013), in the sense that teachers should be equipped with an LD mindset adequate to their contexts; 3) Adequate training (Bennett et al., 2017), referring to the need for an appropriate training to make teachers drivers of innovation.
This initial analysis, conducted by the authors at the beginning of the research project, highlighted the need to involve actors belonging to different categories (researchers and teachers) and called for a comprehensive approach aimed at integrating results. Moreover, it was evident to the authors that different kinds of data could probably enrich the comprehension of the phenomenon: if a systematic analysis of the literature could provide an initial framework, the in-depth analysis of a real case study could represent a useful source for exploring the issue from both quantitative and qualitative viewpoints.
Finally, directly involving experts in a Delphi study might complement the findings, providing both quantitative ratings and possibly explanatory qualitative views.
The above-mentioned considerations led the authors to design a mixed-methods research. Figure 1 provides an overview of the research design.
The diagram shows the three main methods adopted (SLR, Case Study and Delphi Study), the stakeholders/participants involved at each step (researchers, teachers, and experts, respectively), as well as the research data. Moreover, the diagram shows the technological tools used for data collection, management, processing, analysis and visualization. In particular, we borrowed a classification of possible research-supporting technologies from Hai-Jew (2015), who distinguishes among:
Technological tools for secondary information collection, such as (in our case) online bibliographic databases for literature reviews.
Technological tools for primary information collection. These can be different in nature, but in our case, we used: tools for the delivery of online questionnaires (Limesurvey (https://bit.ly/3bn8MRp)), video-conferencing systems (Microsoft Skype (https://bit.ly/2WkimjA)) to manage online interviews, text processors for writing diaries, and TEL systems (in our case it was an LD system, called the Pedagogical Planner, PP (Bottino et al., 2011; Pozzi et al., 2020)) to track participants' actions and store participants' artefacts (in our case learning designs).
Technological tools for data management. In our case we used Microsoft Excel (https://bit.ly/2yE8DvL) and MAXQDA (https://bit.ly/2WKQeFs).
Technological tools for data processing, visualization and analysis. In our case, it was mainly SPSS (https://ibm.co/2WlQs6L).
Overall, the authors adopted a design that may be best understood as an “explanatory sequential design”, according to Creswell (2014), in which qualitative methods aim to elaborate on quantitative results obtained in an initial phase. In other words, the ‘explanatory sequential design’ envisages a ‘quantitative-qualitative-interpretation’ sequence.
As shown in Figure 2, the authors followed this sequence: they adopted a quantitative research method first (the Systematic Literature Review) and then proceeded with two more qualitative methods, i.e., the Case Study and the Delphi Study.
Nonetheless, the authors partially deviated from the original model, as they actually implemented methods that were inherently mixed: the Systematic Literature Review was mostly quantitative but also contained qualitative aspects, while the Case Study and the Delphi Study were mostly qualitative but with some quantitative components, as will be further described below.
Arguably, this research design might also have been classified as an “embedded design”, but the “explanatory sequential design” category fits better, since the overall research was oriented towards detecting generalisable teachers’ needs and barriers to LD adoption. Below, a more in-depth description of each of the methods is provided.
The Systematic Literature Review (SLR)
The review was carried out in 2017 in accordance with the guidelines proposed by Kitchenham and Charters (2007) for SLRs, and covered all the phases: planning, conducting and reporting. As suggested, the authors established a review protocol including: the research questions driving the SLR; the search strategy for retrieving primary studies (including search strings and databases to be searched); the study selection criteria (inclusion and exclusion criteria) and the related procedures; and the data extraction and synthesis procedure. The search was carried out in five academic databases frequently used by the TEL community (ACM Digital Library, IEEE Xplore, Scopus, SpringerLink, Web of Science). A total of 2,408 records were initially retrieved, including journal and conference papers, and book chapters.
A first selection round was carried out by reading titles and abstracts, checking the relevance of each contribution to the topics explored and against the inclusion criteria. 26 papers out of 423 passed this round and were selected. These works were read in full, and 20 papers finally met the inclusion criteria. These works were analysed following both inductive and deductive strategies. Papers were read and tagged: some key themes were already acknowledged and discussed in the literature (e.g., the issue of flexibility of tools in relation to educational contexts of learning theories) and were used to set up pre-existing categories for tagging. Others (e.g., teachers’ motivation) emerged from the analysis and were added to the list of themes. The review provided a systematic overview of the knowledge developed in the LD field, focusing especially on a) teachers' needs for LD tools; and b) the main barriers to the adoption of LD tools and design practices. The results of the complete SLR are available in Dagnino et al. (2018). These results informed subsequent phases of the research design.
The case study
The authors, as part of the overall research design, set up and carried out a single instrumental case study (Stake, 2005) aimed at gaining a deeper understanding of the barriers to the adoption of LD methods and tools identified during the SLR, based on the opinions and visions of in-service teachers. This specific Case Study was chosen on a ‘convenience’ basis, since the school involved had asked some of the authors for training on TEL. The context was a Vocational Training School for bakers and graphic designers located close to Milan (Italy). The school is small, with eight groups of students (one for each year of the two areas of study: bakery and graphics). The teaching staff is composed of trainers and professionals (in charge of teaching the subjects specific to the professions), tutors and support teachers (helping students with cognitive disabilities and special needs). Trainers have different backgrounds: some are working professionals without teacher training, while others have a background in education or pedagogy. Such heterogeneity was acknowledged by the Principal, who contacted the authors asking for specific training. Teachers were enrolled in a training course on TEL that started in spring 2017 and ran until September 2019. LD was one of the topics taught during the first sessions of the training (May-June 2017). Participants received lessons on the theoretical foundations of LD and also had the opportunity to use a particular LD tool (the Pedagogical Planner). There was a follow-up and recap in November 2017 and, afterwards, teachers were involved in a design task assignment that they carried out between December 2017 and February 2018.
Since the training covered three school years, the cohort of involved teachers partially changed. The initial cohort was composed of 12 teachers, of whom only five followed the whole training; one additional teacher joined in November 2017.
According to Stake (2005), in order to reach a comprehensive understanding of a phenomenon in real life, researchers should collect and analyse varied sources of data, so as to obtain multiple perspectives. Therefore, in this study teachers' use of the proposed LD tool and their opinions were monitored through: questionnaires to teachers (at the beginning, in the middle and at the end of the training); data tracked by the Pedagogical Planner during usage, as well as artefacts produced by teachers and stored by the tool during the learning design process; reflective diaries, written by teachers during the learning design process; and interviews with teachers, close to the end of the training.
The Delphi Study
The Delphi Study is a group technique designed to obtain “the most reliable consensus of opinion of a group of experts” (Dalkey & Helmer, 1963: 1). The authors carried out a Delphi Study, as part of their research design on LD adoption, with the goal of obtaining a reliable opinion from a panel of experts (Landeta, 2006) in the field of LD research. The authors’ ultimate motivation was to confirm or reject the findings obtained during the SLR and Case Study phases of the research design in relation to the barriers to LD adoption.
In a Delphi Study, experts are usually consulted individually and separately several times. The answers are analysed by the study proponents and feedback regarding the position of the whole group is returned to the participants, so that they can reconsider their initial opinions in view of the results from the previous iterations. While the first round is usually exploratory and based on open questions, the questions of the final rounds are formulated so that a statistical analysis of the results can be carried out at group level.
The Delphi Study carried out by the authors involved two rounds. Thanks to the availability of the SLR results (see subsection 2.1.1), the authors had in advance a clear literature base from which they developed the questionnaire for the first round. In the second round, the experts were contacted again to answer the same questions in the light of the feedback regarding the position of the group during the first round. Questionnaires included both closed and open-ended questions. Experts were required to: 1) Express their opinion about the relative importance of three categories of factors (teachers’ needs, extrinsic and intrinsic barriers) with respect to the adoption of LD tools and methods; 2) Express their opinion about the extent to which specific needs and barriers affect the adoption of LD tools and methods.
In the second round, experts were additionally asked to propose possible solutions for overcoming the identified barriers.
The experts involved were recognized LD researchers, who had authored publications in peer-reviewed journals on the topic and who were active in the LD community through participation in conferences and expert networks. The experts were mainly researchers working in European institutions, with backgrounds in education, engineering, computer science, or a combination of these: 25 experts were involved in the first round (20 filled in the questionnaire); 20 experts participated in the second round (18 filled in the questionnaire).
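To illustrate the group-level statistical analysis typically applied to the closed questions of Delphi final rounds, the following sketch computes the median and interquartile range (IQR) per item, a common way of gauging panel consensus on Likert-scale ratings. The item names and ratings below are invented for illustration; they are not the study's actual data, and the study's concrete analysis (run in SPSS) may have used different measures.

```python
# Sketch: group-level statistics for a Delphi round (illustrative only).
# A small IQR on a Likert scale suggests the panel is converging on an item.
from statistics import median, quantiles

def group_feedback(ratings):
    """Return the median and interquartile range for one questionnaire item."""
    q1, _, q3 = quantiles(ratings, n=4)  # quartiles of the panel's ratings
    return {"median": median(ratings), "iqr": q3 - q1}

# Hypothetical 5-point Likert ratings from a panel of experts.
item_ratings = {
    "teachers_mindset": [4, 5, 4, 4, 3, 5, 4],   # fairly high agreement
    "tool_flexibility": [2, 5, 3, 1, 4, 2, 5],   # opinions still dispersed
}

for item, ratings in item_ratings.items():
    print(item, group_feedback(ratings))
```

Feedback of this kind (item medians and dispersion) is what would be returned to panellists between rounds so they can reconsider their positions.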
Use of supporting technologies along the research design
As Hesse-Biber and Griffin (2013) highlighted, technologies may bring considerable advantages to mixed-methods research. They can be used in various phases and for different purposes, ranging from statistical processing in quantitative methods to transcription and coding in qualitative methods; their potential can also be exploited for communication and data interpretation, to mention just a few uses. Thus, technology was extensively adopted in this research and used in its different steps.
To manage the data tracked and stored by the Pedagogical Planner, Excel was again used, as it provided an easy way to handle the data through filtering and queries. Moreover, the interviews with the teachers involved in the Case Study were carried out online, by means of Skype calls. The software in this case was chosen because the teachers to be interviewed said they were familiar with it. During the calls, the authors shared their screen with the interviewee, showing from time to time selected keywords (presented in slides) that served to introduce the various topics covered and to trigger the discussion about key issues. In terms of technological tools to manage data, the transcriptions of the interviews were tagged by two independent coders using MAXQDA, software that allows the analysis of different kinds of data (such as texts, images, audio/video files, etc.) and the triangulation of data coming from different sources. The same software was also used to manage other qualitative data coming from the teachers’ reflective diaries, as well as from the open-ended questions in both the teachers’ questionnaires and the Delphi Study.
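The filtering and querying performed on the tool's tracked data can be sketched in a few lines of code. The event fields and values below are assumptions made for illustration, not the Pedagogical Planner's actual log format, and the study itself performed this kind of operation with Excel filters and queries rather than a script.

```python
# Sketch: filtering tracked events and counting design actions per teacher.
# Event structure (user, action, when) is assumed for illustration only.
from datetime import date

events = [
    {"user": "t01", "action": "edit_design", "when": date(2017, 12, 4)},
    {"user": "t02", "action": "login",       "when": date(2018, 1, 9)},
    {"user": "t01", "action": "save_design", "when": date(2018, 2, 1)},
]

# Filter: design-related actions during the design task assignment window
# (December 2017 - February 2018, as described in the Case Study).
window = [e for e in events
          if e["action"].endswith("_design")
          and date(2017, 12, 1) <= e["when"] <= date(2018, 2, 28)]

# Simple "query": number of design actions per teacher.
per_teacher = {}
for e in window:
    per_teacher[e["user"]] = per_teacher.get(e["user"], 0) + 1
print(per_teacher)  # counts of design actions by user within the window
```

The same filter-then-aggregate pattern underlies the spreadsheet operations described above, whatever tool carries them out.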
Finally, the main technological tool adopted for data processing, analysis and visualisation was SPSS, which was used in this research especially to carry out descriptive and inferential statistical analyses.
Results and discussion
The use of technologies to support the study had an impact both at the level of the single methods and on the whole process. In the following section we present the main lessons learnt and organize them according to Hai-Jew’s (2015) classification of technological tools, so that we can reflect on the main implications of studying technology through technology.
In terms of tools for “secondary data collection”, in our SLR technologies undoubtedly allowed for richer and more reliable results, since the authors were able to search for exactly what they were interested in within the mass of literature stored in online databases; additionally, the literature of interest was in almost all cases directly accessible. Searching freely through a browser also surfaced a large number of papers and grey literature (such as project reports), whose existence might otherwise have remained hidden and which might have been difficult to retrieve by other means.
On the other hand, the authors had to face the challenge of managing the large amount of data returned by the databases; this brings us to another category of tools in Hai-Jew’s (2015) classification, i.e., technologies for data management. As already mentioned, in our research we often relied on Excel for this phase, which usually turned out to be easy to use. Nonetheless, as far as the SLR was concerned, some technical capabilities were required to automatically merge the datasets (which was done through a software script). Thus, in this particular case, data management required the authors to ask for technical support from colleagues who were not originally included in the team. The other software used for data management and processing was MAXQDA. As Fielding (2012) underlined, software tools like MAXQDA enable the integration of qualitative data with quantitative data, matching, for example, the interview analysis with information from rating scales or survey responses. In other terms, these kinds of software allow researchers to build a bridge between the qualitative and the quantitative dimensions and support comparisons of different data sets, thus paving the way for data triangulation and ultimately providing insights for new research directions. In our research, this tool was used to manage the data coming from the questionnaires, interviews and reflective diaries of the teachers in the case study, as well as the data coming from the Delphi Study. The software also sped up the coding process, since two coders were able to tag text and have their codes recorded. Consequently, MAXQDA can be considered a technology in between the two categories of “data management” and “data processing, visualization and analysis”. Moreover, even if it is not as common as Excel, it combines good usability with quite advanced data processing features.
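The dataset-merging step mentioned above, where records exported from several bibliographic databases are combined and de-duplicated, can be sketched as follows. The field names and sample records are assumptions for illustration; the actual script and the databases' export formats may well have differed.

```python
# Sketch: merging bibliographic exports and removing duplicate records.
# A record is considered a duplicate if it shares a DOI with an earlier
# record, or (when no DOI is available) a normalised title.
def normalise(title):
    """Normalise a title so small formatting differences do not block matching."""
    return "".join(ch.lower() for ch in title if ch.isalnum())

def merge_records(*datasets):
    """Merge exports, keeping one record per DOI (or per normalised title)."""
    seen, merged = set(), []
    for dataset in datasets:
        for rec in dataset:
            key = rec.get("doi") or normalise(rec["title"])
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

# Hypothetical exports from two databases: the same paper appears in both.
scopus = [{"doi": "10.1/x", "title": "Learning Design Tools"}]
wos = [{"doi": "10.1/x", "title": "Learning design tools"},
       {"doi": None, "title": "Barriers to LD Adoption"}]
print(len(merge_records(scopus, wos)))  # duplicates collapse to unique records
```

In practice, title-based matching is fragile (punctuation, subtitles, translations), which is one reason this step called for dedicated technical support rather than manual spreadsheet work.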
In terms of “primary data collection” tools, as mentioned above, we used different technologies.
Let us start with Limesurvey, which was used to collect data from teachers in the Case Study and from the experts during the Delphi Study. Especially during the Delphi, the software turned out to be very useful, as it helped manage aspects that are usually considered time-consuming in these kinds of studies, such as managing communications with the panellists, administering multiple survey rounds, and gathering and organizing participants’ responses. In our case, Limesurvey relieved the authors from tasks like sending invitations and questionnaires or registering the answers in a database. As Cole et al. (2013) highlighted, the e-Delphi is also effective and efficient in overcoming geographical barriers, saving time and money, and this was also true in our experience.
Other advantages deriving from the online nature of the study were the perception of anonymity, as Limesurvey automatically assigned a code to each participant, and the accuracy of data collection, since answers were registered directly by the system (Roztocki, 2001). On the other hand, the online Delphi Study was not free of challenges. Emails sent from the system were sometimes blocked by the spam filters of the recipients' mail providers and, in any case, risked being disregarded more than a personal invitation would be. These automatic features created rather ‘impersonal’ interactions between panellists and authors. Even more importantly, we cannot exclude that some Hawthorne effect occurred: the open-ended answers provided in Limesurvey by the panellists were often very short and sometimes difficult to interpret. The impersonality of the situation might have affected participants' contributions. This particular aspect deserves further research in the future, since, to the best of our knowledge, very limited attention has been paid so far to the way participants change their behaviour when observation and data collection happen through technology. Finally, some issues were also raised by respondents regarding Limesurvey at the technical level, such as the impossibility of modifying previously answered questions.
Another tool used for primary data collection was Skype. In this case, the authors took advantage of a technology that provided a synchronous communication channel for the interviews with teachers in the Case Study. One of the main advantages was the possibility of organizing the calls at the interviewees’ convenience, while the authors did not need to travel to the teachers’ workplace. Moreover, thanks to the use of a videoconferencing system, the authors were able to show visual prompts and to collect non-verbal cues, even though the medium might alter perceptions. Additionally, the possibility of watching the interview recordings several times allowed an in-depth analysis. On rare occasions, technical problems (mainly due to the low quality of the Internet connection) were annoying.
The last primary data collection tool employed was the Pedagogical Planner, used by teachers to create their learning designs during the case study. This gives us the opportunity to make a general reflection about the role of technology in the research field itself, i.e., Technology-Enhanced Learning (TEL). Since the learning design was performed through the Pedagogical Planner tool (which is itself digital), this use of technology allowed the authors to collect and observe the concrete artefacts produced by the teachers, and thus to analyse the decisions taken during the design process. Had the technology not been available, only paper-based artefacts would have been produced and the authors would have been able to analyse only the final result of the process. Instead, in this case, it was possible to observe also half-baked artefacts, thus allowing a more in-depth understanding of the teachers' overall design process. This is true for any process in the TEL research field, where digital platforms (such as Learning Management Systems) can usually provide learning analytics, thus shedding light on students’ learning processes.
To conclude, we would like to propose some more transversal reflections about the methodological decisions taken in the overall research design. Firstly, the possibility of carrying out an SLR as a first step of the research had a considerable impact on the whole research process, since it provided a reliable starting point that allowed the authors to skip a preliminary phase of barrier identification that would otherwise have had to be carried out by interviewing teachers directly, possibly teachers with existing experience in LD. Clearly, this represented a methodological decision that sped up the process but, at the same time, influenced, and maybe ‘biased’, the progression of the research. Moreover, the teachers involved in the case study were quite a small sample with varied experience in LD. This might have affected and biased some of the results. However, by integrating both the SLR and the Case Study in the same research, and then enriching the findings with the Delphi Study by looking for confirmation or rejection of the initial results, the authors could find a balance and overcome the limits of any single method.
The use of technologies in the presented mixed-methods research design brought several advantages, some of them already pointed out by Hesse-Biber and Griffin (2013) or Hai-Jew (2015). Overall, the support of technologies affected the research in terms of complexity and articulation: the affordances of technologies and their capacity to manage large amounts of data made it possible to include several methods along multiple iterations in a relatively small research project, something that was almost impossible in earlier ‘traditional’ research. The very decision to adopt an overall mixed-methods approach, in which the single methods are themselves internally mixed, was possible mostly because today's technologies make it possible to manage quantitative and qualitative data in a relatively easy and fast way, to reach different and distant stakeholders, and to easily integrate data coming from different sources (Fielding, 2012).
On the other hand, there are serious implications for research and the related results. First, online research implies the lack of a direct relationship with participants, which often generates low participation rates or inaccurate responses. Moreover, online surveys often limit participants' possibility of asking for clarifications about questions, thus increasing the risk of misunderstandings and biased responses (Roztocki, 2001). Therefore, researchers should find the right balance between automation and the relationship with participants. This element should not be overlooked during the design phase of research projects and, in any case, the results should always be analysed taking these elements into due consideration.
Another element is that the complexity of technology-supported research continuously calls for a redefinition of the researcher’s required competencies, at both the methodological and the technological level. As Hesse-Biber and Griffin (2013: 3) highlighted: “Accessing new modes of data collection may challenge a researcher to come out of his or her methods ‘comfort zone’ and to develop new skills in both data collection and analysis”. Being competent in all the possible methods, along with the related technological tools, is almost impossible for a single researcher and would instead call for research projects conducted by multi-disciplinary teams. Obviously, this is not always possible, often due to funding restrictions. Moreover, in many contexts this is unfortunately also prevented by a competitive view of research, where researchers’ evaluation rewards individual endeavour more than group work. The result can be that a very complex research project, if conducted by a single researcher instead of a team, contains methodological or technology-driven mistakes that might seriously affect the results.