Education, Big Data and Artificial Intelligence: Mixed methods in digital platforms


Abstract

Digital technology has provided users with new connections that have reset our understanding of social architectures. In reaction to Artificial Intelligence (AI) and Big Data, the educational field has rearranged its structure to consider human and non-human stakeholders and their actions on digital platforms. In light of this increasingly complex scenario, this proposal aims to present definitions and discussions about AI and Big Data drawn from the academic field or published by international organizations. The study of AI and Big Data goes beyond the search for mere computational power and focuses instead upon less difficult (yet perhaps more complex) areas of study: the social impacts in Education. This research suggests an analysis of education through 21st century skills and the impact of AI development in the age of platforms, organized around three methodological considerations: research, application and evaluation. To accomplish the research, we relied upon systematic reviews, bibliographic research and quality analyses conducted within case studies to compose a position paper that sheds light on how AI and Big Data work and on what level they can be applied in the field of education. Our goal is to offer a triangular analysis under a multimodal approach, taking qualitative and quantitative procedures into consideration, to better understand the interface between education and new technological prospects.

Keywords

Artificial intelligence, big data, education, mixed methods, multimodality, digital technology, platform society, digital connection

Palabras clave

Inteligencia artificial, macrodatos, educación, metodologías mixtas, multimodalidad, tecnología digital, sociedad de las pantallas, conexión digital

Resumen

La tecnología digital ha traído características de conexión que restablecen nuestra comprensión de arquitecturas sociales. Sobre la Inteligencia Artificial (IA) y Big Data, el campo educativo reorganiza su estructura para considerar a los actores humanos y no humanos y sus acciones en plataformas digitales. En este escenario cada vez más complejo, esta propuesta tiene como objetivo presentar definiciones y debates sobre IA y Big Data de naturaleza académica o publicados por organizaciones internacionales. El estudio de IA y Big Data puede ir más allá de la búsqueda de poder computacional / lógico y entrar en áreas menos difíciles (y quizás más complejas) del campo científico para responder a sus impactos sociales en la educación. Esta investigación sugiere un análisis de la educación a través de las habilidades del siglo XXI y los impactos del desarrollo de IA en la era de las plataformas, pasando por tres ejes de grupos metodológicos: investigación, aplicación y evaluación. Para llevar a cabo la investigación, confiamos en revisiones sistemáticas, investigaciones bibliográficas y análisis de calidad de estudios de casos para componer un documento de posición que arroje luz sobre cómo funcionan la IA y el Big Data y en qué nivel se pueden aplicar en el campo de la educación. Nuestro objetivo es ofrecer un análisis triangular bajo un enfoque multimodal para comprender mejor la interfaz entre la educación y las nuevas perspectivas tecnológicas.


Introduction

In the early 20th century, connectivity studies sought to understand how socio-technical systems were driven. The aim was to make sense of the communication materiality embedded in the process and the diverse roles it played. Using the framework of communication materiality, scholars argued that humans have emerged from a physical world to inhabit a symbolic atmosphere where everything comprises material content (Habermas, 1985).

Reflecting on education not only entails considering the interface between teacher and student; it also involves understanding that the terms assigned to this process carry meanings that can mask technology and the collective construction of knowledge. Technology is frequently associated with both solving and creating problems within education. This phenomenon is described by Mick and Fournier (1998) as the “paradox of technology”, which can be simultaneously emancipating and enslaving.

One idea on how to define technology is the concept of “technium”, a self-reinforcing ecosystem of artifact creation, tools and ideas that embraces all technologies. According to this idea, every technology depends on countless preceding advances. Kelly (2010) argues that technology predates humankind, suggesting that the technium involves constructs such as complexity, diversity, specialization, mutualism, ubiquity, sentience and exotropy. Yet, Kelly (2010) also notes that technology can be seen more narrowly as scientific knowledge used in practical ways in industry, for example in designing new machines.

This paper addresses developments in Artificial Intelligence (AI) and Big Data and their intersection with Education. It illustrates this interface through the platform society and its capacity to promote 21st century skills. Mixed research methods, together with their application and evaluation, are presented. These include netnography, Competency Based Education (CBE), 4-dimensional modelling, compass models and multimodality, taking into consideration the triangular approach as a complex and sophisticated method for performing analysis in this intricate environment.

State of the art: AI and Big Data

AI has been a topic on the radar of theorists and experts since the 1950s and, to this day, no agreement has been reached as to its definition. Studies into AI began in 1956, when John McCarthy used the term at a seminar at Dartmouth College in the United States. Prior to McCarthy, starting in 1951, research studies on genetics within the field of biological sciences had also considered AI. Earlier still, in 1950, Alan Turing published the paper “Computing Machinery and Intelligence”, where he presented the “Imitation Game”, also known as the “Turing Test”: a set of questions aimed at assessing whether the respondent is a human or a machine.

Russell and Norvig (1995) explored AI in four categories: systems that think like humans; systems that act like humans; systems that think rationally; and systems that act rationally. Throughout the history of AI research, each of these four categories has attracted theorists and followers, with tensions arising at their edges between studies centered on “Humanity” and those centered on “Rationality”.

Publications from 2014 onwards have reflected these distinctions in how AI can be applied. The International Telecommunication Union (ITU) released the reports “Artificial Intelligence for Development Series” (2017) and “AI for Good Summit” (2017 and 2018), in which it described the development of the concept of a system that does not replace human intelligence. Similarly, the Organization for Economic Co-operation and Development (OECD), in partnership with International Business Machines (IBM), in the document “AI: Intelligent machines, smart policies” (2018), positions AI as a structure that increases the potential of human intelligence.

Floridi (2014), on the other hand, discussed the applications of AI, arguing that successful systems are those with an environment molded around them. In other words, systems that respond to specific purposes perform best: the author gives the example of a robot that may cut grass well but is unlikely to perform the role of a refrigerator just as well. This is known as the “frame problem”. According to Floridi (2014), AI does not have a descriptive or prescriptive approach to the world; rather, it assesses the logical and mathematical constraints that make it possible to construct artifacts and interact with them effectively.

The United Nations, in turn, in its publication “Innovative Big Data approaches for capturing and analyzing data to monitor and achieve the SDGs” (2017), acknowledged that defining AI is not a straightforward task. The initial problem is to define what “intelligence” means, or which intelligence is executed by humans and non-humans. Despite various attempts, none of the disciplines involved, such as psychology or education science, has come up with a satisfying, mutually agreed-upon definition of intelligence. Legg and Hutter (2018) provided an overview of the many definitions proposed over the years: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”. In terms of AI, the “agent” in this definition could be a human being (regular intelligence) or a system (AI). As such, a machine that exhibits intelligence equal to human intelligence can be referred to as having general AI.

Connection between AI and Big Data

AI happened in the wake of the Zettabyte Era, meaning that intelligent performance from machines became a requirement as a consequence of Big Data. Generations since 2014 have experienced the Zetta flood, a byte tsunami taking over the environment in which we live. AI has thus become the natural development of intelligent systems that need to deal with Big Data, which is why the two terms are structurally connected. Despite the importance of the phenomenon, the definition of Big Data is still unclear. The term was first introduced in 1989 by Erik Larson in a piece published by The Washington Post on how to deal with junk mail. However, theorists attribute the concept of Big Data as we use it today to John R. Mashey’s “Big Data and the next wave of infrastress”, published in 1998, where he acknowledges a field that requires high capacity for running analytic models to cope with vast amounts of data. Many authors have contributed to the development of the term, and in 2012 one of the first legal regulations of public and sensitive data was proposed.

The current General Data Protection Regulation (GDPR - European Union Commission, 2020) contains legal requirements for the use of personal data for historical, statistical and scientific research purposes, with effects that reach organizations worldwide. Floridi (2014), who contributed to the launch of the GDPR, warns of two common mistakes when talking about Big Data: one is misunderstanding “Big” as referring to physical size, and the other is treating the “Data” as “Big” only relative to computational power. These two mistakes, in turn, have two different sources: the epistemological problem of thinking that there is too much data; and the idea that the solution to this problem is technological (as if technology could synthesize or reduce the amount of data). The confusion lies in the fact that an epistemological problem requires an epistemological solution, rather than a technical one.

The problems in Big Data interpretation and basic features

Since the problem is not the increasing amount of data, the solutions need to be rethought. It is not a matter of processing capacity (since this activity happens on demand), but an epistemological question of small patterns analyzing Big Data. Small patterns represent a new frontier of innovation and competition, from science to business, from governance to social policies, from security to protection. The reason patterns should be small is to improve processing speed: since there is a large quantity of data, small patterns group it to speed up its synthesis. A potential ethical issue concerning the use of small patterns stems from their ability to predict future events: because they can foresee choices and behaviors, they come up against ethical principles of information.

Another feature of data is related to volume, and the Cloud Security Alliance (2014), in its report “Big Data Taxonomy”, introduces three limits to the growing usage and storage of data: thermodynamics, intelligence and memory. This provides a concerning counterweight to the argument that AI is a solution to Big Data, since intelligence is itself one of Big Data’s limitations. Big Data relies on data acquisition and storage, and humanity has not produced enough storage for the data we are producing, which is therefore a limitation in terms of memory.

Søe (2018) warns, however, that the main problem in this particular field is epistemological. The perspective of the amount of data being a problem is misleading, since the main question is how late people became aware of Big Data’s existence. This analysis goes in the opposite direction to the idea of data as sets so large and complex that they become difficult to process using available database management tools and traditional data-processing applications, which brings us to another misleading conception mentioned above: weak computational power. But why are small patterns such a big issue? Floridi, in a 2018 lecture at the Oxford Internet Institute, answered that question with a connect-the-dots illustration: the more data points you have, the better the pattern must be, and unless you connect all the dots you will not see what the figure is really about. The question of Big Data is that, among zettabytes of information, a pattern is required in order to conduct an analysis. Floridi compared it to finding a needle in a haystack. The integration between Big Data and AI lies in the fact that data groups must create their own intelligence to identify the needle.

Yet, not every piece of data is important. Mantelero (2018) points out that perhaps half our data is insignificant, while the other half is valuable. The role of small patterns is to know which half is required. Once the valuable assets are mapped and the needles found, an aggregation feature can be considered. This means that combining important data may drive a system to understand its customers and even to predict their choices. Thus, small patterns, as a methodological procedure, are significant when they correlate relevant data, including the absence of data itself.
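
To make the notion of small patterns more concrete, the minimal sketch below, which assumes synthetic data and NumPy rather than any dataset from the studies cited, ranks a large set of variables by their correlation with an outcome and keeps only the small subset that carries real information: the needles in the haystack.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic "haystack": 10,000 learners described by 500 variables,
    # only three of which actually relate to the outcome of interest.
    n_learners, n_variables = 10_000, 500
    data = rng.normal(size=(n_learners, n_variables))
    outcome = (0.6 * data[:, 10] - 0.4 * data[:, 123]
               + 0.5 * data[:, 321]
               + rng.normal(scale=1.0, size=n_learners))

    # The "small pattern": rank every variable by its absolute correlation
    # with the outcome and keep only the handful that carry information.
    correlations = np.array([np.corrcoef(data[:, j], outcome)[0, 1]
                             for j in range(n_variables)])
    needles = np.argsort(-np.abs(correlations))[:5]

    print("Most informative variables (the needles):", needles)
    print("Their correlations:", np.round(correlations[needles], 2))

In a real setting the pattern would be learned by a model rather than by simple correlation; the point is only that a compact description of which variables matter is what keeps zettabyte-scale data analytically tractable.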

Collective intelligence and its ability to move forward with AI algorithms depend on a colossal historical database capable of generating insights into predictive behaviors and educational outcomes. Yet, one of the major challenges for the advancement of AI is the systematization and organization of useful data.

Material and methods: The age of platforms

Discussions arising from the application of AI and Big Data to the development of digital skills are partly due to their impact on work environments, notably through automation, and the resulting need for 21st century skills. Such competencies concentrate on socio-emotional and anthropocentric aspects, which are considered complex constructs that are difficult to automate. Cukurova et al. (2019) define 21st century skills as concepts composed of a group of other skills or knowledge; for this reason, these competencies are highly complex to parameterize.

In this research, 21st century competencies in interface with AI will be explored through two aspects: elicitation and knowledge representation (Barrett & Edwards, 1995). Regarding the first aspect, Pearson conducted the study “The Future of Skills: Employment in 2030” (Bakhshi, 2017), which gathered expertise and statistical data from the O*NET database of the United States Department of Labor to design 120 competencies of the future composed of three sub-categories: competencies, knowledge and skills. Given its sophisticated quantitative character, AI enhances the operation, mapping and analysis of these new aspects within the interface of technology and education.

According to Scoular et al. (2017), AI makes it possible to analyze the multiple facets of teaching and learning. From the point of view of knowledge representation through modeling, the construction of platforms gains depth in the interface between AI and education. Luckin et al. (2016) argue that AI enables the automation of parts of the educational process through three models: the pedagogical model, the epistemological model and the student’s context, which the authors conclude are fundamental for the creation of adaptive tutors.

When talking about future competencies, “deep scientific” and “deep artistic” are two profiles marked by repetitive routines that do not involve unprecedented and complex decisions; these form part of the group of occupations that will be redesigned. Some skills will likely be extinguished altogether, while others will undergo adaptations. The gains of technological implementation should be invested in people, and countries such as the US already deal with populations without an occupation but with an income, participating less in the economy.

As a response to this issue, online platforms such as learning management systems, mobile language-learning applications and adaptive tutors, among others, are prevalent nowadays. They are capable of providing personalized benefits, and they put pressure on public services (Van-Dijck et al., 2018). The same authors affirm that platforms are neither neutral nor value-free constructs; they come with specific norms and values inscribed in their architectures. These values do not always reflect the cultural values of the places where those platforms operate, such as privacy, accuracy, safety and consumer protection. Yet other values, such as fairness, equality, solidarity, accountability, transparency and democratic control, are also relevant in public discussions. After all, platforms are not always mere reflections of the social; they can create it too. In platform-based societies, social and economic traffic is increasingly channeled by a global online platform ecosystem that is driven by algorithms and fueled by data. Further evidence is how the number of mobile devices (8.3 billion) (ITU, 2020) has outstripped the global population (7.8 billion) (UN, 2020). However, according to the ITU study, approximately 87% of the population in developed countries has internet access, while just 47% in developing countries share the same privilege (ITU, 2020). The explosion of mobile applications (colloquially, “apps”), together with increases in global internet access and in the mobile devices used to access these platforms, characterizes the questionable concept of technological ubiquity we inhabit.

21st century competencies: Mixed methods to apply and evaluate them

The platform society has the potential to demand new competencies from citizens, which brings the discussion to 21st century competencies. It is necessary to clarify concepts such as competence, capacity, capability, ability and skill, considering that these words are generally used interchangeably. Some definitions from the Oxford English Dictionary can help to address this task:

  • Ability: The fact that somebody/something is able to do something.

  • Capability: The ability or qualities necessary to do something.

  • Competence: The ability to do something well.

  • Knowledge: The information, understanding and skills that you gain through education or experience.

  • Literacy: The ability to creatively and culturally read and write on any surface.

According to this dictionary, ability, capacity and capability are synonyms, as are skill and competence.

The O*NET database defines the terms as follows:

  • Ability: enduring attributes of the individual that influence performance (cognitive, physical, psychomotor and sensory abilities).

  • Skills: developed capacities that facilitate learning or the more rapid acquisition of knowledge (complex problem solving, resource management, social, system and technical skills).

According to Zabala and Arnau (2015), competence is defined as the capacity or ability (which means having the cognitive structure) to perform tasks and cope with diverse situations, such as those of political, social and cultural life, in an effective and conscious way adapted to a certain context. Doing so requires mobilizing attitudes, skills and knowledge, orchestrating and interrelating them. Similarly, the OECD (Rychen & Salganik, 2003) defined competence as the ability to successfully meet complex demands in a particular context through the mobilization of knowledge, skills, attitudes and values. In this case, values were added as a new element to the construct. In this article, we will use the umbrella concept of competence from the OECD, in which skills are considered a subset of competence. Ability, capability and capacity will be considered synonyms, exactly as defined by the OECD.

The emergence of frameworks proposed for 21st century education structures a globalized and technologically driven mindset in the information age. For instance, Fadel and Groff (2019) proposed the four-dimensional education model, in which knowledge, skills, character and meta-learning are dimensions that need to be explored to successfully redesign a curriculum. The traditional mindset of thinking about and designing a curriculum is centered on knowledge transfer. Wilson (1999) points out that humankind is drowning in information while starving for wisdom, and that the world henceforth will be run by synthesizers: people who are able to put together the right information at the right time, think critically about it, and make important choices wisely. Next, we will present three methodological cohorts that can help explain the interface between Big Data, AI and Education: research, application and assessment.

Research, application and assessment

Experiments in education necessarily entail the systematic study of particular forms of learning. This context undergoes research, testing and revision. In this research, we offer netnography as a research method, followed by three examples of application, namely Competency Based Education (CBE), the Four-dimensional Model and the OECD Compass Model, and finally multimodality as an assessment approach.

Research: Netnography

a) Netnography

In netnography, methodological strategies for understanding the communicational behavior of stakeholders can be set out in two ways: first, inspired by the classics of traditional ethnography such as Bronislaw Malinowski (1922) and Mead (1979); and second, considering the innovative and still recent netnographic thinking based on Kozinets’ (2014; 2015) research principles. To describe the presence of emerging 21st century competencies in the participating population, Kozinets (2014) sought to conduct field research on the ever-decreasing contrast between what can be called street experiences and school experiences (traditional and institutionalized).

Given the diversity of approaches and theoretical affiliations, which dialogue between perspectives linked either to marketing or to anthropology, netnography emerges as a necessary concept. According to Kozinets (2014), the scientific body of 21st century communication has indeed given way to a neologism, which should not be treated as an obstacle to the use of the term, nor does it signal the end or the total reinvention of traditional ethnography. The author argues that neologisms are part of the cyclic evolution of science and concepts, acting as instruments of discourse that serve to explain observed realities and that themselves undergo transformations. Reassuring the reader about the various nomenclatures that the method has assumed, Kozinets (2014) refers to seminal studies from the turn of the century that began the work of mapping and describing communicational behaviors in cyberspace.

Hine (1994) proposed an alignment of different terminologies such as netnography, virtual ethnography, webnography, cyberanthropology and digital ethnography. Despite their occasional indiscriminate juxtaposition, researchers must pay careful attention to maintaining the original conception behind the netnographic method. Assuming that the term virtual ethnography was appropriate for the initial phase of the internet, Hine (1994) questioned whether to apply the concept of ethnography exclusively, given the recurring claims that the dichotomy between online and offline experiences has been overcome.

In terms of procedure, netnography drives the researcher to understand the kinds of stakeholders who are engaged in networks and platforms and how they behave in relation to knowledge production dynamics. AI can be applied on two levels: to disseminate the educational experience and to personalize the learning experience. AI collects data cohorts that, once synthesized, may mirror the user’s preferences, strengths and weaknesses.
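
As a purely illustrative sketch of the second level (personalization), the fragment below groups platform users into behavioral cohorts whose centroids can be read as rough preference profiles. The clickstream features, the synthetic data and the use of scikit-learn are assumptions made for illustration; they are not drawn from Kozinets’ method.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)

    # Hypothetical per-user features logged by a learning platform:
    # [videos watched, forum posts, quiz attempts, average session minutes]
    users = rng.poisson(lam=[12, 3, 8, 35], size=(300, 4)).astype(float)

    # Standardize so no single feature dominates the distance metric.
    scaled = StandardScaler().fit_transform(users)

    # Group users into three behavioral cohorts; the cluster centers act as
    # rough "preference profiles" that an adaptive tutor could respond to.
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

    for label, center in enumerate(model.cluster_centers_):
        print(f"Cohort {label}: standardized profile {np.round(center, 2)}")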

Application: Competency-based education model, four-dimensional model, and compass model

b) Competency based education model

Based on a systematic review, Gervais (2016) defines competency based education (CBE) as an outcome-based approach to education that incorporates modes of instructional delivery and assessment designed to evaluate students’ mastery of learning through their demonstration of the knowledge, attitudes, values, skills and behaviors required for the degree sought.

The history of CBE dates back to 1862 and the Morrill Land-Grant Acts in the United States, which “provided the basis for an applied education oriented to the needs of farm and towns people who could not attend the more exclusive and prestigious universities and colleges of the eastern United States” (Clark, 1976). According to Clark, before the industrial revolution, higher education degrees were reserved for the privileged classes, preparing students to be thinkers, not doers. The CBE foundation advocated that education needed to focus on preparing students for their role in society (Riesman, 1979).

c) Four-dimensional model

In Fadel and Groff's (2019) understanding, in many curricula the knowledge dimension has a central focus characterized by a lack of real-world relevance, resulting in low engagement and low student motivation. It is clearly still important to learn mathematics and language, but they insist this must be integrated within larger individual competencies in an interdisciplinary way, emphasizing topics such as robotics, biosystems, social systems, wellness, entrepreneurship and media. In this model, the skill dimension is mainly seen as the compendium of higher-order skills that Fadel and Groff term the “Four C's”: communication, collaboration, critical thinking and creativity.

Considering the character dimension, it is noteworthy that there are ethical implications to most of the global challenges we face today, such as climate change, corruption, terrorism and income inequality. The six main elements of the character dimension in this model are mindfulness, curiosity, courage, resilience, ethics and leadership. The last dimension is meta-learning, understood as “learning how to learn”: specifically, how to reflect on and adapt our learning, composed of growth mindset and metacognition. Meta-learning, when effectively implemented, enables knowledge, skills and character competencies to become transferable across multiple disciplines, which is the ultimate goal of all education.

d) OECD Compass model

The OECD Compass model (OECD, 2020) uses the metaphor of a learning compass composed of seven elements, including core foundations, transformative competencies, student agency and co-agency, and an anticipation-action-reflection cycle. Core foundations are treated as a new way of including the curriculum in an educational model by relating it to knowledge, skills, attitudes and values. This new curriculum also includes subjects such as digital literacy, physical and mental health, and social and emotional skills. Transformative competencies involve creating new values, reconciling tensions and dilemmas, and taking responsibility. Finally, the anticipation-action-reflection cycle, according to this model, is an iterative learning process whereby learners continuously improve their thinking and act intentionally and responsibly in the interest of collective well-being.

Assessment: Multimodality

e) Multimodality

According to Nigay and Coutaz (1993), a multimodal system is defined as one that supports communication with the user through different modalities or “modes”, such as video, voice, text and gestures. “Multi” means more than one, and “modalities” or “modes” refer to the communication channels. This possibility is especially important for educational platforms, where there is a lack of understanding of the underlying processes and the majority of theories are imported from the social sciences and psychology. Educational platforms constantly process multimodal inputs and outputs, for example: text (self-reports), voice (think-aloud protocols), video and biological measurements (such as eye tracking and facial expressions) to understand affective states; and clickstream or trace data to track user behavior and navigation. Qualitative and quantitative methods of analysis can be applied to narrow multimodal data down into information that supports effective decision making. Multimodality is the foundation of the new discipline of learning analytics, providing educators and other education stakeholders with analyses and indicators that help them steer educational processes and their outcomes.
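
As a minimal sketch of how such channels could be narrowed into a single indicator, the fragment below fuses a few multimodal observations into one engagement score. The modality names, the equal weights and the use of pandas are illustrative assumptions, not part of Nigay and Coutaz’s definition or of any particular platform.

    import pandas as pd

    # Hypothetical multimodal observations for three students: a self-report
    # score (text), a think-aloud sentiment estimate (voice), a gaze-on-task
    # ratio (video / eye tracking) and a normalized click rate (trace data),
    # all already scaled to the 0-1 range.
    observations = pd.DataFrame(
        {"self_report": [0.8, 0.4, 0.6],
         "voice_sentiment": [0.7, 0.3, 0.5],
         "gaze_on_task": [0.9, 0.2, 0.7],
         "click_rate": [0.6, 0.5, 0.4]},
        index=["student_a", "student_b", "student_c"],
    )

    # Fuse the modalities into a single engagement indicator. Equal weights
    # are a placeholder; in practice they would be theory- or data-driven.
    weights = {"self_report": 0.25, "voice_sentiment": 0.25,
               "gaze_on_task": 0.25, "click_rate": 0.25}
    engagement = sum(observations[m] * w for m, w in weights.items())

    print(engagement.sort_values(ascending=False))

The weighting scheme is the methodological choice that qualitative and quantitative analysis must justify; the fusion itself is trivial once the modalities are on a comparable scale.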

Discussion

First, it is important to emphasize the difference between multimedia and multimodality. Both kinds of system use multiple communication channels, but a multimodal system is able to automatically model informational content at a high level of abstraction, striving for meaning. In education, multimodal EdTech (educational technology) is an emergent area of study. With the so-called Internet of Things, wearable sensors, cloud data storage and increased computational power for processing and analyzing Big Data sets, sensors can be used to gather high-frequency, fine-grained measurements of micro-level behavioral events. Such micro-level events may include movement, speech, body language or physiological responses, providing a wealth of data that can mitigate the streetlight effect.

A complex educational setting requires sophisticated levels of measurement, and for that, mixed methods of application and assessment are suitable for dealing with this prospect. Complexity here is understood in the sense of Edgar Morin (2015) and his formulation of complexity as a new way of thinking about relations in an environment that is no longer systemic and unifying. He introduces three principles (dialogic, organizational and hologrammatic) for thinking about a network as a complex structure rather than a systemic overview. The network becomes informational architecture, while still playing its material role (in reference to Habermas’ work on communication materiality).

This subtle materiality provides methods from several fields of knowledge, which can be seen as a benefit of working in a transdisciplinary area. However, this scenario requires researchers and stakeholders to pay more attention to what technologies are, how they are defined and how they can be applied, while avoiding superficial approaches in education. When combined, the three axes (research, application and assessment) allow education to thrive in a digital endeavor in terms of time and quality. However, contextualization of research is recommended, since the experience of AI or Big Data communities can be diverse. This is why netnography is considered the first group of procedures: it empowers educators to understand their target audiences’ behaviors regarding digital technology. Application can vary across models, and the model chosen should be relevant to the findings of the netnographic study. Multimodality concludes this process by providing a tranche of aspects that can be treated as evidence to assess the impact of education through platforms.

Final considerations

Applications of technology in education can have a close relationship to pedagogical models, for example when technology serves as a means of information storage and exchange. The discussion surrounding digital culture and contemporary technologies in education admits different perspectives. Here, three dimensions have been explored: the challenges in the research, application and evaluation of platform proposals. To illustrate these perceptions and to support reflection on this ongoing digital transformation, studies introduce techniques such as learning analytics (data mining in educational environments) and AI applications as examples of new methods incorporated into the education field.

Since Alan Turing's Imitation Game and the early stages of AI, techniques in data management, software processing, indexing and analysis have evolved at different rates. In public security, for example, facial recognition technology and surveillance are used under Big Data regulations (e.g., the GDPR). In education, on the other hand, ethical arguments raise issues about the significant contributions made by technological developments. Whether in public safety, education or elsewhere, the digital shift opens paradoxical issues insofar as these dilemmas still require reasoning, investment and professional qualification to become culturally present. Despite the volume of references required, this research uses both multimodality and netnography to approach the impacts of both AI and Big Data on education. The methodological cohort combines variations in theory and in practice.

Designing a research experiment or a platform is a praxis for managing data shortcuts and data representativeness. When analyzing incomplete datasets or complex constructs (such as competencies), especially those with missing data (e.g., due to hardware failures), the information overlap across multiple modalities (data triangulation) is convenient, as it allows their overall meaning to be preserved (Bosch et al., 2015). Cukurova et al. (2019) made this evident when, in their study using AI to analyze human decision making, they compared unimodal and multimodal approaches for observing students’ sensitive data, such as eye-movement tracks and body postures.

In sum, data triangulation, in the context of a multimodal approach, has the ability to complement analysis in two different ways: when the data are too incomplete to analyze a construct, and when partial data force one to make inferences from other data parcels. For example, it is possible to analyze students’ anxiety with educational technology by using very different sources of data, from measurements of neuronal activity and heart rate to questionnaires. The data triangulation process offers a reliable and secure method that makes it possible to draw evidence-based conclusions.
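
To make this idea concrete, the sketch below triangulates an anxiety estimate from three hypothetical channels (questionnaire score, heart rate and skin conductance), standardizing each available modality and averaging whichever channels survived sensor failures, so that a missing channel does not force a student’s record to be discarded. The measurements and the use of pandas are illustrative assumptions rather than data from the studies cited.

    import numpy as np
    import pandas as pd

    # Hypothetical anxiety-related measurements; NaN marks a failed sensor.
    data = pd.DataFrame(
        {"questionnaire": [32, 18, 25, np.nan],
         "heart_rate":    [95, 70, np.nan, 88],
         "skin_conduct":  [7.2, 3.1, 5.0, 6.4]},
        index=["s1", "s2", "s3", "s4"],
    )

    # Put each modality on a comparable scale (z-scores per channel), then
    # triangulate by averaging whichever channels are present per student,
    # so a missing modality does not discard the whole record.
    z_scores = (data - data.mean()) / data.std()
    anxiety_estimate = z_scores.mean(axis=1, skipna=True)

    print(anxiety_estimate.round(2))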

This paper has attempted to offer both: a deep understanding of technology in its correlation with education and an exploration of methods used to research, apply and assess those features. There is an apparent turning point in the theoretical background of education, which no longer considers only human factors (educators and students) but introduces a cartography of new stakeholders who embrace transdisciplinary concepts and technical entities. The challenge of pursuing research within this field comes from aligning academic elaboration with a pragmatic context: empowering citizens to understand the implications of what appears to be a new possibility for the philosophy of knowledge. Perhaps the age of platforms will bring a new paradigm, or perhaps not; we shall see.