Childhood education, robotics, computational thinking, educational innovations, skills development, creative thinking, active learning, quantitative analysis
The current digital situation calls for the development of strategies to modernize learning processes, including initiatives for the acquisition of digital skills to enable all citizens to function in a digitalised society. In this context, there is a growing trend promoting the development of programming skills from early school age to ensure that people acquire an active and creative role in the use of technologies, through the mastery of new cognitive skills and practices such as code-literacy.
Currently, robotics is being incorporated as a highly valuable educational resource for the development of technical and social skills. Educational Robotics (ER) finds its main support in constructivist and constructionist learning theories (Bravo & Forero, 2012; Schwabe, 2013). According to Papert, knowledge is achieved to the extent that the individual interacts with the object of study (Bers, Flannery, Kazakoff, & Sullivan, 2014); in this sense, ER allows individuals to achieve this level of interaction. Through learning activities based on the design and construction of prototypes, students develop meaningful knowledge, moving from the abstract to the tangible (Pittí, Curto-Diego, & Moreno-Rodilla, 2010). Educational robotics can be integrated into the teaching-learning process through various practical approaches: one is its adoption as the main object of learning (Goodgame, 2018; Karampinis, 2018); a second approach is its use as a means of learning (Koning, Faber, & Wierdsma, 2017; Kucuk & Sisman, 2017); the third is its use to support learning developments (Moro, Agatolio, & Menegatti, 2018). In the first two approaches, the orientation is aimed at the construction and programming of robots, using gear parts, sensors, actuators, and coding instructions according to the syntax of a programming language. The main current educational initiatives with robotics fall within these two approaches, developing training activities through courses and workshops (Buss & Gamboa, 2017; Ozcinar, Wong, & Ozturk, 2017); an example is the First Lego League, an international challenge that promotes interest in science and technology.
In the third approach, robots are used within the classroom, as a didactic resource (Bruni & Nisdeo, 2017; Serholt, 2018). In this way, learning can be facilitated by inquiry, where the occurrence of errors is taken as a learning opportunity. We find some initiatives developed in countries such as the United Kingdom, one of the pioneering nations in the development of programming skills and computational thinking from a formal curricular perspective, which has incorporated the subject “Computing” into its school curriculum.
New theories on code-literacy (Zapata-Ros, 2015), understood as the ability of individuals to communicate with devices through instructions in computer languages, have triggered great interest in computational thinking processes (Liu, Perera, & Klein, 2017). Jeannette Wing used this term for the first time in 2006, arguing that this type of thinking “involves solving problems, designing systems and understanding human behavior, based on the fundamental concepts of informatics” (Wing, 2006: 33). Subsequently, it came to be considered a basic competence that every citizen would have to acquire to function in the digital society. In addition, she argued that computational thinking is neither routine nor mechanical, but a way of solving problems intelligently and imaginatively (Wing, 2008).
In 2009, the National Science Foundation funded the project “Leveraging Thought Leadership for Computational Thinking in PK-12”. This was a joint program between the Computer Science Teachers Association and the International Society for Technology in Education. The purpose of this initiative was to make computational thinking concepts accessible to educators by providing an operational definition, a shared vocabulary, and meaningful examples appropriate for the students’ age. The project linked educational objectives with classroom practices (Barr, Harrison, & Conery, 2011).
In Europe, we find similar projects; one of them is the Erasmus+ KA2 project “TACCLE3 – Coding”. The contents presented on the project’s website (http://taccle3.eu/) are examples of successful educational practices and experiences in the process of incorporating and promoting these skills (García-Peñalvo et al., 2016). A significant contribution to the conceptual framework on computational thinking has been made by the researchers Karen Brennan (Harvard University) and Mitch Resnick (MIT), who formulated an alternative model of this style of thinking. The model was proposed within the research project that resulted in the creation of Scratch, a visual “block-based” programming platform that allows children and young people to create their own interactive stories with animations and simulations in a playful environment. The model of computational thinking formulated by Brennan and Resnick (2012) is based on three dimensions: computational concepts, practices, and perspectives.
From our point of view, computational thinking can be defined as the ability and capacity to solve problems using programming and the fundamentals of computer science. In recent years, an increasingly widespread approach has been developed, aimed at improving children’s technological literacy and making computational thinking a relevant competency in school environments (Caballero & García-Valcárcel, 2017; Liu, Perera, & Klein, 2017). Some research provides evidence of the positive changes that occur in students immersed in training courses on programming skills and computational thinking using programmable robots (Chen, Shen, Barth-Cohen, Jiang, Huang, & Eltoukhy, 2017; Durak & Saritepeci, 2018). In the Spanish context, programs increasingly target children in the early stages of education, working on mathematical content such as algebra with child-adapted robotic devices to successfully develop computational thinking skills (Alsina & Acosta, 2018).
The integration of robotics during the first school stages takes advantage of the fact that, in this period, new ideas are created mainly on the basis of experiences and previously learned concepts, with a strong influence of the family environment (Seppänen, Schaupp, & Wahlström, 2018; Wong, Jiang, & Kong, 2018). Learning, therefore, occurs when children, using information captured by their senses, share ideas, test their limits and receive feedback. In these actions, imagination and creativity play an important role in the production of new knowledge (Buitrago, Casallas, Hernández, Reyes, Restrepo, & Danies, 2017). In addition, the development of programming and computational thinking skills through robots capitalizes on the playful characteristics of the resource and the context, which has a positive impact, according to Froebel’s approach to games (Resnick & Rosenbaum, 2013).
Considering the theoretical framework outlined above, and given that there are still few empirical studies demonstrating the impact of educational robotics on the development of computational thinking in young children, this study aims to test the influence of a training program based on learning activities with educational robotics on the acquisition of computational thinking skills in early childhood education.
At present, there are several educational robotics resources that allow the introduction of programming at early ages. The Bee-Bot® robotics kit was used for this research. This is a bee-shaped floor robot whose structure combines sturdiness and simplicity. Other factors in its favor are its dimensions, which allow for easy handling. In addition, its colors, sounds, and movements make it a suitable resource for use with young children between the ages of 3 and 7.
In addition, its manufacturer, the English company TTS, has a recognized track record in the design and construction of educational resources, so this robot represents a mature educational technology with a high level of reliability and proven quality. The robot has buttons to program the sequence of movements it must perform: move forward, move backward, turn left or right, start moving, pause the movements, and delete the previous commands. The robot moves in 15 cm steps, makes 90º turns and stores up to 40 instructions in its memory. For the study, a series of rugs or mats designed specifically for this research, according to the objectives of the training activities, was used. In addition, a story was created for each rug that featured the Bee-Bot® robot itself among its characters. This story was presented to the children before they were shown the challenge they had to solve. The purpose of the story was to present the challenges in a playful and motivating context adapted to the children’s age.
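To make the robot’s command model concrete, the following sketch simulates the behavior described above (15 cm steps, 90º turns, a 40-instruction memory, and forward/backward/turn/clear commands). It is an illustrative Python simulation only; the class and method names are our own and do not correspond to any official Bee-Bot® software.

# Illustrative sketch (not the robot's firmware): a minimal simulation of the
# Bee-Bot command set described above, assuming 15 cm steps and 90-degree turns.

class BeeBotSimulator:
    """Simulates a Bee-Bot on a grid of 15 cm cells, starting at (0, 0) facing 'north'."""

    STEP_CM = 15          # each forward/backward command moves one 15 cm cell
    MEMORY_LIMIT = 40     # the robot stores up to 40 instructions

    HEADINGS = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}
    ORDER = ["north", "east", "south", "west"]

    def __init__(self):
        self.program = []          # stored instructions, as on the real robot
        self.x, self.y = 0, 0
        self.heading = "north"

    def press(self, button):
        """Store a command: 'forward', 'backward', 'left' or 'right'."""
        if len(self.program) >= self.MEMORY_LIMIT:
            raise MemoryError("Bee-Bot memory holds at most 40 instructions")
        self.program.append(button)

    def clear(self):
        """Delete the previously stored commands."""
        self.program = []

    def go(self):
        """Execute the stored sequence and return the final cell and heading."""
        for cmd in self.program:
            if cmd in ("forward", "backward"):
                dx, dy = self.HEADINGS[self.heading]
                sign = 1 if cmd == "forward" else -1
                self.x += sign * dx
                self.y += sign * dy
            elif cmd in ("left", "right"):
                i = self.ORDER.index(self.heading)
                step = 1 if cmd == "right" else -1
                self.heading = self.ORDER[(i + step) % 4]
        return (self.x, self.y), self.heading


# Example: program the sequence "forward, forward, right, forward"
bot = BeeBotSimulator()
for b in ["forward", "forward", "right", "forward"]:
    bot.press(b)
print(bot.go())   # ((1, 2), 'east')

A sequence challenge on one of the mats corresponds, in this simplified model, to finding a list of commands whose execution ends on the target cell.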
The research questions used for this study are the following:
1) Is it possible to develop the computational thinking of children in the early childhood education stage (3-6 years) through robotic activities in the classroom?
2) Can children improve their ability to sequence actions by responding to a challenge through programming activities using educational robots?
3) Can children improve their ability to relate the instructions they give to a robot to the action it performs?
4) Can children improve their ability to identify and correct existing errors in a programming sequence?
With these questions in mind, the objective of this research focuses on assessing the students’ performance when facing these computational challenges, both initially and once they have completed a training program with robotic activities, thus, assessing the effectiveness of the program in terms of the skills developed by participants.
As a starting hypothesis, it was established that the integration of a program of learning activities with educational robotics would significantly contribute to the acquisition of computational thinking skills in Early Childhood Education schoolchildren.
The study was developed using a quasi-experimental design (Campbell & Stanley, 1993; Hernández & Maquilón, 2010), with pre-test and post-test measurements in two groups (experimental and control), as shown in the diagram in Figure 1. The students were divided into two groups: the experimental group (Eg), whose members would complete the training program, and the control group (Cg), comprised of subjects who would not participate in the robotics activities (Kandlhofer & Steinbauer, 2016). The allocation of students to groups could not be done randomly, since the intervention allowed by the school required working with intact groups formed according to criteria inherent to the school itself and independent of the study. Following the methodological criteria of this type of research design, measures were collected from all participants (experimental and control groups) before and after the intervention.
Two types of variables were defined in the research design: independent and dependent (Hernández-Sampieri et al., 2014: 238). The independent variable is the one manipulated to measure its effect on the dependent variable; thus, the educational robotics training program was the independent variable. The dependent variable was defined as the students' computational thinking and programming skills, considering three dimensions, which could be evaluated through the robotics kit:
1) Sequences: ability to sequence actions by responding to a challenge through programming activities.
2) Action-instruction correspondence: ability to relate the instructions given to a robot with the action it performs.
3) Debugging: ability to identify and correct existing errors in a programming sequence.
Brennan and Resnick (2012) described sequences as a series of steps that must be followed for a task to be completed successfully.
The action-instruction correspondence refers to the action the robot executes for each instruction it is given (Bers, Flannery, Kazakoff, & Sullivan, 2014). The practical dimension of debugging corresponds to carrying out a task by trial and error, learning from mistakes.
The sample was made up of 131 children from a subsidized center located in Salamanca, during the 2016-2017 academic year. All the participants were informed of the objectives of the study, and the informed consent of the minors’ parents/guardians was collected with the collaboration of the school. The age range of the participants was between 3 and 6 years old (70% between 4 and 5 years old). The distribution of participants in groups was 67 in the experimental group (51% of the entire sample) and 64 in the control group (49% of the entire sample), with a gender-balanced proportion observed. Girls represented 45% of the subjects in the experimental group and 48% in the control group.
The research was structured based on three stages: the first involved the initial measurement of the dependent variable (pre-test), the second developed the training program (intervention), and the third repeated the administration of the evaluation test (post-test).
The intervention consisted of the development of 7 working sessions with the children in the experimental group. The first was an introduction to the use of the devices, and in the following 6 sessions the children explored concepts and carried out practices on programming. The intervention sessions were designed using the TangibleK robotics curriculum –created by the DevTech research group at Tufts University in Boston, directed by teacher Umaschi Bers– as a reference (Bers, 2010).
Planning for the training session was done in agreement with the teachers, whose function was to introduce the researcher to the class group in a familiar setting, to supervise the activities developed in class and to evaluate the performance of the children together with the researcher. Each session took place during a school day, with an approximate duration of four hours per day, integrating robotics activities in the curriculum to enhance logical-mathematical skills. During the course of the activities, the students worked in small groups (4-5 members) collaboratively. The sessions were organized based on the planned objectives:
• In the introductory session, called ‘My first steps in robotics,' students had the opportunity to use the Bee-Bot® robot, exploring its characteristics and achieving a general understanding of the resource's functionalities.
• In sessions 1 and 2 they worked on the Sequence dimension. The children had to create sequences of instructions to have the robot move across the mat. First, simple forward movements were programmed. Left and right turns were then included.
• Sessions 3 and 4 focused on the Action-instruction correspondence dimension. Cards were used to enable the children to program the way they wanted the robot to move and then they were checked against the robots’ movements.
• Sessions 5 and 6 were focused on the debugging dimension. In these sessions, children were provided with simple sequences containing errors that they had to detect and correct to successfully complete the challenge.
The third phase began once the training sessions were over. At this point, a new measurement was developed through the application of Solve-it tests (programming challenges accompanied by ludic stories) that allow for the evaluation of the participants’ acquired learning in the experimental and control groups. The evaluation tests were carried out individually.
The evaluation instrument used to assess the level of performance achieved by the children is an adaptation of the “SSS” rubric used in the TangibleK program (Bers, 2010). The researcher and the teacher applied the rubric together and agreed on the evaluation results for each student.
Each dimension was evaluated through the resolution of two challenges posed to the children. Each challenge received a score of 0 to 5 points, depending on the subject’s autonomy in solving the challenge and the success achieved (performance). The criteria formulated in the rubric were applied as follows:
• 5 points: the child fully achieved the assigned challenge without any help from the researcher.
• 4 points: the child almost achieved the assigned challenge with minimal help from the researcher.
• 3 points: the development of the challenge was moderately satisfactory, with periodic (but not step-by-step) help from the researcher.
• 2 points: the child displayed a minimal response to the assigned challenge, receiving step-by-step help from the researcher.
• 1 point: the child began the challenge but did not complete it.
• 0 points: the participant did not attempt to solve the challenge.
For this study, a score of 4 was set as the level of achievement required to solve each challenge satisfactorily.
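For clarity, the rubric and the achievement threshold can be summarized in a short illustrative encoding; the labels paraphrase the criteria above and are not the literal wording of the “SSS” rubric.

# Illustrative encoding of the 0-5 scoring rubric described above.

RUBRIC = {
    5: "Challenge fully achieved without any help from the researcher",
    4: "Challenge almost achieved with minimal help",
    3: "Moderately satisfactory, with periodic (not step-by-step) help",
    2: "Minimal response, with step-by-step help",
    1: "Challenge started but not completed",
    0: "No attempt to solve the challenge",
}

ACHIEVEMENT_THRESHOLD = 4   # score set in the study as satisfactory attainment

def challenge_passed(score: int) -> bool:
    """True when a single challenge is solved at the target level of autonomy."""
    return score >= ACHIEVEMENT_THRESHOLD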
To verify the influence of educational robotics activities on the acquisition of computational thinking skills in school children, the results obtained in the pre-test and post-test were analyzed, distinguishing the dimensions: sequences, action-instruction correspondence and debugging.
First, the normality of the sample was determined using the Kolmogorov-Smirnov normality test. The use of this type of test is recommended when the study is performed on a sample of more than 30 individuals, as was the case here. This test is important because it makes it possible to determine whether parametric or nonparametric tests should be used in the statistical hypothesis contrast analyses. In the statistical analyses carried out, α<0.05 was established as the critical value.
The Kolmogorov-Smirnov test on the pre-test results in the experimental and control groups leads to the conclusion that these data do not follow a normal distribution. The asymptotic significance value calculated for each dimension of computational thinking and for the total is less than the confidence level established for the analyses. This leads to the use of nonparametric contrast tests such as the Mann-Whitney U and the Wilcoxon W.
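As an illustration of this test-selection step and of the nonparametric between-group comparison used below, the following sketch applies the Kolmogorov-Smirnov and Mann-Whitney U tests with SciPy. The data are simulated for the example only; in the study they would be the pre-test scores of the experimental and control groups.

# Minimal sketch of the test-selection step (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_eg = rng.integers(0, 31, size=67).astype(float)   # hypothetical Eg pre-test totals (0-30)
pre_cg = rng.integers(0, 31, size=64).astype(float)   # hypothetical Cg pre-test totals (0-30)

# Kolmogorov-Smirnov test against a normal distribution with the sample's own
# mean and standard deviation (a common practical shortcut; the study only
# reports that the scores depart from normality).
for label, scores in [("Eg", pre_eg), ("Cg", pre_cg)]:
    stat, p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
    decision = "parametric" if p >= 0.05 else "nonparametric"
    print(f"{label}: KS={stat:.3f}, p={p:.3f} -> {decision} tests")

# With non-normal data, the between-group comparison uses the Mann-Whitney U test.
u_stat, u_p = stats.mannwhitneyu(pre_eg, pre_cg, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p={u_p:.3f}")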
First, the pre-test results in the experimental (Eg) and control (Cg) groups were compared to verify their equivalence. The data obtained show that the groups were not equivalent, as significant differences can be observed between the two groups when comparing the means of all dimensions and the total score (complete test), with more positive results in the experimental group (see Table 1). The lack of equivalence between the experimental and control groups is an issue that could not be foreseen a priori, since the school formed the groups before the research began and modifications to the groups were not allowed. This situation was taken into account when selecting the most appropriate data analysis strategy: although it is not a desirable situation for establishing the comparison between the control and experimental groups, it is not an insurmountable barrier, as specific methods of analysis offer a solution to it (non-equivalent control group designs).
The data obtained in the post-test also show significant differences (p<.001) between the experimental group and the control group in all the variables studied (dimensions and complete test), as can be seen in Table 2. However, since these are not initially equivalent groups, these differences are not directly attributable to the intervention; thus, it is necessary to analyze the data further.
Following the guidelines of non-equivalent control group designs (Campbell & Stanley, 1993; Tejedor, 2000), to determine the effect of the independent variable on the dependent variable, the significance of the differences produced between the pre- and post-test scores in each of the dimensions of the dependent variable was analyzed for both the experimental and control groups (see Table 3 on the next page). To this end, new variables were defined: DiferenSequence, DiferenCorrespondence, DiferenDebugging, and DiferenTotal, obtained by calculating the difference between the pre- and post-test scores. It can be observed that the difference between the pre-test and the post-test in the experimental group is more than 2 points in all dimensions, reaching 8.16 points in the complete test (DiferenTotal). However, in the control group, the scores in the final test increased less, with differences below 1 point in all dimensions and 1.22 points in the complete test. If we observe the statistical significance of these differences, only one case is not significant: the debugging dimension in the control group. In this variable, there was no increase in the abilities of the children in the control group. For the rest of the variables, even in the control group, there were significant differences that can be explained by an instrumentation effect (the administration of the initial test or pre-test may have implied some learning) as well as by a maturation effect (the children matured during the months of the intervention, given that at these ages children learn new abilities constantly and very quickly).
Additionally, with the data generated from the differences between the post-test and the pre-test, other statistical analyses were carried out, such as the non-parametric Mann-Whitney U test for independent samples, to confirm whether the learning gains in the experimental group were significantly higher than those in the control group. Defining the gain as the increase in the post-test score with respect to the pre-test, the test results reflected an asymptotic significance of less than .01 (Table 4) for each of the variables. Therefore, the results obtained in the final tests show significant differences between the two groups (experimental and control); it can be argued that the children in the experimental group acquired greater abilities than those in the control group thanks to the intervention conducted, demonstrating greater (statistically significant) progress in post-test scores.
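A minimal sketch of this gain-score analysis is shown below, again with simulated data: the “Diferen” variables are computed as post-test minus pre-test, the within-group pre/post comparison uses the Wilcoxon signed-rank test, and the between-group comparison of gains uses the Mann-Whitney U test.

# Minimal sketch of the gain-score analysis (simulated data; real scores would
# come from the pre- and post-test rubric totals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_eg = rng.integers(5, 26, size=67).astype(float)
post_eg = np.clip(pre_eg + rng.normal(8, 3, size=67), 0, 30)   # Eg gains of roughly 8 points
pre_cg = rng.integers(5, 26, size=64).astype(float)
post_cg = np.clip(pre_cg + rng.normal(1, 3, size=64), 0, 30)   # Cg gains of roughly 1 point

diferen_total_eg = post_eg - pre_eg   # "DiferenTotal" for the experimental group
diferen_total_cg = post_cg - pre_cg   # "DiferenTotal" for the control group

# Within-group pre/post comparison: Wilcoxon signed-rank test (related samples).
print(stats.wilcoxon(pre_eg, post_eg))
print(stats.wilcoxon(pre_cg, post_cg))

# Between-group comparison of gains: Mann-Whitney U test (independent samples).
print(stats.mannwhitneyu(diferen_total_eg, diferen_total_cg, alternative="two-sided"))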
The effect size was estimated for the complete test by calculating Cohen’s d. This is extraordinarily high (1.84), far above the value of 0.80 conventionally regarded as very high. This value reaffirms the difference in the achievements made by the children according to the group to which they were assigned, being greater in the group that carried out the training.
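Since the exact variant of Cohen’s d is not specified in the text, the following sketch uses the standard pooled-standard-deviation formulation applied to the gain scores of the complete test.

# One standard formulation of Cohen's d, with the pooled standard deviation.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                         (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# e.g. cohens_d(diferen_total_eg, diferen_total_cg) would be applied to the
# gain scores of the two groups; the study reports a value of 1.84.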
Finally, we show the existing differences between the experimental and control groups through a graphical analysis using ROC curves (García-Valcárcel & Tejedor, 2017). We did this by taking as study variables the differences between the pre-test and post-test scores in each of the dimensions of the dependent variable, and in the total variable called computational thinking skills. As a classification or state variable, the group variable was considered, with two possible values: experimental group and control group. For the analysis, the members of the experimental group were considered positive cases and are represented in the graph.
Figure 2 (see next page) shows the pairs of values (1 − specificity, sensitivity) generated by the ROC curve plot for each of the study variables (DiferenSequence, DiferenCorrespondence, DiferenDebugging, and DiferenTotal). It can be observed that all the curves are above the reference value (the diagonal of the area). This is because the scores of the students in the experimental group are much higher than those of the students in the control group in all the analysis variables, as is also shown in the preceding tables.
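The ROC analysis can be reproduced along the following lines (simulated data; experimental-group membership is the positive state and the gain score is the classifier score), for example with scikit-learn.

# Minimal sketch of the ROC analysis with simulated gain scores.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(2)
gains_eg = rng.normal(8, 3, size=67)   # hypothetical DiferenTotal, experimental group
gains_cg = rng.normal(1, 3, size=64)   # hypothetical DiferenTotal, control group

y_true = np.concatenate([np.ones_like(gains_eg), np.zeros_like(gains_cg)])   # 1 = Eg (positive case)
y_score = np.concatenate([gains_eg, gains_cg])

fpr, tpr, _ = roc_curve(y_true, y_score)   # fpr = 1 - specificity, tpr = sensitivity
print(f"Area under the ROC curve: {auc(fpr, tpr):.3f}")   # well above 0.5 (the diagonal)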
The development of educational robotics activities oriented to the acquisition of computational thinking skills presents positive results, corroborating that the training program facilitates the development of thinking skills in the following dimensions: sequences, action-instruction correspondence and debugging. The significant differences found between the members of the experimental and control groups demonstrate the existence of greater learning in the experimental group for each of the variables analyzed. The children in the control group also showed better skills in the post-test, which can be attributed to the maturation effect, the learning attributed to the pre-test, and to the fact that during the period of time in which the intervention took place, progress continued in the curricular program, specifically in the area of logic and mathematics, which generated greater knowledge linked to the skills evaluated.
Children who participated in the program acquired new skills to design and develop programming sequences using tangible objects (robots). These new skills allowed them to experimentally check the consequences and accuracy of the designed instructions, as well as detect errors in the programming sequences. The methodology used also supports the acquisition of social skills, such as communication, collaborative work, creativity, autonomy, and leadership. This form of learning is related to active learning methodologies and constructionist learning theories that postulate that knowledge is achieved through the interaction of the subject with the object of study (Bers, Flannery, Kazakoff, & Sullivan, 2014).
This study shows that it is possible to develop these thinking skills from the early school stages, as program participants were between 3 and 6 years of age, and these children met the study’s expectations, supporting the initial hypothesis. The research also shows the impact of incorporating robotics in the development of meaningful learning in digital competencies related to programming. At the same time, it lays the foundations for the implementation of more complex technological learning scenarios at higher school levels.
The results achieved coincide with the conclusions of other research projects (Lee, Sullivan, & Bers, 2013; Elkin, Sullivan, & Bers, 2014) that show the positive effects of the introduction of robotic resources to promote the development of skills and interests linked to the STEM knowledge areas (Science, Technology, Engineering and Mathematics).
We consider it pertinent to highlight some limitations of the study. The first concerns the size of the sample, which could have been larger if more schools had been interested in participating in the study. A second limitation is the lack of equivalence between the groups, which could have been avoided with random assignment of participants to groups; this was not possible in this study due to the school’s organization. In this regard, the constraints faced by researchers and the conditions established by educational centers for this type of study must be considered. We consider that the results presented can be interpreted as an approximation to the subject, although more studies are required to consolidate the conclusions.
National Secretariat of Science, Technology and Innovation (SENACYT) and Institute for Training and Use of Human Resources (IFARHU) of the Republic of Panama.
Alsina, A., & Acosta, Y. (2018). Iniciación al álgebra en Educación Infantil a través del pensamiento computacional: Una experiencia sobre patrones con robots educativos programables. Revista Iberoamericana de Educación Matemática, 52, 218-235. https://bit.ly/2PC1hLt
Barr, D., Harrison, J., & Conery, L. (2011). Computational Thinking: A digital age skill for everyone. Learning and Leading with Technology, 38(6), 20-23.
Berrocoso, J., Sánchez, M., & Arroyo, M. (2015). El pensamiento computacional y las nuevas ecologías del aprendizaje. Red, 46, 1-18. https://doi.org/10.6018/red/46/3
Bers, M.U. (2010). The TangibleK Robotics program: Applied computational thinking for young children. Early Childhood Research & Practice, 12(2). https://bit.ly/2RZ3B11
Bers, M.U., Flannery, L., Kazakoff, E.R., & Sullivan, A. (2014). Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Computers & Education, 72, 145-157. https://doi.org/10.1016/j.compedu.2013.10.020
Bravo, F.A., & Forero, A. (2012). La robótica como un recurso para facilitar el aprendizaje y desarrollo de competencias generales. Teoría de la Educación. 13(2), 120-136. https://bit.ly/2EtOVnJ
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 Annual Meeting of the American Educational Research Association (AERA) (pp. 1-25), Vancouver, Canada.
Bruni, F., & Nisdeo, M. (2017). Educational robots and children’s imagery: A preliminary investigation in the first year of primary school. Research on Education and Media, 9(1), 37-44. https://doi.org/10.1515/rem-2017-0007
Buitrago, F., Casallas, R., Hernández, M., Reyes, A., Restrepo, S., & Danies, G. (2017). Changing a generation’s way of thinking: Teaching computational thinking through programming. Review of Educational Research, 87(4), 834-860. https://doi.org/10.3102/0034654317710096
Buss, A., & Gamboa, R. (2017). Teacher transformations in developing computational thinking: Gaming and robotics use in after-school settings. In P.J. Rich & C.B. Hodges (Eds.), Emerging research, practice, and policy on computational thinking (pp. 189-203). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-52691-1_12
Caballero, Y.A., & García-Valcárcel, A. (2017). Development of computational thinking skills and collaborative learning in initial education students through educational activities supported by ICT resources and programmable educational robots. In F.J. García-Peñalvo (Ed.), Proceedings of the 5th International Conference on Technological Ecosystems for Enhancing Multiculturality (p. 103). New York: ACM. https://doi.org/10.1145/3144826.3145450
Campbell, D., & Stanley, J. (1993). Diseños experimentales y cuasiexperimentales en la investigación social. Buenos Aires: Amorrortu.
Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M.M. (2017). Assessing elementary students’ computational thinking in everyday reasoning and robotics programming. Computers and Education, 109, 162-175. https://doi.org/10.1016/j.compedu.2017.03.001
Durak, H.Y., & Saritepeci, M. (2018). Analysis of the relation between computational thinking skills and various variables with the structural equation model. Computers & Education, 116, 191-202. https://doi.org/10.1016/j.compedu.2017.09.004
Elkin, M., Sullivan, A., & Bers, M.U. (2014). Implementing a robotics curriculum in an early childhood Montessori classroom. Journal of Information Technology Education: Innovations in Practice, 13, 153-169. https://doi.org/10.28945/2094
García-Peñalvo, F.J., Rees, A.M., Hughes, J., Jormanainen, I., Toivonen, T., & Vermeersch, J. (2016). A survey of resources for introducing coding into schools. Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM’16) (pp.19-26). Salamanca, Spain, November 2-4, 2016. New York: ACM. https://doi.org/10.1145/3012430.3012491
García-Valcárcel, A., & Tejedor, F.J. (2017). Percepción de los estudiantes sobre el valor de las TIC en sus estrategias de aprendizaje y su relación con el rendimiento. Educación XX1, 20(2), 137-159. https://doi.org/10.5944/educxx1.19035
Goodgame, C. (2018). Beebots and Tiny Tots. In E. Langran, & J. Borup (Eds.). Society for Information Technology & Teacher Education International Conference (pp. 1179-1183). Association for the Advancement of Computing in Education (AACE).
Hernández-Sampieri, R., Fernández-Collado, C., & Baptista-Lucio, P. (2014). Metodología de la investigación. México: McGraw-Hill Education.
Kandlhofer, M., & Steinbauer, G. (2016). Evaluating the impact of educational robotics on pupils’ technical- and social-skills and science related attitudes. Robotics and Autonomous Systems, 75, 679-685. https://doi.org/10.1016/j.robot.2015.09.007
Karampinis, T. (2018). Robotics-based learning interventions and experiences from our implementations in the RobESL framework. International Journal of Smart Education and Urban Society, 9(1), 13-24. https://doi.org/10.4018/ijseus.2018010102
Koning, J.I., Faber, H.H., & Wierdsma, M.D. (2017). Introducing computational thinking to 5 and 6 years old students in Dutch primary schools: An educational design research study. In C. Suero, & M. Joy (Eds.), Proceedings of the 17th Koli Calling Conference on Computing Education Research (pp. 189-190). New York: ACM. https://doi.org/10.1145/3141880.3141908
Kucuk, S., & Sisman, B. (2017). Behavioral patterns of elementary students and teachers in one-to-one robotics instruction. Computers & Education, 111, 31-43. https://doi.org/10.1016/j.compedu.2017.04.002
Lee, K.T., Sullivan, A., & Bers, M.U. (2013). Collaboration by design: Using robotics to foster social interaction in kindergarten. Computers in the Schools, 30(3), 271-281. https://doi.org/10.1080/07380569.2013.805676
Liu, H.P., Perera, S.M., & Klein, J.W. (2017). Using model-based learning to promote computational thinking education. In P.J. Rich, & C.B. Hodges (Eds.), Emerging research, practice, and policy on computational thinking (pp. 153-172). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-52691-1_10
Moro, M., Agatolio, F., & Menegatti, E. (2018). The RoboESL Project: Development, evaluation and outcomes regarding the proposed robotic enhanced curricula. International Journal of Smart Education and Urban Society, 9(1), 48-60. https://doi.org/10.4018/ijseus.2018010105
Ozcinar, H., Wong, G., & Ozturk, H.T. (Eds.) (2017). Teaching computational thinking in primary education. USA: IGI Global. https://doi.org/10.4018/978-1-5225-3200-2
Pittí, K., Curto-Diego, B., & Moreno-Rodilla, V. (2010). Experiencias construccionistas con robótica educativa en el Centro Internacional de Tecnologías Avanzadas. Education in the Knowledge Society, 11(1), 310-329. https://bit.ly/2MNPwls
Resnick, M., & Rosenbaum, E. (2013). Designing for tinkerability. In M. Honey & D.E. Kanter (Eds.), Design, make, play: Growing the next generation of STEM innovators (pp.163-181). New York: Routledge.
Schwabe, R.H. (2013). Las tecnologías educativas bajo un paradigma construccionista: un modelo de aprendizaje en el contexto de los nativos digitales. Revista Iberoamericana de Estudos em Educação, 8(3), 738-746. https://doi.org/10.5860/choice.51-1612
Seppänen, L., Schaupp, M., & Wahlström, M. (2018). Enhancing learning as theoretical thinking in robotic surgery. Nordic Journal of Vocational Education and Training, 7(2), 84-103. https://doi.org/10.3384/njvet.2242-458x.177284
Serholt, S. (2018). Breakdowns in children's interactions with a robotic tutor: A longitudinal study. Computers in Human Behavior, 81, 250-264. https://doi.org/10.1016/j.chb.2017.12.030
Tejedor, F.J. (2000). El diseño y los diseños en la evaluación de programas. Revista de Investigación Educativa, 18(2), 319-339.
Wing, J.M. (2006). Computational thinking. Communications of the ACM, 49(3), 33-35. https://doi.org/10.1145/1118178.1118215
Wing, J.M. (2008). Computational thinking and thinking about computing. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 366(1881), 3717-3725. https://doi.org/10.1098/rsta.2008.0118
Wong, G., Jiang, S., & Kong, R. (2018). Computational thinking and multifaceted skills: A qualitative study in primary schools. In Teaching computational thinking in primary education (pp. 78-101). USA: IGI Global. https://doi.org/10.4018/978-1-5225-3200-2.ch005
Zapata-Ros, M. (2015). Pensamiento computacional: Una nueva alfabetización digital. RED, 46, 1-47. https://doi.org/10.6018/red/45/4