Global trends in the use of technologies in education have focused on more specific areas of enhancement and application due to social transformations and the role of digital technologies in recent years. Some of these approaches emphasize computational thinking (CT) as an essential skill that everyone, not just computer scientists, should develop (Wing, 2006) to better understand the technologies and generate new forms of reasoning, creation, expression, and problem-solving (Resnick, 2013).
Although no consensus exists on how to define CT and its components (Tang et al., 2020), most concepts of CT refer to ways of thinking for formulating and solving problems that can be represented and processed through the use of machines (Chen et al., 2017). This approach began to garner attention several decades back with the contributions of Papert, who proposed that people needed to acquire the necessary skills for understanding and participating in the construction of the new computer culture and advanced uses of computers, notably programming (Tang et al., 2020; Webb et al., 2017).
CT is more than just solving informatics problems and programming with the computer, since it entails the comprehension of computational concepts that can be used to manage everyday life (Wing, 2006). A major challenge today, then, is to define what needs to be learned and how CT can best be taught in the classroom (Papert, 1998), as well as how to match that with students’ capacities and characteristics (Zhang & Nouri, 2019).
Technology by itself does not lead to change, and CT does not develop spontaneously through mere contact with computers, so educational proposals that include specific objectives and strategies to develop this learning are needed. As Papert (1987) noted, technology is not the key to improving education. Rather, it is the culture of thinking and learning that helps change people, and thus create the conditions for tackling twenty-first-century challenges.
In recent years, various educational efforts have been made in different countries to teach CT at the elementary and secondary levels (Grover & Pea, 2013). These initiatives have driven research to gain insights into learning achievements. Nevertheless, assessment of CT knowledge and practices is still being developed in educational systems, as it is crucial to the creation of instruments for achieving successful integration of CT in the curriculum (Bocconi et al., 2016; Grover & Pea, 2013; Román-González, 2015).
This points to another overriding challenge, considering that the different emerging studies have yet to converge on a universally agreed-upon concept of CT. The strategies used up to now have included scales or tests, analyzers of programmed outputs, achievement tests, and more qualitative techniques (interviews, field notes, focus groups, observations, etc.), all seeking to approximate key programming skills and concepts associated with CT (Atmatzidou & Demetriadis, 2016; Brennan & Resnick, 2012; Dagienė & Stupurienė, 2016; Leonard et al., 2016). There is agreement, however, that a general void exists in instruments and tools for measuring CT (Román-González, 2015), as well as in the conditions for ensuring their ecological validity (Salkind, 2010).
Despite the foregoing, the results are quite positive since it has been seen that student participation in educational interventions promoting aspects of CT leads to enhanced skills such as algorithmic thinking (Grover et al., 2015), self-efficacy in programming, creation and understanding of programming codes (Jun et al., 2017), algorithm development, the notion of action-instruction correspondence in a robot, and program debugging (García-Valcárcel & Caballero-González, 2019). The findings have in turn suggested intervening factors in the students’ results such as prior computer experience and math skills (Grover et al., 2015). The issue of gender influence is still under study since contradictory evidence has been found (Dagienė et al., 2014; Dagienė & Stupurienė, 2016; Kalas & Tomcsányiová, 2009), and a certain association between CT and students’ cognitive capacities has been mentioned (Ambrosio et al., 2014).
Given this scenario, it is important to continue developing educational proposals that specifically address CT, and identify the factors that could foster this kind of learning. Useful and valid assessment tools are also needed for incorporating CT into education and contributing to a theoretical understanding of this construct.
The goal of this study, then, is to provide evidence of factors that facilitate CT in elementary school students, including the potential contribution of LIE++, which addresses specific CT practices and knowledge, in comparison to the LIE-Guides proposal, where CT is not explicitly addressed. The following research questions were asked: 1) What are the factors associated with students’ results on a CT learning test? and 2) To what extent does LIE++ foster this learning in comparison with the LIE-Guides proposal?
Conceptualization of the educational proposals
Since 1988, programming has been included in Costa Rica’s public education system (from preschool to lower secondary school) through the National Educational Informatics Program (PRONIE MEP-FOD) implemented by the Ministry of Public Education (MEP for its initials in Spanish) and the Omar Dengo Foundation (FOD for its initials in Spanish). The goal has been to build students’ high-level cognitive capacities such as problem-solving and collaboration (Omar Dengo Foundation, 2016) to drive personal development in connection with the country’s technological, social, and economic growth (Fallas & Zúñiga, 2010).
This has been done primarily by including two weekly educational informatics lessons in the curriculum, taught by an informatics teacher in a computer laboratory. In 2009, LIE-Guides, a proposal based on student performance standards for learning with digital technologies, was implemented. LIE-Guides has emphasized skill-building with programming in project development (prioritizing the use of Scratch) based on the social appropriation of digital technologies (Muñoz et al., 2014). Three dimensions have been emphasized within this framework (Figure 1).
Due to rapid changes in recent years deriving from the scientific and technological revolution, in 2015 the program reformulated the proposal to what is known as LIE++. The distinctive feature of this initiative is its introduction of the explicit teaching of CT knowledge and practices in project programming with physical computing (i.e., the use of boards such as Arduino, Circuit Playground, and micro:bit) and collaborative work, bringing innovative equipment into the schools. The skills to be developed in students are grouped into five CT competencies (Figure 2).
The implementation of LIE++ has been progressive, entailing training and accompaniment of informatics teachers. Currently, the two proposals coexist while the transition is being completed in all the participating schools. Advantage has been taken of this period to learn from the implementation and move forward on developing CT learning assessment tools, with the idea of generating the information needed to improve the different program actions, and to report on the attainment of the stated goals.
Methods and materials
A quasi-experimental cross-sectional design was used to compare the scores obtained on a CT learning test in two interest groups: one group of LIE-Guides students and another group with at least one year of participation in the LIE++ proposal. In addition, other factors affecting the students’ test results were also explored.
At the time the data was collected (October 2019), PRONIE MEP-FOD was providing educational informatics to a total of 984 schools. However, only 532 met the requirements for the study due to the aforementioned transition from one proposal to the other. Of these, 210 schools were still implementing LIE-Guides and 322 were implementing LIE++.
After voluntary participation was ascertained, a sample of 348 schools and 14,795 sixth-grade students was obtained, covering 65% of the schools and 56% of the student population (Table 1).
Instrument and item design
In the initial study stage, indicators were created for the expected learning results of sixth-grade students, covering both CT and specific programming and physical computing content. These indicators were refined through the literature review and consultation with experts in the area until a final group of 18 CT-associated learning indicators was defined.
Since the design compared two different educational proposals, independent items were developed in a programming language (similar to the “Bebras” tasks in Palts et al., 2017) to ensure that potential differences in the results were not due to a lack of familiarity with a specific language. Most items were single-choice and associated with an indicator, giving a total of 20 items (e.g., Figure 3). These were constructed through the collaborative work of the researchers and the team responsible for implementing LIE++, including material review and discussion sessions. The content was also validated by judges who were computer science and programming experts, and by educational informatics teachers. Once all the necessary corrections were made to the items, cognitive interviews were conducted with six sixth-grade students participating in the proposals (three males and three females). This made it possible to further refine the items and verify whether the students were able to transfer their learning into their answers. In addition, a set of questions was added to the test to identify student characteristics.
Description of the variables
The dependent variable is the score that approximates the CT-associated learning achieved by the students on the test, based on a Rasch model and transformed to a scale of 100 to 900 points (the higher the score, the greater the skill), with an expected average of 500 points and a standard deviation (SD) of 100 points. Below are the independent variables that were considered (Table 2).
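The text does not give the exact transformation formula; a common approach, assumed here, is a linear rescaling of the Rasch ability estimates (logits) to the reporting metric, clipped to the stated 100-900 range. A minimal sketch:

```python
import statistics

def to_reporting_scale(thetas, target_mean=500, target_sd=100, lo=100, hi=900):
    """Linearly rescale Rasch logit estimates to a reporting scale with the
    given mean and SD, then clip to the scale's bounds (assumed procedure)."""
    m = statistics.mean(thetas)
    s = statistics.stdev(thetas)
    scaled = [target_mean + target_sd * (t - m) / s for t in thetas]
    return [min(max(x, lo), hi) for x in scaled]
```

For example, ability estimates one logit above and below the mean map to 600 and 400 points, respectively.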
Data collection procedure
The test was administered digitally with a duration of 50-60 minutes. The items were ordered according to difficulty, based on the information provided by students in the cognitive interviews, with the easiest ones first to prevent reluctance to take the test. A tutorial was prepared for teachers to give the test during their classes.
Prior to the data collection, the required permissions were obtained from the educational authorities and data confidentiality and the voluntary nature of student participation were ensured.
Test analysis and psychometric properties
To provide evidence of the validity and reliability of the scores obtained on the test, a rigorous process was followed for conceptual framework creation and item construction, and quantitative analyses were carried out using the R platform (version 3.3.2) and Winsteps (version 3.75.1).
Overall, the test was found to have adequate psychometric properties. The unidimensionality assumption was corroborated, and the items showed adequate internal consistency, discrimination, and difficulty. In addition, no bias favoring either gender was found in the results. Below are the statistical procedures that were used and the main findings that support these statements:
The exploratory factor analysis found that most of the rotated factor loadings are greater than 0.2, indicating an adequate association of the items with the construct. The scree plot showed the relevance of the first factor, which explained 11.84% of the variance, so unidimensionality was assumed based on the theoretical backing and these results.
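The quantity read off a scree plot, the share of variance carried by the first factor, can be approximated from the eigenvalues of the item correlation matrix. A simplified sketch (the actual analysis used rotated loadings, not reproduced here):

```python
import numpy as np

def first_factor_share(X):
    """Approximate the proportion of total variance explained by the first
    factor: leading eigenvalue of the item correlation matrix divided by the
    number of items (X: rows = students, columns = items)."""
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    return eigvals[0] / R.shape[0]
```

With perfectly correlated items the share is 1.0; with fully independent items it falls to 1/k, so a dominant first factor supports the unidimensionality assumption.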
Using classical test theory, most of the items were found to have acceptable discrimination (values above 0.12), and Cronbach’s alpha shows adequate internal consistency.
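The two statistics mentioned, Cronbach's alpha and item discrimination (here computed as the corrected item-total correlation), can be sketched as follows (illustrative only; the response data in the usage note are hypothetical):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a scored-response matrix X
    (rows = students, columns = items)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_discrimination(X):
    """Corrected item-total correlation: each item against the total score
    with that item removed (avoids inflating the correlation)."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```

For a small 0/1 response matrix, items whose corrected item-total correlation exceeds the 0.12 threshold mentioned above would be retained.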
In the differential item functioning (DIF) analysis, only one item was found to have a moderate effect in favor of males (Magis et al., 2010).
Using the Rasch model to estimate the students’ skill levels on the test, items were obtained with different levels of difficulty that are within the expected ranges, and the infit and outfit statistics suggest a good model fit (Linacre, 2002).
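Under the dichotomous Rasch model, the probability of a correct response depends only on the difference between student ability θ and item difficulty b, and the infit/outfit mean-square statistics compare observed responses against these model probabilities, with values near 1.0 indicating good fit (Linacre, 2002). A minimal sketch:

```python
import math

def rasch_p(theta, b):
    """P(correct) under the dichotomous Rasch model: logistic in (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def infit_outfit(responses, thetas, b):
    """Infit (information-weighted) and outfit (unweighted) mean-square fit
    statistics for one item, given 0/1 responses and ability estimates."""
    ps = [rasch_p(t, b) for t in thetas]
    resid2 = [(x - p) ** 2 for x, p in zip(responses, ps)]   # squared residuals
    info = [p * (1 - p) for p in ps]                         # item information
    outfit = sum(r / w for r, w in zip(resid2, info)) / len(responses)
    infit = sum(resid2) / sum(info)
    return infit, outfit
```

When ability equals difficulty, P(correct) is 0.5, and responses that match the model's expectations yield mean squares of 1.0.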
The differences in the average test scores of the two compared groups were explored with one-way ANOVA for each group, considering variables of interest. Finally, CT-associated factors were explored using a multilevel regression model to consider the nested data structure (Holmes et al., 2014).
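The one-way ANOVA comparison amounts to contrasting between-group and within-group variance; a minimal sketch of the F statistic (the multilevel regression model is not reproduced here):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares, with its degrees of freedom."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w
```

With two groups, as in this study, the F statistic has 1 and n-2 degrees of freedom, matching the form reported in the results section.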
The co-variables were chosen based on evidence in the literature or prior research experience. To obtain a better regression model fit, schools in which fewer than 15 students participated were excluded from these analyses, leaving a total of 297 schools. After cases with missing values in the considered variables were excluded, the analysis included a total of 13,213 students.
Participants’ sociodemographic and educational information
The total number of students who participated in the study (n=14,795) is characterized by a nearly equal gender distribution (49.3% females and 50.7% males) and an average age of 12 years (SD=0.64). Most reported a high level of access to technology: their own cell phone (84.9%), Internet on the cell phone (72.4%), and Internet at home (60.8%). However, only 25.2% said they used a computer at home at least three days a week, and this use tends to be more recreational.
Data revealed that students have an intermediate level on the cultural capital indicator (M=5.1, SD=2.3) since they have regular access to books at home, but their reading habit is infrequent. Educationally, according to the grade indicator (M=7.4, SD=2.5), students have good performance and only 3.4% say they had repeated at least one grade during elementary school. This percentage agrees with the national average (some 3%, according to the Ministry of Public Education [MEP], 2019).
In general, the study groups have similar characteristics, but differences were found in some context variables. The LIE++ students show more favorable characteristics: 83.9% attend schools in urban areas and in territories with a higher average IDS (M=64.3), compared to the LIE-Guides group, in which 70.4% belong to urban areas and places with a lower average IDS (M=53.9). These effects, however, were controlled for in the regression model.
As for the areas, it should be mentioned that 69% of the participating schools are urban and 31% are rural. This is because the data collection involved Internet use, which kept more rural schools from participating.
Performance test results by educational proposal
Although the average scores of both groups did not exceed the scale average (M=500), significant differences were evident between the two groups (Figure 4): LIE-Guides, 474.1 (SD=79.2) points vs. LIE++, 486.6 (SD=85.0) points (F(1, 13914)=53.08, p<.001).
In a first exploration, it was found that the LIE++ students performed better on the test than the LIE-Guides group (Figure 5). This difference in favor of the LIE++ students is maintained if they are from an urban area, have not repeated grades, have more years of participation in educational informatics (5 or 6 years), and correctly recognize the concept of programming. This trend holds regardless of gender.
The foregoing indicates that the type of student participation and variables in the proposal implementation itself might be contributing to the development of CT learning. The next section, however, specifies the factors with the most weight in these results. Regarding the test’s level of difficulty, the Rasch map shows that the same items proved difficult and easy for both groups. This is significant, since it rules out the possibility that the evaluated content was the reason for any differences between the groups, or that one group had an advantage over the other. In addition, upon delving deeper into the content of the items, the most complex ones were found to address programming topics (functions and code debugging), while the easiest ones refer to problem-solving and logical reasoning.
Contribution of the factors associated with the results
Based on the standardized coefficients (Table 3), the factors with the most influence on the students’ scores were found to be as follows: grade average, IDS, years of participation in educational informatics, gender, cultural capital, no repeated grades, participation in LIE++, and the faithful proposal implementation indicator.
In terms of the educational proposal, participation in LIE++ has a positive effect on the test results, since these students obtained a higher average than the LIE-Guides group (8.90 points higher on average). Other variables that help boost test scores are adherence to the proposal’s expected scheme of work (working collaboratively and doing programming projects) and having more years of participation in informatics classes.
Other factors that affect the results refer to home and school conditions and students’ intrinsic characteristics. Regarding home conditions, students with home Internet access and more cultural capital obtained better test results. The number of technological devices at home, however, produced an unexpected result: the more devices students had, the lower their test scores (a decrease of 1.71 points on average). This may be due to the more recreational use of technology at home reported by the students.
As for the school context, the more developed the area where the school is located, the better the students’ average test scores tended to be. Since 15% of the data variability is explained by the school the student attends, educational context is a factor that might be affecting the level of CT learning.
Concerning individual characteristics, students who had not repeated any grades and who had high grades in their subjects - both variables related to academic performance - were found to have better results on the CT test. As for gender, the average test score for males is higher than for females (10.87 points on average).
Discussion and conclusions
This study is a first approach to the factors affecting CT learning in elementary school students, including an analysis of the contribution of LIE++. It generates evidence for the region, where few studies report on the impact of technologies on the development of core competencies (Martínez-Restrepo et al., 2018). It also helps highlight elements from the LIE++ experience that could be considered by other initiatives when reflecting on the use of donated equipment and educational actions to further the development of advanced computer skills in education.

A key finding of this study is that, for both proposals, years of participation and faithful implementation have a significant impact on CT learning. This reflects the cumulative nature of the knowledge required for CT development and the importance of continuity in this kind of state program, along with adherence to what is set out in the proposals. LIE++ obtained better results than LIE-Guides. Although the difference in the scores of the two groups is small, it is the first evidence of the potential of LIE++, considering its recent implementation and the fact that the comparison group was also participating in a technology initiative.
The differences that were found could be explained by certain characteristics that distinguish LIE++ from LIE-Guides. In the first place, LIE++ explicitly addresses CT learning, which may achieve a stronger impact compared to other initiatives. This has been seen in other studies, such as the one by Román-González (2016), where informatics curricula aimed at CT literacy and development show moderate to large effects, unlike more traditional ICT curricula.

Secondly, LIE++ promotes a work dynamic that uses physical computing and student teamwork, seeking to create a practical, fun space through the building of physical computing projects and robots. In this regard, Sullivan & Bers (2018) and Caballero-González & García-Valcárcel (2020) point out that several experiences using robotics and programming help students learn computer science and engineering concepts and practices while permitting greater student engagement, even at an early age.

Thirdly, although there has been teacher training and assistance in both proposals, priority has recently been given to LIE++ support due to the transition process; in addition, LIE++ incorporates many of the lessons learned throughout the program’s course. Program implementers need to consider complementary strategies for ensuring the intervention’s adherence and sustainability. According to Martínez-Restrepo et al. (2018), one of the weaknesses in Latin America is that ICT interventions in education fail to achieve certain effects because they go no further than mere donation of equipment.

Another aspect to consider is that CT development is complex and multifactorial by nature: in addition to proposal-linked variables, the students’ personal and social factors have also shown influence on the results. Among the relevant intrinsic variables are academic performance and gender.
Although school success depends on many factors beyond the individual, it must be acknowledged that there are intrinsic characteristics of each student that favor learning to a greater or lesser extent, as in the case of fluid intelligence and other cognitive capacities that play a key role in this type of learning (Ambrosio et al., 2014).
As for differences due to gender, males were found to obtain higher test scores than females. This finding has not been consistent across studies. It has been found, though, that these differences in favor of males may be related to socialization and culture, where the idea has been promoted that the field of technology is predominantly masculine, creating a certain unwillingness and fear among women about tackling challenges in this field (Espino & González, 2016). It is therefore extremely important to establish inclusive, gender-bias-free educational proposals that consider the learning limitations and strengths of all students in order to reduce such gaps. These findings pose major challenges regarding measurement capacity, teachers’ adherence to the proposal, and the potential of educational strategies themselves to offset differences in student characteristics.
In terms of the home and school characteristics associated with better results, such as greater cultural capital and IDS, Jara et al. (2015) note that other studies have found the achievement of educational goals to be linked to the context’s economic, social, and cultural elements. It is therefore important to understand that students are immersed in a broader context that influences teaching and learning processes, including student motivation and conditions that strengthen learning in their immediate surroundings (e.g., parental support and education, access to resources, etc.). Despite the foregoing, variables related to technology use and access in the home (computers and Internet) did not have much effect on CT test scores, since even when students had technology available, they reported using it mostly for recreation. This reinforces the idea that consumption of technology is not enough for developing CT; rather, there must be actions or initiatives geared to teaching it (Zapata-Ros, 2015).
Today’s new generations are required to go beyond the mere consumption of technology and digital media, so CT includes skills needed for tackling twenty-first century social demands. Parallel to this, updated educational interventions intended to develop these competencies are required. To foster the desired learning, these should consider not only the educational proposals, but also the individual characteristics of the students and their surroundings.
Several difficulties were faced in this study, namely:
The educational context in which the proposal is being implemented: Although experimental designs have the greatest potential for demonstrating causal effects, the evaluated contexts do not always offer adequate conditions for them. PRONIE MEP-FOD has extensive national coverage (92.2% of daytime public education in December 2019), which limited the definition of a control group with no intervention at all. For this reason, comparisons of different proposals were used to better estimate the effects. An additional consideration is that LIE++ is in the initial stages of implementation, meaning that many teachers and students are still not familiar with this proposal.
Digital application: Due to the resources available for the study, digital data collection was prioritized. This limited the participation of all schools contemplated in the study, particularly those in rural areas where there are serious Internet connection problems.
As LIE++ becomes more consolidated, this type of study should be replicated to obtain more points of evidence of its impact on students. One way to define the control group, and to improve assessment conditions, is to partner with countries in the region without this type of intervention in their educational systems.
In addition, as a result of the study, a test was created to assess CT learning in elementary school students. In the future, the test should continue to be strengthened by improving its psychometric properties, expanding the number of items, strengthening the conceptual model, and exploring other associated factors. Also, as other studies have found, this type of test could be enriched with other instruments and tools for a more comprehensive assessment of CT.