
1 Introduction

Theories and research on embodied cognition hold that human cognition is fundamentally grounded in sensorimotor processes and bodily activity [10]. Unlike the traditional theory of cognition, which holds that the brain is central to all cognitive processes, embodied cognition describes how the mind, body, and world influence one another to create meaning. Wilson and Golonka [23] described embodied cognition as a continuous loop of perception and action in an environment. The mind is no longer treated as separate from the body, and perceptually rich experiences shape cognitive processes and allow individuals to construct meaning and understanding of the world [7]. Embodied cognition can thus be understood as a meaning-making process that occurs through embodied interaction with the physical world. The relationship between physical experience and cognition has been broadly demonstrated, for example, through the theory of affordances for action based on perception [8], the relationship between abstract concepts and bodily experience through metaphorical expression [12], inquiry-based discovery learning [2], and the importance of sensorimotor experience in cognitive development [18]. Jean Piaget [17] posited that cognitive structuring requires both physical and mental activity; particularly for infants, direct physical interaction with the world is a key constituting factor of cognitive development.

In the Human-Computer Interaction (HCI) field, embodied interaction is a term coined by Dourish [7] to capture several research trends and ideas in HCI around tangible computing, social computing, and ubiquitous computing, with an emphasis on the development of interactive experiences in the service of learning. It refers to the creation, manipulation, and sharing of meaning through engaged interaction with artifacts [7], and includes material objects and environments in the process of meaning-making and action formation [21]. Dourish used the term to describe an approach to interaction design that emphasizes understanding and incorporating our relationship with the world around us, both physical and social, into the design and use of interactive systems. The tangibles community within HCI has since built upon this interpretation of embodied interaction, focusing on the use of space and movement [9].

Contemporary technologies, such as tangible technology, allow more physical and immersive modes of interaction: more active learning and hands-on activities directly related to physical contexts, with new forms of communication and collaboration that promote socially mediated learning and foreground the role of the body in interaction and learning. The Tangible User Interface (TUI) approach proposes embedding computing elements in concrete materials, creating an educational resource that unites the advantages of physical interaction with the multimedia handling provided by technology. By enriching concrete materials, computational resources can engage several senses (sight, hearing, touch). Theories of learning and cognition offer a compelling rationale for using tangible and embodied interaction to support learning [14], being compatible with socio-constructivist concepts including hands-on engagement; experiential learning [2]; construction of models [15, 20]; and collaborative activity and transformative communication [16].

Many authors have designed tangible programming tools for young children that support the physical construction of a computer program by connecting or stacking parts, each representing an action to be performed by a robot [6, 22]. In these environments, a typical programming activity involves asking children to move the robot by creating appropriate computer programs. The children think mainly about the goal of the robot and how the robot will interact with the environment. However, there is another important aspect that these works have not considered: how the user physically interacts with the environment, and the effects of alternative embodied interactions on the learning experience. In this paper, drawing on the multimodal method of analysis, we investigate the embodied interaction of preschool children with a tangible programming environment to examine how embodied forms of interaction shape the learning experience. Multimodality offers a valuable approach for analyzing video data, as it allows the interpretation of a wide range of embodied communicational forms (posture, gaze, gesture, and manipulation, as well as visual representation and speech) that are relevant to meaning-making [19]. Learning scientists rely on the multimodal analysis of visual, audio, gestural, movement, and other data sources to draw inferences about what and how students are learning. For example, using videos, researchers have looked at facial expressions, gestures, tone of voice, and discourse to understand how people learn [1, 11]. The aim of this paper, then, is to look at how embodied forms of interaction play out differently across different children and to examine the implications for the knowledge construction process and meaning-making.

The context of this paper is an activity conducted by a group of researchers to introduce young children to programming through creative and meaningful experiences. We worked with two preschool classes in which we used the TaPrEC+mBot [3], a tangible environment designed for children to learn basic programming concepts as an engaging introduction to computer programming. The environment supports children in building physical computer programs by organizing tangible objects that involve basic programming concepts such as sequence, repetition, and procedures. We present and discuss aspects of the role of body position, gaze, manipulation, and speech of children interacting with the tangible programming environment. This paper is organized as follows: in Sect. 2 we describe the TaPrEC+mBot environment and its functioning, and we present the research methodology and the workshops conducted. In Sect. 3 we present the results of our case study. In Sect. 4 we discuss the results and findings. Finally, we present our conclusions and point to future work in Sect. 5.

2 Case Study: TaPrEC+mBot and its Usage

TaPrEC+mBot [3] is a tangible programming environment designed to provide children with an engaging introduction to computer programming. With it, children can build physical computer programs by organizing tangible objects and applying basic programming concepts such as sequence, repetition, and procedures. It consists of four parts (see Fig. 1, top): i) hardware: a notebook and a Radio Frequency Identification (RFID) system (tags and reader); ii) programming blocks: a set of colored, puzzle-like wooden blocks, each containing an RFID tag on one side and an embossed symbol on the other; iii) mBot, a robot car that we used to represent the physical output of tangible programs; and iv) software: a control program that we developed in mBlock, a Scratch-based programming software tool, to allow communication via Bluetooth between the programming blocks and the mBot.

Fig. 1. TaPrEC+mBot physical environment (top), system architecture (bottom)

The tangible program's information enters the TaPrEC+mBot environment through the RFID reader. When the user passes the RFID reader over each programming block, the identifiers of the RFID tags are sent to the control program. The control program checks whether they exist in the list of known RFID identifiers and sends them to the processing queue. Then, the control program sends to the mBot, via Bluetooth, the Scratch commands associated with each RFID identifier (see Fig. 1, bottom). Finally, the mBot executes the sequence of commands received in the physical world.
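The pipeline just described (scan tags, validate them, queue them, then dispatch the associated commands) can be sketched as follows. This is an illustrative sketch, not the actual TaPrEC+mBot implementation: the tag identifiers, command names, and the `send` callback are assumptions, and the real system sends Scratch commands over Bluetooth rather than calling a local function.

```python
from collections import deque

# Known RFID identifiers and the robot command each one stands for
# (hypothetical values for illustration only).
COMMANDS = {
    "tag-01": "START",
    "tag-02": "FORWARD",
    "tag-03": "BACKWARD",
    "tag-04": "TURN_LEFT",
    "tag-05": "TURN_RIGHT",
    "tag-06": "END",
}

def process_tags(tag_ids, send):
    """Validate scanned tag IDs, queue them, then dispatch each mapped command."""
    # Keep only identifiers that exist in the known list, as the control program does.
    queue = deque(t for t in tag_ids if t in COMMANDS)
    dispatched = []
    while queue:
        command = COMMANDS[queue.popleft()]
        send(command)  # in the real system: a Bluetooth message to the mBot
        dispatched.append(command)
    return dispatched

# Example: a scanned program [START, FORWARD, FORWARD, END] plus one unknown tag.
program = ["tag-01", "tag-02", "tag-99", "tag-02", "tag-06"]
executed = process_tags(program, send=lambda cmd: None)
print(executed)  # ['START', 'FORWARD', 'FORWARD', 'END']
```

Note that the unknown identifier (`tag-99`) is silently dropped at the validation step, mirroring the check against the list of RFID identifiers described above.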

2.1 Setting and Participants

The setting for the case study was the Children Living Center (CECI, the Portuguese acronym of Centro de Convivência Infantil). Located on the campus of the University of Campinas (UNICAMP), it is a space that provides education to infants and children from six months to six years old. Our study is part of a project approved by the university's research ethics committee under number 72413817.3.0000.5404. For this study, we worked with two preschool classes. The first class was composed of fifteen children (6 girls and 9 boys) aged four to five, with a mean age of 5.25 (SD = 0.27). The second class had ten children (4 girls and 6 boys) aged four to five, with a mean age of 5.18 (SD = 0.41). We held a 90-minute workshop with each group. The workshops were documented by video and photos.

2.2 Pilot Work with the Teachers

Before the children's workshops, we conducted a pilot with the teachers so that they could have their own experience with the TaPrEC+mBot environment. With the help of the teachers, we devised the activity for the children's workshops: the children had to program the robot car to cross a bridge (a paper bridge on the floor). During the pilot, the teachers suggested: i) using the following programming blocks: forward, backward, turn left, and turn right (see Fig. 2); and ii) changing the colors of the programming blocks so that children could easily differentiate between them (see Fig. 3).

Fig. 2. Pilot work with the teachers

Fig. 3. Programming blocks adapted by the teachers

To evaluate the experience with the tangible programming environment, we worked with the teachers to adapt the Emoti-SAM artifact for self-evaluation [4]. The adapted Emoti-SAM consists of 5 emoticons drawn on paper: the most positive option is a happy face with a thumb up and, in opposition, the most negative is a face of disappointment. The teachers also suggested adding a blank space so that the children could freely express their feelings about the activity through drawing as well (see Fig. 4). Children's drawings incorporate a variety of information about the child and his/her experience. As a child-centered evaluation tool, drawings can be advantageous: they are fun and attractive universal activities [24]; they are quick and efficient ways to elicit a large amount of accurate information, as no training or practice is required [13]; and they are easily produced by children who either may not be able to write proficiently (if at all) or may feel unsure about expressing themselves verbally to a researcher.

Fig. 4. Emoti-SAM adapted by the teachers (translated from Portuguese).

2.3 Workshops with the Children

The children had already experienced the TaPrEC+mBot in another workshop, so they were familiar with our tangible programming environment [5]. Seeking to identify how much the children remembered about the functioning of the TaPrEC+mBot environment, we began by asking them: "Can anyone explain how the programming blocks work?" We observed that the children only partially remembered how to use the programming blocks. We then explained that, to create a tangible program, it is necessary to organize the programming blocks in a specific sequence: first the "start block", then the blocks of movement, and finally the "end block". With each team, we demonstrated the functioning of each movement block with simple tangible programs. After the demonstration, we explained the activity, and the teams began to create tangible programs (see Fig. 5). At the teachers' suggestion, the children were organized into teams of three or four, so that while one team interacted with the TaPrEC+mBot in the playroom, the other teams waited inside the classroom, making drawings or doing other typical playtime activities.
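The ordering rule we explained to the children (start block first, then movement blocks, then the end block) amounts to a simple validity check over a sequence. The sketch below is hypothetical, using assumed block names rather than code from the TaPrEC+mBot control program, and it assumes a program needs at least one movement block.

```python
# Movement blocks available after the teachers' adaptation (assumed names).
MOVES = {"FORWARD", "BACKWARD", "TURN_LEFT", "TURN_RIGHT"}

def is_valid_program(blocks):
    """Check the sequencing rule: START, one or more movement blocks, END."""
    if len(blocks) < 3:               # needs START, at least one move, and END
        return False
    if blocks[0] != "START" or blocks[-1] != "END":
        return False
    return all(b in MOVES for b in blocks[1:-1])

print(is_valid_program(["START", "FORWARD", "TURN_LEFT", "END"]))  # True
print(is_valid_program(["FORWARD", "START", "END"]))               # False
```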

Fig. 5. Two teams of children programming the mBot robot

The teams spent between 15 and 25 min interacting with the TaPrEC+mBot. When each team's time with the environment was over, we closed the workshop by applying the Emoti-SAM adapted according to the teachers' suggestions. We asked the children to draw what they liked most about the workshop (see Fig. 6).

Fig. 6. Child filling out the adapted Emoti-SAM form

3 Results of the Case Study

The analysis was supported by the video recordings of the activities and aims to identify forms of embodied interaction in the experience with the tangible programming environment. In this paper, we use embodied interaction in its practical sense to denote direct, body-based interaction. It was apparent from the data that the teams of children chose to position themselves similarly around the environment: the children positioned themselves adjacent to one another with the programming blocks in front of them, and they sat on the floor for most of the interaction (see Fig. 5). Occasionally, however, some children got up and walked over to take a close look at the position of the robot with respect to the cardboard bridge. The children's gaze was directed at the programming blocks for long periods, and they watched the robot only when it moved during the execution of the tangible program. When creating tangible programs, they first built the program, then passed the RFID reader, and then looked at the actions of the robot. Our analysis aimed to explore ways in which these bodily differences may importantly shape the actions and strategies for the construction of tangible programs. Also, some children were observed to be more verbal than others, which in turn influenced their use of other embodied forms. Inspired by Price and Jewitt's work [19], a detailed transcription of the multimodal interaction of seven teams of children was then completed. This comprised a record of body positioning and changes in bodily position, manipulation of programming blocks, gaze, and speech. This enabled access to the following types of data: the process of creating a tangible program, the turn-taking during the activity, and the description of algorithms in verbal form.

3.1 Teams’ Activity

Team 1 (Girl1, Girl2, Girl3, Girl4): Their total interaction time was 15.36 min, and the team completed the task successfully (see Fig. 7). During the creation of the tangible programs, Girl4 was usually distracted and her gaze wandered in multiple directions. However, whenever her classmates passed the RFID reader over the tangible program, she joined the group and watched closely. The children were attentive to the sound of the RFID reader, and they even imitated the sound every time it passed over a programming block. When the robot moved, everyone was attentive as well. The tangible programs were created mainly by Girl1 and Girl3. Girl2 grabbed some programming blocks to create her own programs, but she did not test them. Girl4 passed programming blocks to Girl1 and Girl3. Throughout the interaction, they built 6 tangible programs, adding, removing, and changing the programming blocks. The changes were made quickly, by one girl and then by another, or simultaneously. Near the end of the task, when the robot was closer to the bridge, the four girls came together to form a circle around the programming blocks and programmed together. In most of the tangible programs, Girl3 was in charge of passing the RFID reader, but in the last program the RFID reader was operated by Girl2, who rejoined the group to complete the task.

Some of the girls' expressions related to the creation of the tangible programs describe what happened after the program was executed ("It went straight and then turned") or new steps to follow ("It needs to go further", "It has to turn here"). Talk between Girl1 and Girl3 denotes discussion of ideas about the creation of a tangible program:

Girl3: “I think it has to make a turn”

Girl1: “No, it has to go here”

Girl3: “So I’m going to put this”

Girl1: “But then the car goes here”

Girl3: “And this one?”

Girl1: “This is not backward. This is forward”

We observed that the girls used their arms to explain their programs. When they wanted to explain that the robot should go to the left, they moved their arms indicating that direction. They built a basic program and later added programming blocks. If the new blocks brought the robot closer to the bridge, they continued adding blocks; otherwise, they removed the blocks and continued with the basic program. They mainly used the "forward" block.

Fig. 7. Team 1 during the process of building tangible programs.

Team 2 (Girl5, Boy1, Boy2, Boy3): Their total interaction time was 20.03 min, and the team did not complete the task (see Fig. 8). Initially, the children positioned themselves side by side and then formed a circle to start programming. When they had to test a program, the children opened the circle to observe the movements of the robot. The children scattered the programming blocks around them, so when they needed a particular programming block, they had to look through all the pieces around them. Most of the tangible programs were created by Boy3. The other children helped by placing some programming blocks, suggesting the use of certain blocks, and finding and handing blocks to their classmates. The task of passing the RFID reader was performed mainly by Girl5 and Boy3. In total, the team created 16 tangible programs. The following is an excerpt from the conversation between Boy1 and Boy3 during the creation of a tangible program:

Boy1: “It’s just that the robot goes forward, forward, forward, turn around, and up here”

Boy3: “No, it has to turn around and stay straight and go there”

Boy1: “For me, it will go wrong”

We observed that Boy2 simulated the mBot's route by acting as if he were the robot itself, walking the route in small steps while talking about how the tangible program should be ("It walks forward, turn there, goes here, forward, stop here, then it goes up, it goes forward"). We also observed Boy3 moving his hands to indicate the movements that the robot should perform to accomplish the task. A strategy that Boy1 used to decide which programming blocks to use was to grab the robot and simulate the movements necessary to go up the cardboard bridge. In this way, he could also calculate the number of programming blocks needed.

Fig. 8. Team 2 during the process of building tangible programs.

Team 3 (Girl6, Boy4, Boy5, Boy6): Their total interaction time was 18.08 min, and the team completed the task (see Fig. 9). When the activity started, the children sat around the programming blocks except for Boy4; he approached the robot and watched it attentively, then joined the group to start programming. Boy5 observed the position of the robot and began to create a tangible program. Boy4 got distracted and started walking around the classroom. Boy6 got up, went to the robot to estimate the distance with his hands, and returned to the circle. The tangible programs were created by Girl6, Boy5, and Boy6. They worked together simultaneously, adding, removing, and changing the programming blocks, and they took turns passing the RFID reader. In the final part of the activity, Boy4 joined the team to create the last tangible program.

Fig. 9. Team 3 during the process of building tangible programs.

During the activity, Girl6, Boy5, and Boy6 talked about ideas to create tangible programs, as illustrated below:

Girl6: “And what if you put that block”

Boy5: “It [the robot] will hit the table; it has to turn the other way”

Boy5: “We need two [blocks] backward. Let’s see if it works”

Boy6: “It [the robot] will have to turn there”

Boy5: “The face [of the robot] has to turn in front of us”

Boy6: “No, it has to turn there.”

In total, the team built 12 tangible programs. The children's strategy to complete the task was to make programs with one, two, or three movements (e.g., [START, FORWARD, FORWARD, END]) and repeat the same program several times as the robot approached the bridge. When it was necessary to make turns, they placed the two turn blocks together to verify which of them to use (e.g., [START, TURN RIGHT, TURN LEFT, END]). One of the most interesting interactions we observed was Girl6 turning her body to imitate the movement of the robot. Besides, as in the other teams, the children used their arms and hands to indicate movements for their tangible program or to explain their program ideas to their classmates.

Team 4 (Boy7, Boy8, Boy9): Their total interaction time was 18.02 min, and the team completed the task (see Fig. 10). During the manipulation of the programming blocks, Boy9 was not attentive and was very introverted. However, when Boy7 executed the tangible programs and the robot moved, Boy9 approached the group to watch closely. Boy7 created almost all the tangible programs: eight of the nine programs were his. Boy8 helped by placing programming blocks in just one program, but he took charge of passing the RFID reader and giving suggestions to Boy7. During the interaction, the dialogues took place only between Boy7 and Boy8. The following is an example:

Boy7: “It has to go forward and turn”

Boy8: “I will remove this piece. I’ll put this block from here to turn over here”

Fig. 10. Team 4 during the process of building tangible programs.

The team's strategy to accomplish the task was to create programs with few movements (e.g., [START, FORWARD, TURN LEFT, END]) until the robot was very close to the bridge. Then they added more programming blocks and repeated the last program several times until the robot was able to cross the bridge (e.g., the program [START, FORWARD, FORWARD, FORWARD, FORWARD, FORWARD, FORWARD, END] was repeated twice). Their main body movements were hand gestures indicating the direction the robot should follow.
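To make concrete how repeating a short program advances the robot, the following toy simulator models the floor as a grid. This is an illustrative assumption rather than the TaPrEC+mBot software: we assume each "forward"/"backward" block moves one unit and each turn block rotates the robot 90 degrees.

```python
# Headings as (dx, dy) unit vectors: index 0 = north, 1 = east, 2 = south, 3 = west.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def run(program, pos=(0, 0), heading=0):
    """Execute one tangible program; return the robot's final (x, y) and heading."""
    x, y = pos
    for block in program:
        if block == "FORWARD":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif block == "BACKWARD":
            dx, dy = HEADINGS[heading]
            x, y = x - dx, y - dy
        elif block == "TURN_RIGHT":
            heading = (heading + 1) % 4
        elif block == "TURN_LEFT":
            heading = (heading - 1) % 4
        # START and END blocks delimit the program but cause no movement.
    return (x, y), heading

# Repeating [START, FORWARD x6, END] twice, as Team 4 did, advances the robot 12 units.
prog = ["START"] + ["FORWARD"] * 6 + ["END"]
pos, heading = run(prog)
pos, heading = run(prog, pos, heading)
print(pos)  # (0, 12)
```

Because the robot keeps its heading between runs, repeating a program compounds its effect, which is exactly why the repetition strategy worked for the straight stretch toward the bridge.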

Team 5 (Girl7, Boy10, Boy11, Boy12): Their total interaction time was 24.22 min, and the team completed the task (see Fig. 11). Everyone started picking up blocks to help build a tangible program, and each child tried to solve the task: Boy11 simulated the robot's movements with his hands on the floor and calculated the number of blocks for the tangible program, while Girl7 and Boy12 set up their programs separately. Boy12 was the first to test his tangible program. When the robot moved, everyone paid attention, and then they came together in a circle to program. Twelve of the sixteen tangible programs were created with the help of all the children; the others were created by Boy11 and Boy12. Boy10 and Girl7 were responsible for passing the RFID reader over the programming blocks.

Fig. 11. Team 5 during the process of building tangible programs.

During the process of creating tangible programs, the children talked about which programming blocks to put in the program and how many, as illustrated below:

Boy11: “The bridge is big then we have to put a lot of FORWARD”

Girl7: “Not so much!”

Boy12: “You should put 3 of these [FORWARD] and then turn here and then go up”

We identified that, as a strategy for solving the problem, the children placed the largest number of blocks and observed the movements of the robot. Then they completely dismantled this program and created another one with other combinations of programming blocks. When they got the robot close to the bridge, they created programs with just one movement (e.g., [START, TURN LEFT, END], [START, FORWARD, END]) to prevent the robot from moving away from the objective. When the robot was already on the bridge, they created a program with nine "forward" blocks to get the robot to cross it. We observed that when Boy10 and Boy12 explained their tangible programs, they placed their hands on the floor pretending to be the robot and traced the robot's path themselves (see Fig. 11).

Team 6 (Boy13, Girl8, Girl9): Their total interaction time was 15.04 min, and the team completed the task (see Fig. 12). At the beginning of the activity, Girl8 and Boy13 programmed separately, each creating their own tangible program, while Girl9 watched her classmates. There was competitiveness in grabbing the programming blocks. Girl8 was the first to test her tangible program, followed by Boy13. Once the robot began to move, Girl9 showed interest and began to create her own tangible program. They worked separately and tested their programs one after the other until Girl8 got the robot very close to the bridge. The children then began to help Girl8 by giving suggestions and passing the programming blocks. In total, the children built 8 tangible programs, and Girl8 manipulated the programming blocks the most.

Fig. 12. Team 6 during the process of building tangible programs.

In this team, there was little dialogue between the members. Here are some examples:

Girl8: “To go forward, what is the block”

Girl9: “It’s pink”

The children built programs with just one movement when they wanted the robot to make turns (e.g., [START, TURN LEFT, END], [START, TURN RIGHT, END]) and programs with several programming blocks when they wanted the robot to advance across large spaces (e.g., [START, FORWARD, FORWARD, FORWARD, FORWARD, FORWARD, FORWARD, END]). This team showed body movements that accompanied the creation of tangible programs, for example, hand movements (see Fig. 12c).

Team 7 (Boy14, Girl10, Boy15): Their total interaction time was 25.47 min, and the team completed the task (see Fig. 13). Throughout the activity, the team worked in two separate subgroups: Boy15 built his programs alone, while Girl10 and Boy14 built theirs together. In total, they created 25 tangible programs: 5 by Boy15 and 20 by Girl10 and Boy14. Most of the programs were simple, with just one movement (e.g., [START, TURN RIGHT, END]). Girl10 and Boy15 approached the bridge to observe the position of the robot and identify which type of programming block they should use.

Fig. 13. Team 7 during the process of building tangible programs.

This team had little verbal exchange related to the creation of tangible programs:

Boy14: “It has to go forward and then there”

Girl10: “Now it has to go back”

We mainly observed that the children used arm and hand movements to explain their programs.

3.2 Feedback from the Adapted Emoti-SAM

Regarding the adapted Emoti-SAM, Table 1 shows the children's responses about their experience in the workshops. The children indicated their affective states about the workshop on a range of emoticons by painting the desired one. Seventeen of the twenty-five children opted for the emoticon representing the greatest happiness, five children opted for the second emoticon on the scale, and three children painted the saddest emoticon. The results indicate that the activity was considered pleasurable and enjoyable by most of the children. To analyze the Emoti-SAM drawings correctly, the children were asked individually to talk about what they had drawn. We quantified the drawings made by the children (see Table 2). Samples of the Emoti-SAM drawings are illustrated in Fig. 14 and Fig. 15.

Fig. 14. Girl's explanation: “She is me, the little pieces of command, the bridge and the robot”.

Fig. 15. Boy's explanation: “The little pieces of the command, the robot is going up the bridge, the bridge, the robot's path across the bridge, then the robot goes straight”.

Table 1. Emoti-SAM results from all children
Table 2. Quantification of the children’s Emoti-SAM drawings

4 Discussion

A multimodal methodological approach focuses on the role of various modes, such as posture, gaze, action, and speech, in the meaning-making process, and on the interaction between these modes. We collected data regarding the process of creating a tangible program to investigate the position, action, and speech of children trying to accomplish a task together. Regarding the children's behavior during the workshop, we observed that trying to control robots through programming was a very exciting process for them. The verbal and bodily manifestations we observed show that most children interacted with the environment with great freedom and enthusiasm, i.e., creating their own strategies instead of following instructions. Their body postures, lying down on the floor to watch the robot's movements closely, suggest they were very relaxed and yet attentive. Eight of the twenty-five children who participated in the case study approached the robot to observe more closely its position in relation to the cardboard bridge. All the other children remained in or near the place chosen at the beginning of the interaction, within reach of the programming blocks.

Our analysis suggests that the children's physical approach toward the robot and the cardboard bridge helped them define which type of programming blocks to use and how many. We observed three types of strategies that the children used to choose the programming blocks: i) acting as if they were the robot and carrying out the path that the robot should travel to complete the task; ii) using their hands to indicate the directions that the robot should follow; iii) grabbing the robot and simulating the route that the robot should travel. Different positions afford different opportunities for interaction; for example, the children who grabbed the robot could discover that the robot has a little face that represents its front. This was an important fact, as it let them build programs according to where the robot's face was pointing. The children who stayed in their initial position observed the position of the robot from a distance and used arm and hand movements to indicate the movements that the robot should make, and this helped them choose the appropriate programming blocks for the tangible program.

Regarding the children's speech, we observed that each group had a different level of communication: while some groups frequently talked to define each robot movement, members of other groups preferred to act separately and then test their ideas in practice. The children who pretended to be the robot usually recounted its movements as they walked the path the robot should take. Many of the explanations given by the children were accompanied by body movements; for example, when they said "the robot must go to this site", the children did not say "left" or "right", but moved an arm or hand in the direction in which they wanted the robot to move. Generally, the programming blocks were scattered on the floor so that any child could grab them. This meant that, regardless of their location in the group space, the children had equal access to the programming blocks. Each child could pick up blocks and place them in the program easily and simultaneously, or at similar times, which led to clashes of action and ideas, and to repositioning or removing others' blocks when creating a tangible program. While their articulations mostly comprised instructions and basic descriptions of algorithms, some teams extended their types of dialogue to include explanations and predictions.

One difference between the teams is the number of tangible programs they created to accomplish the same task. Creating more programs offers the opportunity to explore a greater number of combinations of movement sequences, potentially exposing the children to more experiences. On the other hand, the teams that created a greater number of tangible programs reduced their reflection time before changing any programming block, which caused errors in completing the task. We observed that an adequate reflection time was important for the children to understand the function of the programming blocks, allowing them to consider how they created their program and to choose correctly which programming block to use.

Regarding the children's emotions, we used the adapted Emoti-SAM to allow the children to express their opinions and feelings about the workshop and our tangible programming environment. Most of the children indicated in the Emoti-SAM that they liked the environment very much, with a median score of 5.0, which was consistent with what we observed. The children made drawings symbolizing what they liked most in the workshop. Twenty-one of the twenty-five children drew the mBot; this may mean that the robot was the center of attention for them. Some of the Emoti-SAM drawings represent a dynamic scene (for example, the mBot crossing a bridge), which suggests a high level of involvement with the experience. It is also interesting to note that some children projected themselves into the scene of programming the robot.

According to our observations, embodied interaction appeared in: i) program planning, when the children moved their hands to indicate directions while explaining their ideas for building the tangible program (e.g., Fig. 7d, Fig. 8a); ii) algorithm building, when the children used arm, finger, or hand movements to identify which programming block should be added to their tangible program (e.g., Fig. 8b, Fig. 13d); iii) simulation of the tangible program, when the children used full-body movement with locomotion to trace the path the robot would travel (e.g., Fig. 8c, Fig. 11c); iv) debugging of the tangible program, when the children used their hands on the floor to calculate the number of programming blocks needed to correct a previous program (e.g., Fig. 11b); and v) execution of the tangible program, when the children imitated the movements that the robot made (e.g., Fig. 9b).

5 Conclusion

TaPrEC+mBot is a technological environment designed in response to current educational challenges that highlight computational thinking as an important skill for children to develop in order to learn and, hopefully, appreciate science and technology later. This paper explored a multimodal approach to analyzing embodied interaction in a tangible programming environment with children aged 4–5 years. The results show that the children exhibited different forms of bodily interaction, demonstrating different forms of manipulation, strategies, and verbal articulation. The relationship between these modes seems both to influence and to be influenced by the creation of tangible programs, and the verbal articulation gives an idea of the students' thinking and planning. Future studies might examine the use of wearable technology to provide embodied experiences for introducing programming concepts.