Abstract
The design of interactive systems that accounts for the impact of the technology on the human agent as well as the effect of the human experience on the technology is not a trivial task. Our investigation works towards a vision of socioenactive systems, by supporting and identifying how a group of people can dynamically and seamlessly interact with technology. In this paper, we elaborate a set of guidelines to design socioenactive systems. We apply them in the construction of a technological framework situated in an educational environment for children around the age of 5 (N = 25). The scenario was supported by educational robots, programmed to perform a set of actions mimicking human emotional expressions. The system was designed to shape the robots’ behavior according to the feedback of children’s responses in iterative sessions. This entails a complete cycle, in which the robot impacts the children and is affected by their experiences. We found that children create hypotheses to make sense of the robot’s behavior. Our results present original aspects related to a socioenactive system.
1 Introduction
Advanced interactive systems entail complex interaction scenarios as well as research challenges that demand the consideration of new factors to design and guide the interaction. Kaipainen et al. [12, 13] have drawn the outlines of a multidisciplinary research agenda focusing on dynamically coupled human and technological processes. They defined the concept of an enactive system, based on a ubiquitous computing approach [18, 19] to Bruner’s idea of enactment [1].
This approach is recursive by nature, involving the impact of the technology on the human agent as well as the effect of the human experience on the technology. Our investigation expands this concept to a socioenactive vision, which goes further by supporting and identifying how a group of people can dynamically and seamlessly interact with the technology. The conception and experimentation of such a system present several open research questions, for instance, how to adapt the design of socioenactive systems to specific domain scenarios.
Our research scenario is an environment of complementary education for children around the age of 5, enrolled in the Division of Early Childhood and Complementary Education of the University of Campinas [3]. In this context, we worked with educational robots, in particular the mBot [4], a robot kit that enables programming via Scratch.
In this paper, we propose a set of socioenactive design guidelines and apply them to a system in the educational environment. We assume that the system’s behavior must be driven and shaped by users’ input and sense-making. For this purpose, we adapted the set of objectives for designing enactive systems from Kaipainen et al. [12], establishing a series of guidelines for the design of socioenactive systems. Following the guidelines, we mapped 6 human expressions (happiness, sadness, disgust, surprise, anger and despise) to emojis and to their respective technological representations in the educational robot.
On this basis, we designed and evaluated a first version of a socioenactive system, running a series of iterative sessions consisting of the following steps: a child secretly performs one of the mapped expressions to a camera; the expression is identified and fed into the system; the system determines, for that moment, which action must be executed by the robot; the other children hypothesize which expression led the robot to take that action; and their responses are fed back into the system, influencing the next cycle of interaction.
We analyzed the study’s data relying on both the system’s behavior and the participants’ responses. Our study explains how children, working collaboratively as a group, created meaning for the performed actions. We found patterns related to the diversity (lack of unanimity) of the robot expressions identified by the children, a clear preference for one expression (happiness), and a better performance of children when identifying the robot’s expression than when identifying the expression performed by the child in the cardboard box.
Although recent literature has presented alternatives to tailor a system’s results according to students’ performance [9] and to improve Educational Robotics (ER) [16], we advance the state of the art in the design and evaluation of systems with a dynamic interactive coupling between people’s interaction and the system’s behavior.
The remainder of this article is organized as follows: Sect. 2 presents the foundations and related work. Section 3 thoroughly describes the adopted methodology by presenting the experimental design, the participants and the studied application scenario. Section 4 presents the results and Sect. 5 discusses the obtained findings. Finally, Sect. 6 presents the concluding remarks and our envisioned future work.
2 Background
Kaipainen et al. [12] define as enactive a system that “is recursive by nature, involving the impact of the technology on the human agent as well as the effect of the human experience on the technology”. The following research questions were considered to lead the design of enactive systems [12]:
- What if the interaction experience would modify the content, thus constituting a self-controlling system?
- What would be the proper metadata ontologies to account not only for pre-existing content categories, but also for those that can emerge in such recursive system dynamics?
Kaipainen et al. [12] further defined several objectives that could be used to lead the development of such systems. In this investigation, we organized the objectives into guidelines (named here from G1 to G4) as follows:
- G1: definition of a database or rule set to support the generation of behavior in real time;
- G2: definition of technologies supported by sensors to detect and track participants’ behavior;
- G3: mapping between psycho-physiological dimensions of content;
- G4: an algorithm to manage the narrative montage in real time.
We propose to include social aspects into the model to expand the original enactive concept defined by Kaipainen et al. [12]. In this sense, our proposal considers not only the individual, traditionally the key element in the interaction process with technology, but also the impact of the social interactions performed by a group of individuals in such an environment. We defined this concept as “socioenactive” (Footnote 1).
A key research challenge in achieving socioenactive systems refers to the difficulty of capturing, modeling, and interpreting human aspects such as emotion and the social environment. Ontologies are one alternative to achieve this goal, since they represent semantics in computational systems by describing concepts and the interrelationships among them.
Web Ontologies have been designed to provide rich, machine-decidable semantic representations [6,7,8, 10]. They are formal specifications of a domain, formalizing its conceptualization in terms of classes, properties and relationships between classes.
Soft Ontology is another conceptual approach: in contrast to Web Ontologies, with their fixed hierarchies described in the Web Ontology Language (OWL) [14], a soft ontology is a flexible set of metadata [12, 13]. This is useful for representing dynamically evolving information domains, as well as for representing and interpreting psycho-physiological states, including, for instance, the emotions [15] of the participants involved in the interaction. These ontologies present individual elements associated with values in a hierarchy that is not structured a priori. They should evolve along the recursive cycle, thus impacting the human agent and being affected by the human experiences.
With the objective of implementing these flexible solutions, ontology-based enactive systems are frequently based on fuzzy models [17]. Other studies propose the use of ontology networks, which conciliate models of several types of ontological representations, including soft and hard ontologies [5].
In this work, we rely on the concept of Soft Ontology to develop a behavior matrix representing the meaning of the robot’s actions. The robot’s behavior is based on the children’s collective assignments, gathered round by round from the interaction in the environment.
3 Methodology
This investigation aims to answer the following research questions:
- RQ1: how can the enactive guidelines proposed by Kaipainen et al. be adapted to the design of socioenactive systems?
- RQ2: what are the first impressions from the execution of a socioenactive system in an educational environment?
In the following, we present the experimental design of a socioenactive system (Subsect. 3.1), followed by the educational setting where the study was conducted and the description of the participants (Subsect. 3.2). Subsection 3.3 presents the workshop environment and dynamics. Subsection 3.4 presents how the data collected in the study were analyzed.
3.1 Experimental Design
The experimental research design was organized to adapt the enactive systems’ goals proposed by Kaipainen et al. [12] as guidelines to support the design of socioenactive systems. Table 1 presents the 4 proposed key guidelines (G1, G2, G3 and G4) underlying our experimental design.
Figure 1 shows an adaptation of the enactive system scheme [12, 17] towards the socioenactive system organization explored in this study. It shows the socioenactive feedback cycle, starting with G1, used to support the implementation of the socioenactive system instance (cf. Subsect. 3.2). The guidelines were mapped to a behavior matrix (G1), the mBot [4] (G2 and G4) and the mapping of human facial expressions (G3). The social component is represented by the children themselves.
3.2 Study Scenario and Participants
In total, 25 children, aged 4 to 5 years old, participated in this study. All children were enrolled in the Division of Early Childhood and Complementary Education of the University of Campinas [3]. The children came from two separate classes, morning and afternoon, referred to from now on as Group 1 and Group 2, with 13 and 12 students, respectively. Each group had a different teacher.
In this study, all parents signed a Term of Consent (Footnote 2), allowing the participation of the children and data collection through video and images. All children assented to participate and signed the Term of Agreement with the help of the teachers.
Initially, in a brainstorming session with the teachers, we defined 6 expressions that would make sense in the children’s context: happiness, sadness, disgust, surprise, anger and despise. Each expression was associated with an emoji (Footnote 3) (cf. Fig. 2).
To contextualize each emoji expression for the children, the teachers mapped parts of an adapted version of the “Little Red Riding Hood” story to each of the emojis. The teachers then organized storytelling sessions with the children, showing them the respective emoji plaque when required (i.e., at the associated parts of the story). For example, a plaque with the “surprise” emoji was shown to the children in the scene in which the wolf reveals his disguise to Little Red Riding Hood. Table 2 illustrates how some parts of the story were mapped to the emojis:
Related to guideline G3 (cf. Table 1), Fig. 3 shows the expression mapping, in which each emoji (first row) was mapped to a segment display image programmed in the mBot software (second row). The third row shows the mBot display mapped to each of the emojis.
Related to guideline G1 (cf. Table 1), our behavior matrix represents domain concepts and ontology dimensions. This matrix is based on the idea of ontological dimensions (ontodimensions) and ontospaces proposed by Kaipainen et al. [12]. Their proposal focused on the representation of collaborative tagging practices, with the tags representing ontodimensions and the tag space, ontospaces.
The behavior matrix is our ontological solution to represent knowledge about the emotional expressions and the set of behaviors that the robot can perform. Table 3 shows our solution, which explores a probability matrix to represent the association between the robot’s actions (rows) and the emotional expressions (columns). Each cell represents a weighted probability of the robot executing an action for a given emotional expression. The matrix provides flexible and fuzzy behavior for our robot solution. New actions can be dynamically included by inserting new rows, and new emotional expressions can be inserted as new columns at runtime.
Here, the concepts refer to the defined emotional expressions and the ontology dimensions to the robot’s actions. Each matrix element stands for the probability of relating a concept with an ontology dimension. All values of the matrix were initialized with a default probability. Along the execution of the workshop dynamics (cf. Subsect. 3.3), driven by the answers input by the children, the probabilities are adjusted to represent the children’s understanding of the correlation between concepts and ontology dimensions. This dynamic, participant-driven behavior of the matrix characterizes a socioenactive system, because the association of the robot’s actions with the emotional expressions is modeled according to the social interaction context and people’s contributions.
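To make the structure concrete, the sketch below shows one minimal way such a behavior matrix could be implemented. The paper defines the matrix conceptually (cf. Table 3); the class, names and default weight below are our illustrative assumptions, not the authors’ actual code.

```python
class BehaviorMatrix:
    """Minimal sketch of the behavior matrix: rows are robot actions,
    columns are emotional expressions, cells are weights (cf. Table 3)."""
    DEFAULT_WEIGHT = 1.0  # assumed: every action/expression pair starts equal

    def __init__(self, actions, expressions):
        self.actions = list(actions)          # rows
        self.expressions = list(expressions)  # columns
        self.weights = {a: {e: self.DEFAULT_WEIGHT for e in self.expressions}
                        for a in self.actions}

    def add_action(self, action):
        """Insert a new row at runtime."""
        self.actions.append(action)
        self.weights[action] = {e: self.DEFAULT_WEIGHT
                                for e in self.expressions}

    def add_expression(self, expression):
        """Insert a new column at runtime."""
        self.expressions.append(expression)
        for a in self.actions:
            self.weights[a][expression] = self.DEFAULT_WEIGHT

# Example instantiation with the six expressions used in the study:
matrix = BehaviorMatrix(
    actions=[f"action {i}" for i in range(1, 7)],
    expressions=["happiness", "sadness", "disgust",
                 "surprise", "anger", "despise"])
```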
Related to guideline G4 (cf. Table 1), the mBot was programmed to perform a set of actions for each of the expressions. The set of actions for each expression was designed with the help of the teachers, aiming to give a realistic emotional aspect to the mBot. Table 4 shows the algorithm related to each action programmed in the mBot. Guideline G2 was not adopted in this study (i.e., the robot’s sensors were not employed).
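The actual routines were implemented in the mBot’s Scratch-based environment and are listed in Table 4; the sketch below only illustrates the general shape of one such routine. The robot interface (set_display, move, play_tone) is a hypothetical placeholder, not the real mBot API.

```python
import time

def action_happiness(robot):
    """Hypothetical sketch of one action routine: show the 'happiness'
    face from Fig. 3, wiggle in place, and beep. The real action set,
    designed with the teachers, is in Table 4; 'robot' and its methods
    are placeholders for the mBot programming interface."""
    robot.set_display("happiness")     # segment-display face (cf. Fig. 3)
    for _ in range(2):
        robot.move("left", speed=80)   # short turn left...
        time.sleep(0.5)
        robot.move("right", speed=80)  # ...then right, as a playful wiggle
        time.sleep(0.5)
    robot.play_tone(880, 0.3)          # short, high-pitched beep
```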
3.3 Workshops Environment and Dynamics
We conducted workshops to evaluate our socioenactive system. The study environment was composed of: a cardboard box, presented to the children as a “telepathic box”, equipped with a camera and isolated from the other parts; a stage for the robot to perform its actions; an audience area for the children; and a table where the children chose (voted) which expression they thought their friend had made inside the “telepathic box”. Figure 4 shows the organization of the workshop environment.
The workshop dynamics was organized in the following steps:
- Step 1: A child is randomly selected to mimic an emotional expression in the “telepathic box”; over the course of the workshop, each child is selected once.
- Step 2: The selected child chooses a plaque and mimics the corresponding emotional expression in front of a camera.
- Step 3: The system triggers an action in the robot based on the recognition of the expression performed by the child. Considering the difficulties related to real-time image processing and facial recognition, we adopted a Wizard of Oz approach [2]: after the child in the “telepathic box” mimics the expression, a researcher signals to the other researchers which expression was performed.
- Step 4: The children in the audience area watch the robot performing the action.
- Step 5: The teachers ask the children in the audience area: “What expression do you think your friend made in the telepathic box that triggered this action on the robot?”. Each child privately selects, in his/her own turn, an emoji expression from a pool with the 6 available expressions. Each emoji was internally identified with an RFID tag (a sketch of this instrumentation appears after this list).
- Step 6: The data collected in Step 5 are used to update the system’s behavior matrix.
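To illustrate how the selections of Step 5 could reach the system, the sketch below maps RFID tag reads to expressions. The paper only states that each emoji plaque carried an RFID tag; the tag identifiers and the read_tag callable are hypothetical.

```python
# Hypothetical instrumentation of Step 5: the tag UIDs and read_tag()
# are placeholders (the paper does not describe the reader hardware).
TAG_TO_EXPRESSION = {
    "04:A1:01": "happiness",
    "04:A1:02": "sadness",
    "04:A1:03": "disgust",
    "04:A1:04": "surprise",
    "04:A1:05": "anger",
    "04:A1:06": "despise",
}

def collect_votes(read_tag, n_children):
    """Read one RFID emoji per child in the audience and return the list
    of chosen expressions, which feeds the matrix update in Step 6."""
    return [TAG_TO_EXPRESSION[read_tag()] for _ in range(n_children)]
```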
Figure 5 shows the “telepathic box” (left) and a child selecting an emoji expression (equipped with an RFID tag) corresponding to the expression he/she thought the friend had performed inside the “telepathic box”.
The behavior matrix was initially configured to select random actions (Step 3) for a given emotional expression, with all ontological dimensions having equal probability. It is initialized with default values (cf. Table 3), which are then adjusted according to the children’s behavior. The fuzzy behavior is provided by the weighted random selection of the robot’s actions. For this purpose, we used the weighted values associating actions to an emotional expression (i.e., the values of a column). The weights are adjusted according to the children’s feedback during Step 6 of the workshop. This completes a cycle in which the robot impacts the children and is affected by their experiences.
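A weighted random draw over one column of the matrix is enough to produce this behavior. The sketch below is a minimal version, assuming the BehaviorMatrix structure sketched in Subsect. 3.2.

```python
import random

def select_action(matrix, expression):
    """Weighted random selection of a robot action (Step 3): actions with
    higher weights in the expression's column are more likely, but any
    action can still be drawn, which provides the fuzziness described
    above."""
    column = [matrix.weights[a][expression] for a in matrix.actions]
    return random.choices(matrix.actions, weights=column, k=1)[0]
```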
As the rounds proceed, our algorithm balances the probabilities among the robot’s actions according to the audience’s feedback (Steps 5 and 6). For instance, given an “action x” executed by the robot in response to a child’s expression in the “telepathic box”, if most of the children selected a “happy emotion” for that action, the system would increase the probability of association between “happy emotion” and “action x”. In this sense, the robot’s behavior is based on the children’s collective feedback, as it relies on the assignments received in each round.
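The paper does not report the exact update formula, so the sketch below is only one plausible rule consistent with the behavior described in Sect. 4.1: reinforce the executed action when the majority vote matches the performed expression, and weaken it otherwise. The step size LEARNING_STEP is an assumed parameter.

```python
LEARNING_STEP = 0.1  # assumed step size; the paper does not report its value

def update_matrix(matrix, performed_expression, executed_action, votes):
    """One plausible feedback rule for Step 6. 'votes' is the list of
    expressions chosen by the audience (e.g., from collect_votes)."""
    majority = max(set(votes), key=votes.count)  # most voted expression
    column = performed_expression
    if majority == performed_expression:
        # the action represented the expression well: reinforce it and
        # weaken the alternatives in the same column (cf. iteration 3)
        for a in matrix.actions:
            if a == executed_action:
                matrix.weights[a][column] += LEARNING_STEP
            else:
                matrix.weights[a][column] = max(
                    0.0, matrix.weights[a][column] - LEARNING_STEP)
    else:
        # mismatch: the action was a poor fit for that expression
        matrix.weights[executed_action][column] = max(
            0.0, matrix.weights[executed_action][column] - LEARNING_STEP)
```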
As mentioned earlier, we carried out studies with two groups, Group 1 and Group 2, related to the morning and afternoon classes, respectively. The study with Group 1 was used to calibrate the behavior matrix, test it, and correct errors. The matrix was effectively put into practice only during the study with Group 2.
3.4 Employed Analyses
All workshop sessions were filmed, producing recorded data for analysis. We emphasize two distinct analyses. The first concerns the behavior matrix data: we stored the matrix status at each iteration, together with the children’s answers, to understand the consistency and convergence of the concepts in relation to the ontology dimensions (Subsect. 4.1 presents the obtained results). The second concerns the children’s behavior data: here, we aimed to understand the different expressions assigned by the children over the iterations of the workshops. For this purpose, we counted each participant’s answer in each iteration (Subsect. 4.2 presents the obtained results).
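A minimal sketch of the corresponding data logging, assuming the structures sketched above; the paper does not describe its actual recording code.

```python
import copy
from collections import Counter

matrix_history = []  # matrix snapshots per iteration (first analysis)
vote_tallies = []    # per-iteration distribution of answers (second analysis)

def log_iteration(matrix, votes):
    """Store the data behind Sects. 4.1 and 4.2: a deep copy of the
    behavior matrix and a tally of the children's chosen expressions."""
    matrix_history.append(copy.deepcopy(matrix.weights))
    vote_tallies.append(Counter(votes))
```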
4 Evaluation Results
Section 4.1 presents an analysis of the behavior matrix data. Section 4.2 focuses on the children’s behavior during the Group 2 study, since this group used a stable version of the behavior matrix.
4.1 Behavior Matrix Analysis
Table 5 presents the status of the matrix after the first iteration (with Group 2). For example, in Table 5 there is a lower probability of the robot executing action 1 in response to a child who performed the disgust expression in the “telepathic box”.
Table 6 presents the matrix values after four iteration rounds. In the first iteration, a child expressed disgust in the “telepathic box”, leading the robot to execute action 1, related to happiness (cf. Table 4). On average, the other children then chose, through the RFID emojis, an expression different from the one performed by the child in the “telepathic box”. Thus, the weight for action 1 was decreased, i.e., it was not a good action to associate with the disgust expression for this group of children.
In the second iteration, a child expressed happiness in the “telepathic box”, leading the robot to execute action 3, related to disgust. As in the first iteration, on average the other children chose an expression different from the one performed by the child in the “telepathic box”. Thus, the weight for action 3 was decreased in the happiness column of the matrix.
In the third iteration, a child expressed anger in the “telepathic box”, leading the robot to execute action 5, related to anger. This time, on average, the other children chose the same expression as the one performed by the child in the “telepathic box”, i.e., anger. Therefore, the behavior matrix increased the value of action 5 in the anger column and decreased the values of the other actions in the same column, since there was a correspondence between the expression performed by the child in the “telepathic box” and the expression chosen by the other children.
In the fourth iteration, a child expressed happiness in the “telepathic box”, leading the robot to execute action 4, related to anger. On average, the other children chose an expression different from the one performed by the child in the “telepathic box”. Thus, the weight for action 4 was decreased in the happiness column.
After 12 iterations, the behavior matrix presented a slow but consistent convergence, attributing higher weights to the actions that represent the emotional expressions, as planned by the researchers. It is important to mention that 12 is still a low number of iterations for convergence purposes. Scenarios with a larger number of actions, including various action alternatives for a given emotional expression, are necessary for a more precise evaluation of the behavior matrix.
4.2 Children’s Behavior Analysis
In the Group 2 study, 12 children were present, leading to 12 iteration rounds. In each iteration, the 11 children in the audience chose which facial expression they thought the child in the “telepathic box” had performed, using the RFID emojis to indicate their choices (cf. Subsect. 3.3).
Figure 6 shows the number of different emoji expressions chosen by the children of Group 2 in each of the 12 iterations. None of the iterations showed unanimity in the choice of expression. Iterations 1 and 8 presented the lowest diversity, with 3 different expressions chosen.
Figure 7 presents (in blue) the total number of choices related to each expression, considering all iterations. The maximum number (132 choices) was calculated by multiplying the number of iterations (12) by the number of choices in each iteration (11). It also presents (in orange) the total number of iterations in which each expression was chosen by at least one child.
In turn, Fig. 8 shows the frequency of the children’s choices for each expression in each iteration. For each iteration, the expression drawn and performed by the child in the “telepathic box” is indicated by a black label, whereas the action performed by the robot is indicated by a blue label. For example, the disgust expression was performed by the child in the “telepathic box” in iterations 1 and 9, and the robot did not execute the corresponding action in any iteration. Also, in iterations 1 and 9, respectively 3 and 2 children chose the RFID emoji related to the disgust expression, meaning they believed the child in the “telepathic box” had performed it.
Additionally, also from Fig. 8, it is possible to infer that the expression with the highest number of votes matched the one performed by the child in the “telepathic box” in 25% (3 of 12) of the cases (iterations 3, 7 and 10). On the other hand, in 75% (9 of 12) of the cases (iterations 1, 2, 3, 4, 7, 8, 9, 10 and 11) the expression with the highest number of votes matched the one performed by the robot.
Finally, we observed that in three iterations (4, 8 and 11) a consensus was reached, i.e., the majority of the children chose the same RFID emoji. In all of these iterations, the chosen emoji corresponded to the action performed by the robot.
5 Discussion
Kaipainen et al. [12] propose, in contrast to the standard conceptualization of human-computer interaction, an enactive relationship between the individual and technology. In that approach, an enactive system considers the impact of the technology on the human agent as well as the effect of the human experience on the technology [12]. This relates to the ubiquitous computing approach proposed by Weiser [18, 19], which anticipates a seamless interaction with technology that adapts itself according to the characteristics of the environment.
Pushing the state of the art forward, this investigation aimed at understanding how a group of people can dynamically and seamlessly interact with technology in socioenactive systems enriched by ontological aspects of emotional expressions (research question RQ1). For this purpose, our research scenario involved an educational environment with two groups (Group 1 and Group 2) of 4-5-year-old children (N = 25 in total) participating in proposed activities supported by an educational robot. The robot was programmed to perform a set of actions mimicking human emotional expressions: happiness, sadness, disgust, surprise, anger and despise.
In the workshops, a series of iterative sessions was conducted, consisting of the following steps: a child secretly performs one of the mapped emotional expressions to a camera; the expression is identified and input into the system; the system identifies, for that moment, which action must be executed by the robot; the other children hypothesize which expression led the robot to take that action; and the responses are inserted into the system, starting another iterative cycle.
In summary, answering research question RQ2, we observed in Group 2 some patterns in the actions and behavior of the children as our key findings:
- Diversity of choices for the emotional expression performed by the robot: We observed a lack of unanimity in the interpretation of the expression made by the robot. For instance, the group chose 4 or 5 different emotional expressions in 8 iterations (out of 12). This result could be attributed in part to the difficulty of the task: interpreting an emotion and its expression from the behavior of a robot is not trivial, especially for children of that age. Moreover, the complexity of the ontology algorithm, which takes input from the audience to rebuild the robot’s behavior in the iterative feedback cycle, could have made the emotional situation even harder to grasp.
- Happiness is the king/queen! The choice of happiness as a response for interpreting the robot’s action (or the expression supposedly made by the child in the telepathic box to be reproduced by the robot) was present in 11 of the 12 iterations. This could be interpreted in part as a reflection of the pleasure and excitement the children were experiencing in the activity, with the robot’s actions in the narrative scenario presented to them. It seems that children tended to see happiness more frequently than the other emotions.
- Capturing the robot’s emotional expression: A relevant result concerns the interpretation of the robot’s emotion. Instead of guessing the emotion performed by the child in the telepathic box, children were very good at guessing the expression the robot conveyed. In nine out of 12 iterations, the expression most chosen by the children was the same one actually performed by the robot. This means the action of the robot, as a system output, characterized the intended emotion in its behavior very well.
Overall, several lessons can be learned from this study and should be addressed in further investigations. Some approaches should be useful in dealing with the many complexities present in our research scenario. The scenario was created for children’s interaction with a robot that learns from the children’s interpretation of the emotion expressed in the robot’s behavior. This is clearly a socioenactive system scenario, as both the enactive and the social aspects of the children’s enaction are present.
Although the system’s enactive loop performed consistently, with actions from the audience feeding the system and shaping the next system actions, some fine-tuning of the algorithm is still needed. The social aspect of interaction in the enactive loop was consistently considered by the ontology algorithm to deal with what that specific group understands as a particular emotional expression. Nevertheless, some parameter adjustments are still required to cope with the learning aspect of the algorithm.
Besides the ontology system algorithm, other aspects deserve attention. Working with emotions and their expression, especially in a children’s context, might need more granular treatment. For example, reducing the set of emotions could make the children’s responses more visible. The joint behavior of children in expressing or interpreting emotions is another aspect to explore, going further in understanding their ‘choices’ of the emojis that best represent the robot’s action.
This work contributed to understanding some socioenactive aspects of interaction in technology-enhanced scenarios. The lessons learned in this investigation are certainly helpful for informing new scenarios and for further advancing the state of the art.
6 Conclusion
Designing coupled interactive systems that integrate technology and humans in a less deterministic fashion deserves substantial research effort. This paper expanded the enactive concept into a socioenactive vision. Based on the features of enactive systems, we defined a set of guidelines to design socioenactive systems. An instance of such a system was implemented in an educational scenario, in which several iterations generated data that shaped the behavior of the system according to the meaning given by the children in the conducted workshop dynamics, leading to a non-deterministic system behavior. We found several patterns in the children’s actions and behaviors. We consider this study the first of several efforts in investigating socioenactive systems in practice, shedding light on and supporting further development related to this topic. Future work involves further analyses of the user experience in the context of our study. Such data might provide additional support and evidence for further studies in the design and development of socioenactive systems.
Notes
1. The current research is part of a broader project, supported by the São Paulo Research Foundation (FAPESP), grant #2015/16528-0, that aims to study, build and evaluate socioenactive conceptual frameworks for different scenarios, such as education, health and museums. The complete project description is available at: https://interhad.nied.unicamp.br/projetos/socio-enactive-systems Accessed: February, 2019.
2. This research was approved by the Research Ethics Committee of the University of Campinas under number 72413817.3.00000.5404.
3. Ideograms and smileys used in electronic messages and web pages. Definition retrieved from: https://en.wikipedia.org/wiki/Emoji Accessed: January, 2019.
4. The opinions, hypotheses, and conclusions or recommendations expressed in this material are the responsibility of the authors and do not necessarily reflect the views of FAPESP.
References
Bruner, J.S.: Toward a Theory of Instruction. Belknap Press, Cambridge (1966)
Lazar, J., Feng, J.H., Hochheiser, H.: Research Methods in Human Computer Interaction, 2nd edn. Morgan Kaufmann, Boston (2017). http://www.sciencedirect.com/science/article/pii/B9780128053904099866
Division of Early Childhood and Complementary Education of the University of Campinas (DEdIC), January 2019. http://www.dgrh.unicamp.br/dedic/prodecad
mBot: Ideal robot for children to build and code, January 2019. https://www.makeblock.com/steam-kits/mbot
Bonacin, R., Calado, I., dos Reis, J.C.: A metamodel for supporting interoperability in heterogeneous ontology networks. In: Liu, K., Nakata, K., Li, W., Baranauskas, C. (eds.) ICISO 2018. IAICT, vol. 527, pp. 187–196. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94541-5_19
Dos Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: A semiotic-based approach to the design of web ontologies. In: 12th International Conference on Informatics and Semiotics in Organisations (ICISO 2010), pp. 60–67 (2010)
Dos Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: An assisted process for building semiotic web ontology. In: 13th International Conference on Informatics and Semiotics in Organisations (ICISO 2011), pp. 167–174 (2011)
Dos Reis, J.C., Bonacin, R., Baranauskas, M.C.C.: Constructing web ontologies informed by semantic analysis method. In: 13th International Conference on Enterprise Information Systems (ICEIS 2011), pp. 203–206 (2011)
Effenberger, T., Pelanek, R.: Towards making block-based programming activities adaptive. In: Proceedings of the Fifth Annual ACM Conference on Learning at Scale, L@S 2018, pp. 13:1–13:4. ACM, New York (2018). https://doi.org/10.1145/3231644.3231670
Gruber, T.R.: Toward principles for the design of ontologies used for knowledge sharing. Int. J. Hum. Comput. Stud. 43(5–6), 907–928 (1995)
Unicode Inc.: Unicode 11 emoji fonts (2019). http://www.unicode.org/emoji/charts-11.0/emoji-list.html
Kaipainen, M., Normak, P., Niglas, K., Kippar, J., Laanpere, M.: Soft ontologies, spatial representations and multi-perspective explorability. Expert Syst. 25(5), 474–483 (2008)
Kaipainen, M., et al.: Enactive systems and enactive media: embodied human-machine coupling beyond interfaces. Leonardo 44, 433–438 (2011)
Kalyanpur, A., Golbeck, J., Banerjee, J., Hendler, J.: OWL: capturing semantic information using a standardized web ontology language. Multiling. Comput. Technol. Mag. 15(7) (2004)
Pittermann, J., Pittermann, A.: A data-oriented approach to integrate emotions in adaptive dialogue management. In: IUI (2007)
Pöhner, N., Hennecke, M.: The teacher’s role in educational robotics competitions. In: Proceedings of the 18th Koli Calling International Conference on Computing Education Research, Koli Calling 2018, pp. 34:1–34:2. ACM, New York (2018). https://doi.org/10.1145/3279720.3279753
Tikka, P., Vuori, R., Kaipainen, M.: Narrative logic of enactive cinema: obsession. Digit. Creat. 17(4), 205–212 (2006)
Weiser, M.: Some computer science issues in ubiquitous computing. Commun. ACM 36(7), 75–84 (1993). https://doi.org/10.1145/159544.159617
Weiser, M.: Human-computer interaction. In: The Computer for the 21st Century, pp. 933–940. Morgan Kaufmann Publishers Inc., San Francisco (1995). http://dl.acm.org/citation.cfm?id=212925.213017
Acknowledgements
This research is supported by the São Paulo Research Foundation (FAPESP) (Footnote 4), grants #2015/16528-0, #2014/07502-4, #2018/06416-8 and #2017/02325-5. Additional support was provided by the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES), the National Council of Technological and Scientific Development (CNPq) under grant #306272/2017-2, the University of Campinas (UNICAMP), the Nucleus of Informatics Applied to Education of the University of Campinas (NIED), and the Federal Institute of Education, Science and Technology of São Paulo (IFSP).