Chapter 11. Social components of technology and implications of social interactions on learning

Sandra Y. Okita
Teachers College, Columbia University

Technology has revealed new insights into the role of social relationships in learning. This chapter explores the social components of technology using biologically inspired robots and computer representations (e.g. avatars and agents), and examines the implications for student learning and behaviour. The importance of theoretically sound and robust concepts such as Learning-by-Teaching and Recursive Feedback is highlighted in designing a learning relationship that can work to maximise the partnership between learner and technology. As a promising strategy for connecting research and practice, the chapter introduces practical applications for the classroom by applying learning-by-teaching concepts using computer agents, virtual avatars and programmable robotic systems.



Educational researchers have a long history of developing coherent, empirically supported theories about learning and teaching, but their work is limited when it comes to tailoring findings to practice (Weiss, 1999[1]). Educational technology has the potential to play a significant role in strengthening this link for K-12 education, but current processes for use by practitioners are largely inadequate (Burkhardt and Schoenfeld, 2003[2]). Several reasons come to mind. Design engineers often maximise the novelty and sophistication of technology, and do not focus on the social components of technology that are important in student learning. To make technology more educationally powerful, changes in design have to be made: it is not just the design of a “thing”; rather, it is the design of a social relationship between a learner and a “thing” that involves learning (Sheridan, 2002[3]).

It is important that students do not become overly dependent on technology as decision aids to the extent that they give up thinking for themselves. A critical ingredient of lifelong learning is the ability to assess one’s own learning and identify where one’s understanding can be improved. There is a need to identify the kinds of practices that research suggests are successful, and that can work to maximise the benefits from the social components of technology. Practice needs to be well grounded in theory to be able to explore how or why things work (Gomez and Henstchke, 2009[4]), and to ensure that learning trajectories are clear and robust enough for the classroom. The importance of this research in relation to K-12 education is that it identifies the kinds of practices that work well with technology, enabling us to equip students with well-designed tools and the self-learning skills to self-assess, make informed decisions, identify problems on their own and learn throughout their lives.

This chapter examines how 1) maximising the social components of technology; 2) focusing on the design of the relationship between a learner and technology; and 3) grounding practice in robust learning theories can help tailor research findings to practice. The final section of the chapter examines robotics education as a promising strategy for connecting research and practice, followed by policy implications.

Social components of technology in learning and behaviour

Social interaction has many benefits for learning that are not attributable strictly to socialness, such as observing a mature performance (Rummel, Spada and Hauser, 2009[5]), receiving questions, generating explanations (Roscoe and Chi, 2007[6]), and engaging in social motivation and conditions that sustain learning interactions. In many instances, the effects of socialness on learning also are attributed to the timing and quality of the delivery of information, which computers can mimic and control (Okita, 2014[7]). The definition of “social” has expanded dramatically with technology, allowing students to communicate remotely through virtual avatars without being physically present. Sociable computer agent programmes can engage in contingent social dialogue for long periods, and biologically inspired robots can model physical human behaviours. Technology has revealed new insights into the role of social relationships in learning, but little is known about the impacts of such social components on learning.

Neurological evidence indicates that attributions of humanness engage different brain circuitry (Blakemore et al., 2003[8]), and people’s interaction patterns differ depending on whether they believe they are interacting with an agent or an avatar. A study examined whether believing that a virtual representation was an agent (computer) or an avatar (human) affected learning (Okita, Bailenson and Schwartz, 2008[9]). Thirty-five college students participated in a study that compared these two conditions. In both conditions, the participants asked identical questions, and the virtual human provided identical, pre-recorded verbal and non-verbal responses. In this way, the study isolated “social belief” from other important aspects of social interaction. The findings indicated that the “belief” in the avatar condition resulted in significant learning gains and higher arousal measures (i.e. skin conductance level (SCL)) compared to the agent condition. Greater arousal correlated with better learning, with the peak SCL being reached when the participant was reading the last portion of a question. This suggested that the locus of the learning effect might occur when people take the socially relevant action of reading, and the arousal during this action prepared them to learn from the response.

The above-noted possibility led to a replication of the study with an additional avatar-silent condition, in which participants read the questions silently rather than aloud to the avatar. This way, they could not take any socially relevant action, and might not learn as well by passively listening to an avatar. The results replicated the avatar and agent findings from the previous study. However, the avatar condition showed a moderate advantage over the avatar-silent condition when the problems progressed to more difficult inferential questions. The SCL scores for the avatar and agent conditions were similar to those in the first study, but the SCL scores for the avatar-silent condition were smaller than those for the agent condition. This replication study separated the effect of the social element (the belief of being social) from that of the socially relevant action on learning. The results implied that making students believe they are “being social with a human” (rather than a computer programme) increases factual knowledge, but not deep understanding, unless “socially relevant actions” are involved.

The physiological and learning measures may also help index some internal processes that can inform possible ways to design relationships in which technological artefacts may anticipate people’s needs and thus help devise the next step of their learning experiences. For example, in special needs education, robots can assist autistic children in developing social skills (e.g. taking turns and sharing) by repeating specific behaviours when opportunities arise (Robins et al., 2004[10]).

Designing a relationship between a human and a “thing”

Biologically inspired robots have interesting implications for learning, as they have boundary-like properties that elicit strong responses from people (e.g. humanoid robots have human form and motion, but still have machine-like properties). For researchers, robots provide a range of design choices (e.g. tone of voice, gesture, feedback) that can be used to influence social interactions (Okita, Ng-Thow-Hing and Sarvadevabhatla, 2011[11]). This section explores how the transition from the design of a “thing” to the design of a relationship between a human and a “thing” can be beneficial for learning and behaviour. Nickerson (1999[12]) suggests that children exposed to thought-provoking objects or situations are more likely to develop deep, genuine interests in those objects. In an experiment, children were presented with several biologically inspired robot dogs, each exhibiting a different level of intelligent behaviour (i.e. dancing to music, finding and kicking a ball; one robot remained turned off). The robots were used as vehicles to probe children’s understanding of artificial intelligence, biological properties and agency (Okita and Ng-Thow-Hing, 2014[13]). The study involved 93 children aged three to five, and found that the complex nature of these robots challenged children’s beliefs and prompted them to do some serious thinking. For agency, children inferred that robots had to have a remote control to move, yet they also believed that these robots could jump on the couch when no one was around. For biology, children believed that robots get hungry and have a heart, but cannot grow because of their hard exteriors. The findings revealed that children may bring a syncretic set of beliefs as they slowly develop their understanding of an unfamiliar object (i.e. robots) in a piecemeal fashion. Children seemed to shift their discrete beliefs based on a mixture of facts that they had acquired through observations and interactions (Inagaki and Hatano, 2002[14]).
These findings on naïve biology can influence instruction when teaching biological phenomena (e.g. child life interventions in hospitals). In a classroom study, a computer application assessed students’ prior knowledge in physics, so teachers could use the information to better design instruction (Thissen-Roe, Hunt and Minstrell, 2004[15]).

Other studies have explored differences in children’s learning and behaviour based on their relationship with robots. One study engaged children in a learning task using different turn-taking scenarios with the life-sized Honda humanoid robot (e.g. table setting, learning about utensils). The robot exhibited different learning relationships with children, e.g. as a teacher, a peer learner or a robot engaged in self-directed play. Little difference was found among the older children (7-10 year-olds). However, the younger children (4-6 year-olds) in the peer-learner relationship learned and performed as well as the older children on the post-test, which indicated that different relationships between children and the robot can influence learning outcomes (Okita, Ng-Thow-Hing and Sarvadevabhatla, 2011[11]). In another study, 30 children aged five to seven participated in an individual, 20-minute session with Honda’s humanoid robot (Okita and Ng-Thow-Hing, 2014[13]). The study explored three social scenarios to see how close children would allow robots to approach them. The robot slowly stepped forward while engaging in a dialogue assigned by condition: it said “Captain, may I take another step?” in the Familiar Game Playing condition, “I am going to take another step” in the Announcing Intent condition, and took steps forward unannounced in the No Notification condition. The robot continued to take steps until stopped by the child. Results showed that designing the dialogue around a game the child was familiar with, such as “Captain, may I?”, significantly reduced the distance between the robot and the child compared to the other conditions. Implications include assisting sensory technologies for which physical distance is crucial, such as detecting and avoiding collisions and identifying users (e.g. facial recognition).
The studies found that a child’s ability to pretend or engage in social interaction with robots was often constrained by what the robot could do in response. Until robots have the intelligence to flexibly respond to a wide range of interactive bids, designing a human-robot relationship around a familiar schema or script can be useful in guiding a social interaction.

Recursive feedback during Learning-by-Teaching

A critical ingredient of lifelong learning is being able to assess one’s own learning trajectory and find where further improvements can be made in understanding. Learning-by-Teaching (LBT) is a form of peer learning that can provide an informative assessment of one’s own content knowledge (Bargh and Schul, 1980[16]). The LBT cycle has three phases, i.e. preparing to teach, teaching a peer, and recursive feedback. This section focuses on recursive feedback, which refers to information that flows back to teachers when they have the opportunity to observe their pupils perform independently in a relevant context (e.g. a coach watching the soccer team play). Recursive feedback in LBT reveals the discrepancies teachers notice from observation, and leads to the realisation that potential deficiencies in pupil understanding may not be due exclusively to how the material was taught; rather, they could reflect a lack of precision in the teacher’s own content knowledge. A series of studies tested whether recursive feedback maximised the benefits of LBT on peer learning (Okita and Schwartz, 2013[17]), and identified situational variations for effective implementation in other settings.

A human-human (laboratory) study involved 40 graduate students who met face-to-face with another student (the confederate). The potential value of recursive feedback was isolated through the study design, which systematically removed various elements from the full LBT cycle. For instance, one control condition had tutors prepare and teach, but they did not observe their pupils perform. Students who prepared, taught and observed their pupils perform exhibited superior learning of human biology relative to several control conditions, which included elements of LBT but not recursive feedback. For recursive feedback to be effective, tutors had to maintain representations of their own understanding, of what they taught, and of the understanding of their pupils. Doing so helped the tutors sort out which aspects of the pupils’ performances related to which levels of representation. Results indicated that recursive feedback enhances the effectiveness of LBT instructional models.

Avatar-Avatar (Online Virtual Reality Environment): Two additional studies examined whether the benefits of recursive feedback extend to an online virtual reality environment. Thirty-nine graduate student participants communicated through virtual avatars, but they never communicated in person. The first study replicated the human-human study with the same study design and procedures, but took place in an online virtual reality world. The previous findings were replicated, as tutors who taught and observed their pupil avatar interact with an examiner exhibited superior learning relative to the control conditions that included LBT elements but not recursive feedback (Okita et al., 2013[18]).

Virtual environments offer additional design choices that may influence recursive feedback. A follow-up study with 20 graduate students added two recursive feedback conditions that incorporated different design choices, i.e. 1) a customised pupil avatar; and 2) a doppelgänger look-alike, where the pupil avatar looked like the participant. Previous literature has demonstrated that look-alike appearances can affect decision-making and influence behaviour (Bailenson, 2012[19]). The results showed that the generic pupil avatar (control) condition had the highest performance, followed by the customisation and the look-alike conditions. Too much or too little customisation seemed to hinder performance. Too much customisation (31+ times) may have increased the tutor’s sense of ownership and thus her/his association with only the avatar’s surface features. Too little customisation (<15 times) possibly did not develop any sense of ownership or relationship. Since participants in the control condition focused only on developing the tutor-tutee relationship (i.e. content knowledge), perhaps they were able to focus more on the pupil avatar’s performance during recursive feedback. The lower performance with look-alike avatars may stem from learners’ tendencies to perceive their own appearance and performance as better than objectively warranted (Lerner and Agar, 1972[20]). Designing look-alike avatars that perform better than the actual learner may, however, invite more active involvement. Recursive feedback in LBT naturally brings many positive forces for learning, and is a powerful learning method that can be applied in classrooms (e.g. reciprocal teaching), teacher education, online learning and other technology-enhanced learning environments.

Robotics programming in education

Educational robotics involves learning to program computers by explicitly formalising rules in a learning environment (e.g. a programming interface) and then observing an external source (e.g. a robot) interpret and carry out the commands in their entirety. Students observe the output given by the robot and then backtrack to ferret out (e.g. debug) the algorithms that dictate the robot’s behaviours. Schools typically give students problems to solve, but rarely ask students to search out problems on their own (Houtz, 1994[21]). Programming is unique in that students actively work to find answers to problems they create. Learning robotics also has strong constructivist implications for teaching (DiSessa, 1988[22]; Jaipal-Jamani and Angeli, 2017[23]) because a student’s experience can create a “time for telling” that leads to more learning and transfer, by taking the knowledge from this experience and applying it to new situations (Schwartz and Bransford, 1998[24]). Educational robotics can have a prominent role in helping students connect with science, maths and other skills they have acquired in school.
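The program-observe-debug cycle described above can be sketched in a few lines of Python. This toy is purely illustrative and is not drawn from any of the studies cited here: a student formalises movement rules, a simulated robot carries them out in full, and the behavioural output exposes the bug the student must backtrack from.

```python
def run_program(commands):
    """Execute a list of (action, value) commands; return the visited positions."""
    x, y, heading = 0, 0, 0  # heading in degrees; 0 = facing +x
    path = [(x, y)]
    for action, value in commands:
        if action == "turn":
            heading = (heading + value) % 360
        elif action == "move":
            # only the four cardinal headings are modelled in this toy
            dx, dy = {0: (1, 0), 90: (0, 1), 180: (-1, 0), 270: (0, -1)}[heading]
            x, y = x + dx * value, y + dy * value
            path.append((x, y))
    return path

# The student intends a closed square but omits one turn; the robot's
# behavioural output (a path that fails to close) is the cue to debug.
buggy_square = [("move", 2), ("turn", 90), ("move", 2), ("turn", 90),
                ("move", 2), ("move", 2)]  # bug: a ("turn", 90) is missing
print(run_program(buggy_square))  # ends at (-2, 2) instead of back at (0, 0)
```

Spotting that the printed path never returns to the origin, and tracing which command is responsible, is the kind of backtracking from output to algorithm that the text describes.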

Forty-one elementary school students in fifth and sixth grades learned to program simple robot movements. Students assigned to the high-transparency environment learned visual programming to control robots (e.g. visual icons/LEGO Mindstorms NXT-G). Students assigned to the low-transparency environment learned syntactic programming to control robots (e.g. RobotC programming). A midway performance test showed that students in both conditions learned how to debug familiar programming problems equally well. Then, the students were asked to debug unfamiliar programming problems. The low-transparency (syntactic programming) group was more successful in adapting its knowledge to debug unfamiliar high-transparency (visual programming) problems after observing the robot’s behavioural output; students in the high-transparency (visual programming) group were less successful. Even after the students became familiar with both environments, the low-transparency group continued to perform better than the high-transparency group. This has practical implications for the order and manner in which knowledge is constructed, because the findings suggest that whether students can make better use of what they know may depend on how they developed that earlier form of knowledge (Okita, 2014[7]). Similar limitations were found in learning fractions when comparing tile and pie-wedge manipulatives (Gick and Holyoak, 1983[25]; Martin and Schwartz, 2005[26]).
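The transparency contrast can be illustrated with a toy example. The example is entirely mine (the study itself used LEGO Mindstorms NXT-G and RobotC): the same behaviour is expressed as high-transparency icon blocks and as low-transparency syntactic statements, where the syntactic form makes explicit the parameters that the icon blocks hide.

```python
# Toy contrast (not the study's materials): one behaviour, two representations.
ICON_PROGRAM = ["forward", "forward", "turn_left", "forward"]  # visual blocks

def icons_to_syntax(icons):
    """Expand each icon block into the syntactic statement it stands for."""
    mapping = {"forward": "move(1);", "turn_left": "turn(90);"}
    return [mapping[icon] for icon in icons]

# Debugging the syntactic form means reading explicit distances and angles,
# not just reordering fixed blocks.
print(icons_to_syntax(ICON_PROGRAM))
```

One candidate account of the finding above is that working with explicit parameters from the start gives learners a representation that maps more readily onto unfamiliar environments.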

Policy implications

Educational technology can have a significant role in strengthening ties between research and practice in K-12 education, but the processes for classroom use and the preparation of pre-service teachers are largely lacking (Burkhardt and Schoenfeld, 2003[2]). Current teacher education programmes still struggle to place emphasis on K-8 science, technology, engineering and mathematics (STEM) disciplines that involve technology and engineering (Bybee, 2010[27]). Many teachers feel they lack the knowledge and expertise to apply the methods that research suggests are successful (Nadelson et al., 2013[28]). Moreover, research findings are usually not usable “as is” in the classroom, and technologies are often built without clear learning trajectories.

Robotics is an integrative discipline that brings together basic math, science, applied engineering and creative thinking. Preparing pre-service teachers to teach STEM using robotics has been suggested as a promising way to improve students’ experience of, and attainment in, science and mathematics. Similar approaches are also seen internationally (Papanikolaou, Frangou and Alimisis, 2008[29]).

Very few examples combine research, practice and commerce early in the process of developing ideas and tools. It is important for policy leaders to support educational foundations in undertaking venture-philanthropy initiatives that fund joint development across research, practice and commerce. This is particularly important for educational robotics, since commercialisation can lead to low-cost tools that are more affordable for schools (Gomez and Henstchke, 2009[4]). Robotics systems are often open source, which allows the independent development of educational content. This can be a direct link to commerce, allowing faster feedback between research, practice and commerce.

Oftentimes, designing a positive learning condition depends on bringing together a well-chosen confluence of effective learning methodologies, theories and choices of partnership with technology. To bridge the gap between research and practice, this chapter identified the kinds of concepts that research suggests are theoretically sound and robust, can work to maximise the social components of technology, and have practical applications for learning and behaviour. Policy often has a key role in shaping the demand for specific instructional services in schools, but teachers and policymakers need to know what learning is lost and gained when selecting one choice over another (Gomez and Henstchke, 2009[4]). The findings can inform teachers about the pedagogical risks and how specific instructional approaches may encourage certain kinds of learning when used with technology. As strengths and weaknesses become visible, incentives shift, and teachers may be more open to new ideas and methods.


[19] Bailenson, J. (2012), “Doppelgangers: A new form of self?”, Psychologist, Vol. 25/1, pp. 36-38, (accessed on 2 November 2018).

[16] Bargh, J. and Y. Schul (1980), “On the cognitive benefits of teaching”, Journal of Educational Psychology, Vol. 72/5, pp. 593-604,

[8] Blakemore, S. et al. (2003), “The detection of contingency and animacy from simple animations in the human brain”, Cerebral Cortex, Vol. 13/8, pp. 837-844, (accessed on 2 November 2018).

[2] Burkhardt, H. and A. Schoenfeld (2003), “Improving educational research: Toward a more useful, more influential, and better-funded enterprise”, Educational Researcher, Vol. 32/9, pp. 3-14,

[27] Bybee, R. (2010), “Advancing STEM education: A 2020 vision”, Technology and Engineering Teacher, Vol. 70/1, pp. 30-35, (accessed on 2 November 2018).

[22] DiSessa, A. (1988), “Knowledge in pieces”, in G. Forman and P. Pufall (eds.), Constructivism in the Computer Age, Lawrence Erlbaum Associates, Hillsdale, NJ.

[25] Gick, M. and K. Holyoak (1983), “Schema induction and analogical transfer”, Cognitive Psychology, Vol. 15/1, pp. 1-38,

[4] Gomez, L. and G. Henstchke (2009), K-12 Education: the Role of For-Profit Providers, (accessed on 2 November 2018).

[21] Houtz, J. (1994), “Creative problem solving in the classroom: Contributions of four psychological approaches”, in Runco, M. (ed.), Problem Finding, Problem Solving and Creativity, Ablex, Norwood, NJ.

[14] Inagaki, K. and G. Hatano (2002), Young Children’s Naive Thinking About The Biological World, Psychology Press, New York, NY.

[23] Jaipal-Jamani, K. and C. Angeli (2017), “Effect of robotics on elementary preservice teachers’ self-efficacy, science learning, and computational thinking”, Journal of Science Education and Technology, Vol. 26/2, pp. 175-192,

[20] Lerner, M. and E. Agar (1972), “The consequences of perceived similarity: Attraction and rejection, approach and avoidance”, Journal of Experimental Research in Personality, Vol. 6/1, pp. 69-75, (accessed on 2 November 2018).

[26] Martin, T. and D. Schwartz (2005), “Physically distributed learning: Adapting and reinterpreting physical environments in the development of fraction concepts”, Cognitive Science, Vol. 29/4, pp. 587-625,

[28] Nadelson, L. et al. (2013), “Teacher STEM perception and preparation: Inquiry-based STEM professional development for elementary teachers”, The Journal of Educational Research, Vol. 106/2, pp. 157-168,

[12] Nickerson, R. (1999), “Enhancing creativity”, in Sternberg, R. (ed.), Handbook of Creativity, Cambridge University Press, Cambridge, UK.

[7] Okita, S. (2014), “The relative merits of transparency: Investigating situations that support the use of robotics in developing student learning adaptability across virtual and physical computing platforms”, British Journal of Educational Technology, Vol. 45/5, pp. 844-862,

[9] Okita, S., J. Bailenson and D. Schwartz (2008), Mere Belief in Social Action Improves Complex Learning, International Society of the Learning Sciences, (accessed on 2 November 2018).

[13] Okita, S. and V. Ng-Thow-Hing (2014), “The effects of design choices on human-robot interactions in children and adults”, in J. Markowitz (ed.), Robots that Talk and Listen, De Gruyter.

[11] Okita, S., V. Ng-Thow-Hing and R. Sarvadevabhatla (2011), “Multimodal approach to affective human-robot interaction design with children”, ACM Transactions on Interactive Intelligent Systems, Vol. 1/1, pp. 1-29,

[17] Okita, S. and D. Schwartz (2013), “Learning by teaching human pupils and teachable agents: The importance of recursive feedback”, Journal of the Learning Sciences, Vol. 22/3, pp. 375-412,

[18] Okita, S. et al. (2013), “Learning by teaching with virtual peers and the effects of technological design choices on learning”, Computers & Education, Vol. 63, pp. 176-196,

[29] Papanikolaou, K., S. Frangou and D. Alimisis (2008), Teachers as Designers of Robotics-Enhanced Projects: The TERECoP Course in Greece, in Teaching with Robotics: Didactic Approaches and Experiences, workshop organised in the context of the SIMPAR 2008 conference, University of Padova,

[10] Robins, B. et al. (2004), “Effects of Repeated Exposure of a Humanoid Robot on Children with Autism – Can We Encourage Basic Social Interaction Skills?”,

[6] Roscoe, R. and M. Chi (2007), “Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ Explanations and questions”, Review of Educational Research, Vol. 77/4, pp. 534-574,

[5] Rummel, N., H. Spada and S. Hauser (2009), “Learning to collaborate while being scripted or by observing a model”, International Journal of Computer-Supported Collaborative Learning, Vol. 4/1, pp. 69-92,

[24] Schwartz, D. and J. Bransford (1998), “A time for telling”, Cognition and Instruction, Vol. 16/4, pp. 475-522,

[3] Sheridan, T. (2002), Humans and Automation: System Design and Research Issues, Human Factors and Ergonomics Society/John Wiley & Sons, Inc.

[15] Thissen-Roe, A., E. Hunt and J. Minstrell (2004), “The DIAGNOSER project: Combining assessment and learning”, Behavior Research Methods, Instruments, & Computers, Vol. 36/2, pp. 234-240,

[1] Weiss, J. (1999), “Theoretical foundations of policy intervention”, in Frederickson, H. and J. Johnston (eds.), Public Management Reform and Innovation: Research, Theory and Application, University of Alabama Press, Tuscaloosa.
