Abstract
With the development of 5G networks, capacity enhancement, massive connectivity, ultra-high reliability, and low latency are typical use scenarios of information communication technology (ICT). These ICT improvements will also bring large challenges and revolutions to future human-computer interaction (HCI). This study aims to explore the emerging and promising trends and applications of HCI in the 5G era. Eleven experts in HCI and psychology were invited to participate in focus groups and brainstorming studies to collect their perspectives and expectations of future life scenarios and technological applications with 5G networks. Four trends in HCI are identified and proposed in this paper according to their opinions: natural interaction, multimodal display, virtual identity, and cloud data. The four trends are based on the elements of humans, computers, and the interactions between them and the environment. Each identified trend is analyzed in terms of main technological applications, links to 5G development, and future life scenarios. A related trend model was further discussed with respect to user experience and privacy protection.
1 Introduction
The fifth generation of mobile communication technology (5G) has made major breakthroughs in data transmission speed, delay, connection, capacity, reliability, and mobility [1]. With the development of 5G networks, capacity enhancement, massive connectivity, ultra-high reliability, and low latency are typical use scenarios in information communication technology (ICT) [2]. These ICT improvements will also bring large challenges and revolutions to future interactions between people and information technology. This change is not limited to hardware, software, interaction paradigms, and new user needs. The emergence of 5G will accelerate the interconnection of all things, promote the construction of smart cities, and realize the development and expansion of digital life, such as intelligent furniture, intelligent transportation, intelligent offices, and other multi-faceted living scenarios [3]. The future will be an era of information explosion. Massive data will be generated during the interactions between people and information technology.
To explore the feasibility of 5G information communication networks in the context of human-computer interaction (HCI), this study adopted a human-centered perspective and combined technological development with daily-life applications of 5G to imagine future trends in HCI and life scenarios. Eleven experts in HCI and psychology were invited to participate in focus groups and brainstorming studies to collect their perspectives and expectations of future life scenarios and technological applications with 5G networks. The theme of the seminar was the human, computer, and environmental elements involved in the 5G era, including the following: 1) smart devices: emerging smart devices (such as AR glasses and smart speakers), new interaction paradigms (such as voice interaction, gesture interaction, eye movement, and EEG), and the impact of 5G on hardware and software design; 2) the Internet of Things (IoT) and big data: data collection, transmission, and application, people-centric design of the IoT, and the impact of 5G on the development of the IoT; 3) artificial intelligence (AI): using AI to understand users through data, the role of AI in the future, and the combination of AI and 5G networks; 4) future life scenarios: data collection, transmission, and applications in different scenarios in the future digital world, as well as data intercommunication between different scenarios.
We conducted open coding of potential trends and life scenarios in the raw discussion record. Axial coding was performed based on the coded topics. Researchers discussed the content of coded transcripts and clustered them into themes. Four dimensions were clustered: natural interactions, multimodal applications, virtual identity, and data in the cloud, and future life scenarios were concluded with each theme. Based on these four themes, we proposed a theoretical HCI trends model for human factor research in the 5G era. In the subsequent sections of this paper, each identified trend is analyzed related to technological application development, and some promising sub-trends are discussed. Human life scenarios in the 5G era are introduced in terms of human needs, emerging smart devices, interaction paradigms, and AI applications. Further discussion is extended to investigate the changes and challenges in user experience, social distance, and technological development in the 5G era.
2 Natural Interaction
2.1 Natural User Interface
A major core of HCI is the input and output of information. The information in the human brain is reflected by the user’s physical attributes, including gestures, touch, and voice. These physical attributes of the user are sensed by an input device and converted into information that can be processed by a computer. In current HCI input, in addition to keyboard input, more natural and simple interaction methods have started to attract more attention.
Some researchers have developed an eyes-free thumb-tapping technique for touchpads, in which the user taps on an invisible QWERTY layout and receives text feedback on a separate screen; this opens up new possibilities for entering text on ubiquitous computing platforms such as smart TVs and head-mounted displays [4]. Others have invented a pressure-based input method that allows text to be entered with subtle finger movements. Users report that this input is easy to learn and fun to use, demonstrating the feasibility of applying pressure as a primary channel for text input [5]. In addition to common voice input, Lip-Interact, a silent lip-reading input, has also been developed. It uses a front camera to recognize lip movements and decodes natural-language lip motions into mobile phone commands. This interaction is unaffected by environmental noise and largely alleviates the issues surrounding personal privacy and social norms in public environments [6].
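The pressure-as-a-channel idea above can be illustrated with a minimal sketch: a continuous, normalized pressure reading is quantized into discrete levels, and each level selects a character from the tapped key's group. The function names and the level mapping below are illustrative assumptions for this paper, not the actual design of the cited system.

```python
def quantize_pressure(pressure, levels=3, p_max=1.0):
    """Map a normalized pressure reading (0..p_max) to a discrete level index."""
    if not 0.0 <= pressure <= p_max:
        raise ValueError("pressure out of range")
    # Divide the pressure range into equal bins; clamp the top edge into range.
    idx = int(pressure / p_max * levels)
    return min(idx, levels - 1)


def select_char(key_group, pressure):
    """Pick a character from a key's group by pressure level (hypothetical mapping)."""
    level = quantize_pressure(pressure, levels=len(key_group))
    return key_group[level]
```

With a three-character group, a light press selects the first character and a firm press the last, so one physical key can disambiguate several letters without extra screen space.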
Although these new types of interaction methods are still being explored in the academic world, attention should also be paid to the exploration of future interaction methods for these more natural and easy interactions in the 5G era.
2.2 Brain-Computer Interface
A brain-computer interface (BCI) builds a direct information transmission path between the brain and the outside world by decoding neural activity generated in the course of human thinking. BCI provides a new HCI channel that does not rely on peripheral nerves and muscle tissue. BCI hardware is currently divided into invasive and non-invasive types. Invasive technology acquires electroencephalograms (EEGs) by implanting electrodes on the surface of the cerebral cortex or inside the brain. Because it requires implantation into the human brain, such research mainly aims to help patients with severe disabilities recover function or improve their quality of life.
The Japanese Honda company produced a mind-control robot. Operators can control surrounding robots to perform corresponding actions by imagining their limb movements. According to a study by the University of Rochester in the United States, subjects can control P300 component signals in a virtual reality (VR) scene, such as for switching lights or operating a virtual car. Japanese technology company Neurowear has developed a BCI device called Necomimi Brainwave Cat Ears. This cat ear device can detect human brain waves, and then turn the cat ears to express different emotions. Its similar product, the “Electric Wave Cattail,” can control the movement of a cat-like tail device by brainwaves. Emotiv, a neurotechnology company in San Francisco, California, has developed a brain wave compilation device, Emotiv Insight, which can help people with disabilities to control wheelchairs or computers. Some researchers have developed a virtual prosthetic speech system that can decode the brain’s speaking intentions, interpret the brain’s intended vocal cord movements during speech, and convert them into basically understandable speech using a computer to output speech [7].
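P300-based control of the kind described above typically relies on averaging many stimulus-locked EEG epochs so that the event-related potential emerges from background noise. The following is a minimal illustrative sketch, not an actual BCI pipeline; the window indices and amplitude threshold are hypothetical.

```python
def average_epochs(epochs):
    """Element-wise average of equally sized EEG epochs (lists of samples)."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]


def detect_p300(epochs, window, threshold):
    """Return True if the averaged amplitude inside `window` exceeds threshold.

    window: (start, end) sample indices of the post-stimulus interval.
    """
    avg = average_epochs(epochs)
    start, end = window
    peak = max(avg[start:end])
    return peak >= threshold
```

Averaging attenuates uncorrelated noise while preserving the response that is time-locked to the attended stimulus, which is why even a crude threshold on the averaged peak can separate target from non-target stimuli.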
In terms of materials for future BCIs, considering biocompatibility, small size, mechanical flexibility, and electronic properties, graphene is one of the most promising candidates for neural interfaces [8]. Although the development of BCI still faces limitations such as ethics, technology, and neurophysiology, this interaction is still expected in the future 5G era.
2.3 Affective Computing
Affective computing refers to identifying a human's emotional signals to create a computer system that can perceive, recognize, and understand human emotions, and respond intelligently, sensitively, and in a friendly manner. That is, affective computing aims to give computers a human-like ability to observe, understand, and express emotional characteristics [9].
Research on affective computing mainly covers the collection of affective signals; their analysis, modeling, and recognition; and signal-fusion algorithms. The analysis, modeling, and recognition of emotion signals mainly comprise five methods: facial emotion recognition, speech emotion recognition, body emotion recognition, language and text recognition, and physiological pattern recognition. At present, traditional research mainly includes text, speech, and visual sentiment analysis, for which commonly used databases and APIs have been established. Meanwhile, sentiment computing based on massive network data and multimodal sentiment computing have now emerged. Massive sentiment analysis will perform sentiment calculations supported by large volumes of network and other data, such as judging depression tendencies from a person's Weibo data so as to intervene proactively. Multimodal sentiment computing will make the dimensions in HCI as complete as possible, judging by fitting multiple results. For example, emotional robots can better recognize human emotions through multimodal emotion recognition, thereby achieving better interactions.
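Judging by fitting multiple results can be sketched as a simple late-fusion scheme: each modality outputs emotion probabilities, and a weighted average combines them. Real systems use learned fusion models; the equal-weight default below is only an assumption for illustration.

```python
def fuse_emotions(modality_scores, weights=None):
    """Late fusion: weighted average of per-modality emotion probabilities.

    modality_scores: dict modality -> dict emotion -> probability.
    weights: optional dict modality -> weight (defaults to equal weights).
    """
    modalities = list(modality_scores)
    if weights is None:
        weights = {m: 1.0 for m in modalities}
    total = sum(weights[m] for m in modalities)
    emotions = set()
    for scores in modality_scores.values():
        emotions.update(scores)
    return {
        e: sum(weights[m] * modality_scores[m].get(e, 0.0) for m in modalities) / total
        for e in emotions
    }


def dominant_emotion(fused):
    """Emotion with the highest fused probability."""
    return max(fused, key=fused.get)
```

When face and voice channels disagree, the fused estimate sits between them, so a single noisy modality is less likely to flip the recognized emotion.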
Affective computing has been applied in classroom teaching, emotional monitoring, medical rehabilitation, and public opinion monitoring. In future digital life scenarios, improved HCI based on emotional computing will also make interactions more vivid and beautiful.
2.4 Life Scenarios
With the increase in capacity and connectivity, natural interactions will become the trend in HCI, especially in AI and the IoT. People will use voice, gestures, touch, EEG, and other natural channels to interact with their smart homes. Their homes will become smart and quiet, meaning devices can perform affective computing and automatically provide more optimal settings. Smart cars will interact with the driver and control the car more safely with a lower cognitive load. Robots of the future will understand user instructions and behave in a more natural way.
3 Multimodal Display
3.1 Virtual Environment Technology
Virtual environment technologies, including virtual reality (VR) and augmented reality (AR) technologies, have attracted increasing academic interest in recent decades. VR builds a virtual scene that is totally different from the physical environment, while AR adds virtual information to real physical scenes [10]. Common virtual-environment visual display devices include head-mounted displays, mobile phones, and spatially immersive displays [11]. Holographic projection, a special augmented reality technology, projects virtual images without the devices mentioned above. Holographic display aims to create a physically pure three-dimensional image that can be observed from different angles without restriction. Hatsune Miku, a Japanese virtual idol, was the first to adopt holographic technology to present a virtual image in concert [12].
3.2 Multisensory Fusion Technology
With the development of information communication technologies, multimodal information is communicated from person to person, i.e., visual information, auditory information, and tactile information. Multisensory fusion can not only enrich information content but also improve the sense of immersion during usage.
Multisensory fusion technology provides users with immersive experience not only via visual simulation but also auditory and even tactile simulation. The auditory channel is very important when the visual channel is occupied. Take a driving alert system, for instance; it was suggested that auditory alerts contribute to reduced collision rates, shorter reaction times, larger maximum brake pedal force, and larger maximum lane deviation when compared to the baseline condition without a warning [13]. Tactile information can be simulated by an electronic skin system that receives commands wirelessly and then simulates the “touch” with vibration. The user can feel the “touch” by putting a soft and thin device like a patch on the skin [14]. With the development of future technologies, the simulation of VR users’ senses might even extend to taste and smell.
For the transmission of multi-channel information, especially when high-definition visual information is included, transmission speed is critical. The development of 5G technology brings more possibilities for high-speed, high-quality multimodal information display. An impact model of consumer multisensory perception has been built from the embodied cognition perspective, showing that multisensory perceptions have an indirect positive effect on online purchase intention [15].
3.3 Life Scenarios
Attention to tactile sensation is a hot spot in multi-modal applications. In addition to audiovisual interactions, in the future, when using smart devices, tactile interactions will also become a focus. In terms of entertainment, people can use VR games or VR virtual experiences to obtain a better user experience. AR and MR can be used to assist daily behaviors, such as AR-assisted training and driving. Future autonomous driving may take advantage of the multi-sensory redundancy effect to improve the efficiency and safety of takeovers.
4 Virtual Identity
4.1 Virtual Avatar
Zepeto is a mobile application launched in 2018 that gained extensive user and media attention in a short time. The core of Zepeto is social networking. Unlike traditional social media, Zepeto gives users their own avatars. Users can interact with their avatar, share it through pictures and videos, use it to create emoticons, take photos with friends' avatars, and so on. The popularity of Zepeto reveals people's expectations for new identities in socializing. Social network users are no longer satisfied with plain text or pictures; they expect new ways to "show off." Various virtual technologies, such as face recognition and motion capture, allow each user to create a virtual identity on social networks. Information such as account name, avatar, and personal introduction constitutes a simple virtual identity. According to studies of human social attributes, people try to influence their images consciously or unconsciously in social life, which is called "impression management." Impression management behaviors occur when people want to show others a good self-image [16, 17]. Therefore, as social beings, people play their own social roles on social networking services, showing image characteristics in line with their own and others' expectations. Virtual, anonymous social media offers users a greater degree of freedom. People can manipulate multiple network personae to present themselves as more humorous, friendly, and cute than they are in reality. One characteristic of online impression management is that users can fully control the release of information and thus present themselves more strategically.
4.2 Virtual Idols
With the development of virtual identity, virtual idols such as Hatsune Miku have emerged. Idolatry has been viewed as a kind of attachment to outstanding figures in fantasies, who are often overpowered or idealized. Fans are disappointed when idols show their "true face." Virtual idols, by contrast, never disappoint their fans because they are designed to be "perfect." The most well-known virtual idol is the Japanese virtual singer Hatsune Miku. Miku was born in 2008 and has a sound source library based on the vocal synthesis software VOCALOID released by Yamaha. Using the data of the sound source library, music lovers can compose songs and upload them to the Internet. In addition to the speech synthesis technique, virtual 3D image synthesis tools, such as MikuMikuDance, also enable Miku's fans to create fanworks without restriction. Fans can create their own virtual idols with the help of virtual technologies and open-source data. Meanwhile, high-quality fanworks attract more potential fans.
On March 9, 2010, Hatsune Miku became the first virtual singer to hold a concert using holographic projection technology, and 2500 concert tickets were snapped up in an instant. More than 30,000 Internet users watched the entire concert through a live webcast. Without a doubt, the virtual entertainment industry represented by virtual idols has large commercial potential. The development of 5G technology will provide more possibilities for this virtual industry.
4.3 Life Scenarios
Everyone could create a virtual avatar in a social network or community with image identification and virtual modeling technology and then use their digital identities to socialize in different scenarios. People could use their avatars for online activities such as entertainment, social networking, and working. Anyone could choose to open part of their own information, which others could obtain directly, after identification, through AR or other methods. New types of socializing could be conducted through AR and VR.
5 Data in the Cloud
5.1 Big Data in the Cloud
The key characteristics of 5G include capacity enhancement and massive connectivity, which make massive data collection and analysis feasible and reliable. The 4V definition of big data, volume (amount), variety (type and source), velocity (speed of data transfer), and value (hidden information), is widely recognized [18]. With wireless and wearable sensors emerging, integrated sensor systems provide users with a large volume and wide variety of data. Massive data in the cloud eliminates data storage on expensive hardware, and analysis in cloud computing increases the efficiency of processing in software [19].
Big data in the cloud has already been explored in IoT systems, cloud manufacturing, and healthcare service. To deal with multi-dimensional IoT data, researchers proposed an update-and-query efficient index framework based on a key-value store that can support high insert throughput and provide efficient multi-dimensional queries simultaneously [20]. With massive sensor data in cloud manufacturing systems, Hadoop was introduced to conduct sensor data management [21]. Experts believe the adoption of IoT paradigms and data in the cloud in the healthcare field will improve service in the 5G era. They proposed a hybrid model of IoT and cloud computing to manage big data in health service applications consisting of four main components: stakeholders’ devices, stakeholders’ requests (tasks), cloud broker, and network administrator [22].
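The update-and-query efficient index in [20] is considerably more sophisticated, but the core idea of serving multi-dimensional IoT queries from a key-value store can be sketched minimally: encode the dimensions into sortable composite keys, then answer range queries with an ordered scan. The class and key layout here are illustrative assumptions, not the cited framework.

```python
def make_key(device_id, timestamp):
    """Compose a sortable key: zero-padding keeps lexicographic = numeric order."""
    return f"{device_id}:{timestamp:012d}"


class TinyKVStore:
    """Minimal key-value store supporting per-device time-range scans."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def range_scan(self, device_id, t_start, t_end):
        """Return values for one device whose timestamps fall in [t_start, t_end]."""
        lo, hi = make_key(device_id, t_start), make_key(device_id, t_end)
        return [v for k, v in sorted(self._data.items()) if lo <= k <= hi]
```

Because the device identifier is the key prefix, readings from other sensors never fall inside the scanned range, and inserts remain cheap single-key writes.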
5.2 Edge Computing
As cloud computing technology develops, data is increasingly produced at the edge of the network. With the growing quantity of data generated at the edge, the speed of data transportation is becoming a bottleneck for the cloud-based computing paradigm [23]. In this way, edge computing, which refers to the enabling technologies that allow computation to be performed at the network edge so that computing happens near data sources, would be more efficient [24]. If the data is computed through a mobile network edge, the technology is defined as mobile edge computing (MEC) [25]. According to the white paper published by ETSI [26], MEC can be characterized by five points: on-premises, proximity, lower latency, location awareness, and network context information.
Edge computing infrastructures have been recognized to be a solution for high-demand computing power, low latency, and high bandwidth in AR applications [27]. For example, by collecting environmental data, AR combines real and virtual objects handled by MEC. Edge computing also plays a role in content delivery and video acceleration [25]. Users’ requests and responses are time-efficient as the edge server is deployed close to the edge devices.
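The edge-versus-cloud trade-off above can be sketched with a toy latency model: total latency is transmission delay plus round-trip time plus computation time, and the offloading site with the lower total wins. All parameter values here are hypothetical, not measurements.

```python
def total_latency(data_bits, bandwidth_bps, rtt_s, cycles, cpu_hz):
    """Transmission delay + round-trip time + computation time, in seconds."""
    return data_bits / bandwidth_bps + rtt_s + cycles / cpu_hz


def choose_site(data_bits, cycles, edge, cloud):
    """Pick the offloading site with the lower modeled latency.

    edge/cloud: dicts with illustrative bandwidth_bps, rtt_s, cpu_hz values.
    """
    le = total_latency(data_bits, edge["bandwidth_bps"], edge["rtt_s"],
                       cycles, edge["cpu_hz"])
    lc = total_latency(data_bits, cloud["bandwidth_bps"], cloud["rtt_s"],
                       cycles, cloud["cpu_hz"])
    return ("edge", le) if le <= lc else ("cloud", lc)
```

The model captures the qualitative point of MEC: for latency-sensitive, lightweight tasks the edge's short round trip dominates, while compute-heavy jobs can still justify the longer path to a stronger cloud CPU.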
5.3 Life Scenarios
With massive amounts of data produced by each person, all the data could be uploaded to the cloud and synchronized in different mobile digital devices. Edge computing will provide users with rapid feedback on data analysis and fast data transmission. Integrated sensor systems will be combined with smart home devices to collect data on environmental indicators, physiological indicators, movement behavior, etc. Each person generates rich and diverse data. In addition to audio, video, and image data generated by themselves, various sensor data, such as geographical, physiological, and psychological indicators, can be used to construct the data context of the entire life cycle of a person. With the powerful computing capability provided by edge computing, the system analyzes and predicts the habits, rules, and intentions of each person, and provides users with the most needed information services at the right time and place. Through edge computing, the data will be used for warning and monitoring of emergencies and providing timely rescue and emergency guidance. For example, if an elderly person falls or suddenly becomes ill, the system will immediately contact the hospital and their families, and provide practical rescue guidance to those around them. Doctors can search the health cloud and conduct cloud computing with authorized data in the cloud for remote diagnosis and tracking patients’ statuses.
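The fall-alert scenario above could rest on something as simple as accelerometer thresholding: a near-free-fall reading followed shortly by a high-magnitude impact triggers the alarm. The thresholds and window below are illustrative assumptions; production systems use far more robust classifiers.

```python
import math


def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample, in g."""
    return math.sqrt(sum(a * a for a in sample))


def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    """Flag a fall if a near-free-fall reading is followed, within `window`
    samples, by a high-magnitude impact (thresholds in g, illustrative)."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g and any(m2 > impact_g for m2 in mags[i + 1:i + 1 + window]):
            return True
    return False
```

A positive detection would then be what drives the downstream 5G-connected actions the scenario describes, such as notifying the hospital and family.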
6 Discussion
Based on the four trends in HCI in the 5G era, we proposed a trend model. Each basic element of the HCI paradigm is related to potential trends. In terms of the human element, people's virtual identities in social networking and data clouds will complement and co-exist with identities in real life. People will not only exist physically; the data they generate will also become another form of digital existence. The computer element is broadly defined and includes many potential devices beyond traditional computers. Hardware and software will provide more natural user interfaces for better user experiences and conduct more intelligent computing to recognize emotions. The interaction element will become natural and multimodal. Besides keyboard input and screen display, voice interaction, gesture interaction, haptic interaction, and brain-computer interaction will be popular in the 5G era. All data produced by HCI will be in the cloud, and the environment element will be digitally collected and processed. In addition, 5G high-density coverage allows smart devices to collect more data automatically and provide services proactively (Fig. 1).
In terms of user experience in the 5G era, pragmatic and hedonic aspects are both important. The main benefits of 5G applications are capacity enhancement, massive connectivity, ultra-high reliability, and low latency. These properties result in not only efficient performance but also satisfying entertainment. We could cluster the life scenarios into pragmatic and hedonic aspects with respect to human needs and experience. In natural interactions with a smart home, smart car, or social robot, qualities like object response time and accuracy and the subject’s perceived usefulness and easiness play an important role in the user experience. Entertainment elements like perceived enjoyment or cuteness could make an impact on trust and the user experience of intelligent systems or devices. For multimodal displays, one typical scenario is an immersive entertainment experience. For example, people perceive visual, auditory, and haptic stimuli in a VR environment, and such immersion, provided by 3D modeling and multimodal display, improves the pragmatic user experience. Besides, using multimodal displays for vehicle takeover in smart cars is an efficient improvement. Considering further trends in HCI, pragmatic and hedonic qualities in each dimension need to be identified and discussed to explore related factors and design guidelines.
Looking back at the changes brought by the previous 4G development, we considered the various trends brought by 5G to HCI and continued to imagine the new changes in the future 6G era. From a social point of view, 4G makes real-time face-to-face communication in video calls smoother. In the 6G era, the bandwidth and delay will meet the requirements of real-time projection technology, and the communication between people will be more real. The space limitation will be completely broken. For example, remote operations such as remote surgery with high time accuracy will be more popular and safer. From the perspective of the development of autonomous driving, the latency of 4G networks is as high as tens of milliseconds and unstable, while the latency of 6G will be lower and more stable. At the same time, the bandwidth can even reach the terabyte level. The improvement of network performance can guarantee the usability and safety of human driving. In addition, with the development of smart homes and smart cities, the bottlenecks caused by ICT will be further reduced by 6G technology.
The convergence of digital life and data throughout the life cycle in 5G development trends poses security challenges as well. In the 5G era, data in the cloud will result in abundant monitoring networks and a large amount of personal privacy data. With big data technologies, it is even possible to infer detailed and comprehensive user portraits from non-associated data. The leakage of personal privacy seriously threatens the safety and quality of everyone’s life. Although the EU’s general data protection regulation is considered as the strictest personal data protection bill in history [28], personal data is still in the possession and control of third parties, and there is a potential risk of leakage. A new type of personal data protection mechanism is needed to allow individuals to control their data by themselves. The acquisition of data requires personal authorization. Similar technologies such as SoLiD and DID have already been applied.
In the 5G era, combined with high-density coverage of base stations, a human-centered framework combining space and time will be developed with high-precision positioning and accurate services. From the perspective of the entire human life cycle, people can be divided into eight stages according to their ages [29]. As shown in Fig. 2, people at each stage have their own mobility of activity (only at home, two points and one line, three points and three lines, or three points and four lines), and each activity is located by 5G's precision positioning technology. Based on their position, servers can identify and predict the user's intention, infer the user's next objective, and analyze the potential information service that the user needs. In this way, people can be served according to their location and data. Figure 2 shows the three-point, three-line activities of middle-aged workers aged 25–65 as an example.
At the same time, 5G increases the speed of network uplinks, enabling each information service node to provide external information in real-time and at high speed. Combined with high-performance terminal equipment, a human-centered radar-like information search service will become possible. Combining different locations and different activity intents, we can construct a suitable radar search radius and information preferences, shield and weaken unwanted information, and recommend the information that best meets the user’s needs to improve the efficiency of user screening and identification, reduce the search and screening time, and improve user experience.
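The radar-like search could be sketched as filtering nearby service nodes by great-circle distance and user preference. The haversine formula is standard; the radius, tag filter, and data layout below are assumptions for illustration.

```python
import math

EARTH_RADIUS_M = 6_371_000


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def radar_search(user_pos, services, radius_m, wanted_tags):
    """Return service nodes within `radius_m` of the user matching preferred tags."""
    lat, lon = user_pos
    return [s for s in services
            if s["tag"] in wanted_tags
            and haversine_m(lat, lon, s["lat"], s["lon"]) <= radius_m]
```

Shrinking the radius or the tag set is exactly the "shield and weaken unwanted information" step: results outside the user's current intent never reach the screening stage.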
7 Conclusion
This study invited 11 experts in HCI and psychology to participate in focus groups and brainstorming sessions on the topic of future life scenarios and technological applications with 5G networks to explore the trends in HCI in the 5G era. After open coding and axial coding, four trends of HCI were identified and proposed for technological development and future life scenarios. Natural user interfaces, brain-computer interfaces, and affective computing were related to the trend in natural interaction. Virtual environment and multimodal fusion technologies were important parts of the trend in multimodal display. Humans would have their own virtual avatars, including digital lives and virtual images. Big data in the cloud and edge computing became environmental elements of HCI paradigms. The specific life scenarios involved multiple areas such as housekeeping, driving, entertainment, and healthcare and are related to pragmatic and hedonic user experience factors. Personal data protection is still critical in 5G, and new technologies are developing. Human-centered frameworks combining space and time with high-precision positioning and accurate services could be further investigated.
The limitations of this study were that only subjective feedback was collected, and the concluded trends in HCI may be qualitative. However, this study emphasizes experts’ perspectives and speculation, and we thought it was acceptable to use these methods. This study reflected potential trend developments and application domains from 11 experts. There is more work required to achieve a deeper perspective.
References
Andrews, J.G., et al.: What will 5G be? IEEE J. Sel. Areas Commun. 32(6), 1065–1082 (2014)
Palattella, M.R., et al.: Internet of things in the 5G era: Enablers, architecture, and business models. IEEE J. Sel. Areas Commun. 34(3), 510–527 (2016)
Nordrum, A.: 5 myths about 5G. IEEE Spectr. (2016)
Lu, Y., Yu, C., Yi, X., Shi, Y., Zhao, S.: BlindType: eyes-free text entry on handheld touchpad by leveraging thumb’s muscle memory. In: Proceedings of the ACM on Inter-active, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 2, pp. 1–24 (2017)
Zhong, M., Yu, C., Wang, Q., Xu, X., Shi, Y.: ForceBoard: Subtle text entry leveraging pressure. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, Canada, pp. 1–10. ACM (2018)
Sun, K., Yu, C., Shi, W., Liu, L., Shi, Y.: Lip-interact: improving mobile device interaction with silent speech commands. In: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, German, pp. 581–593. ACM (2018)
Anumanchipalli, G.K., Chartier, J., Chang, E.F.: Speech synthesis from neural decoding of spoken sentences. Nature 568(7753), 493–498 (2019)
Hébert, C., et al.: Flexible graphene solution-gated field-effect transistors: Efficient transducers for micro-electrocorticography. Adv. Func. Mater. 28(12), 1703976 (2018)
Picard, R.W.: Affective Computing. MIT Press, London (2000)
Gavish, N.: Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interactive Learning Environments 23(6), 778–798 (2015)
Lantz, E.: The future of virtual reality: head mounted displays versus spatially immersive displays (panel). In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, United States, pp. 485–486. ACM (1996)
McLeod, K.: Living in the immaterial world: holograms and spirituality in recent popular music. Popul. Music Soc. 39(5), 501–515 (2016)
Wu, X., Boyle, L.N., Marshall, D., O’Brien, W.: The effectiveness of auditory forward collision warning alerts. Transp. Res. Part F Traffic Psychol. Behav. 59, 164–178 (2018)
Yu, X., et al.: Skin-integrated wireless haptic interfaces for virtual and augmented reality. Nature 575(7783), 473–479 (2019)
Xiao-qing, S., Xi-xiang, S.: The Impact of multisensory perception on consumer’s online purchase intention: Embodied cognition perspective. In: Proceedings of 23rd International Conference on Management Science and Engineering Management, Dubai, United Arab Emirates, pp. 200–206. IEEE (2015)
Goffman, E.: The Presentation of Self in Everyday Life. Harmondsworth, London (1978)
Tedeschi, J.T.: Impression Management Theory and Social Psychological Research. Academic Press, Salt Lake City (2013)
Laurila, J.K., et al.: The mobile data challenge: big data for mobile computing research (2012)
Hashem, I.A.T., Yaqoob, I., Anuar, N.B., Mokhtar, S., Gani, A., Khan, S.U.: The rise of “big data” on cloud computing: review and open research issues. Inf. Syst. 47, 98–115 (2015)
Ma, Y., et al.: An efficient index for massive IOT data in cloud environment. In: Proceedings of the 21st ACM International Conference on Information and Knowledge Management, Hawaii, USA, pp. 2129–2133. ACM (2012)
Bao, Y., Ren, L., Zhang, L., Zhang, X., Luo, Y.: Massive sensor data management framework in cloud manufacturing based on Hadoop. In: Proceedings of 10th International Conference on Industrial Informatics, Beijing, China, pp. 397–401. IEEE (2012)
Elhoseny, M., Abdelaziz, A., Salama, A.S., Riad, A.M., Muhammad, K., Sangaiah, A.K.: A hybrid model of Internet of Things and cloud computing to manage big data in health services applications. Future Gener. Comput. Syst. 86, 1383–1394 (2018)
Shi, W., Cao, J., Zhang, Q., Li, Y., Xu, L.: Edge computing: vision and challenges. IEEE Internet Things J. 3(5), 637–646 (2016)
Shi, W., Dustdar, S.: The promise of edge computing. Computer 49(5), 78–81 (2016)
Hu, Y.C., Patel, M., Sabella, D., Sprecher, N., Young, V.: Mobile edge computing—a key technology towards 5G. ETSI White Paper 11(11), 1–16 (2015)
Patel, M., Naughton, B., Chan, C., Sprecher, N., Abeta, S., Neal, A.: Mobile-edge computing introductory technical white paper. White Paper, Mobile-Edge Computing (MEC) Industry Initiative, pp. 1089–7801 (2014)
Abbas, N., Zhang, Y., Taherkordi, A., Skeie, T.: Mobile edge computing: a survey. IEEE Internet Things J. 5(1), 450–465 (2017)
Voigt, P., von dem Bussche, A.: The EU general data protection regulation (GDPR). Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57959-7
Erikson, E.H.: Childhood and Society, 2nd edn. Norton & Company, New York (1963)
© 2020 Springer Nature Switzerland AG
Zhao, J., Zhang, A., Rau, PL.P., Dong, L., Ge, L. (2020). Trends in Human-Computer Interaction in the 5G Era: Emerging Life Scenarios with 5G Networks. In: Rau, PL. (eds) Cross-Cultural Design. User Experience of Products, Services, and Intelligent Environments. HCII 2020. Lecture Notes in Computer Science(), vol 12192. Springer, Cham. https://doi.org/10.1007/978-3-030-49788-0_53