{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,15]],"date-time":"2024-08-15T02:09:23Z","timestamp":1723687763040},"reference-count":36,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2019,7,10]],"date-time":"2019-07-10T00:00:00Z","timestamp":1562716800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100014013","name":"UK Research and Innovation","doi-asserted-by":"publisher","award":["2018-1"],"id":[{"id":"10.13039\/100014013","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003141","name":"Consejo Nacional de Ciencia y Tecnolog\u00eda","doi-asserted-by":"publisher","award":["Scholarship No. 276379"],"id":[{"id":"10.13039\/501100003141","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"Activity recognition, a key component in pervasive healthcare monitoring, relies on classification algorithms that require labeled data of individuals performing the activity of interest to train accurate models. Labeling data can be performed in a lab setting where an individual enacts the activity under controlled conditions. The ubiquity of mobile and wearable sensors allows the collection of large datasets from individuals performing activities in naturalistic conditions. Gathering accurate data labels for activity recognition is typically an expensive and time-consuming process. In this paper we present two novel approaches for semi-automated online data labeling performed by the individual executing the activity of interest. 
The approaches have been designed to address two of the limitations of self-annotation: (i) the burden on the user performing and annotating the activity, and (ii) the lack of accuracy due to the user labeling the data minutes or hours after the completion of an activity. The first approach is based on the recognition of subtle finger gestures performed in response to a data-labeling query. The second approach focuses on labeling activities that have an auditory manifestation and uses a classifier to obtain an initial estimate of the activity, and a conversational agent to ask the participant for clarification or for additional data. Both approaches are described and evaluated in controlled experiments to assess their feasibility, and their advantages and limitations are discussed. Results show that while both studies have limitations, they achieve 80% to 90% precision.","DOI":"10.3390\/s19143035","type":"journal-article","created":{"date-parts":[[2019,7,10]],"date-time":"2019-07-10T15:56:51Z","timestamp":1562774211000},"page":"3035","source":"Crossref","is-referenced-by-count":15,"title":["Semi-Automated Data Labeling for Activity Recognition in Pervasive Healthcare"],"prefix":"10.3390","volume":"19","author":[{"given":"Dagoberto","family":"Cruz-Sandoval","sequence":"first","affiliation":[{"name":"CICESE (Centro de Investigacion Cientifica y de Educacion Superior de Ensenada), Ensenada 22860, Mexico"}]},{"given":"Jessica","family":"Beltran-Marquez","sequence":"additional","affiliation":[{"name":"IPN (Instituto Politecnico Nacional), Tijuana 22435, Mexico"},{"name":"CONACYT (Consejo Nacional de Ciencia y Tecnolog\u00eda), Ciudad de Mexico 03940, Mexico"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3420-0532","authenticated-orcid":false,"given":"Matias","family":"Garcia-Constantino","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]},{"given":"Luis 
A.","family":"Gonzalez-Jasso","sequence":"additional","affiliation":[{"name":"INIFAP (Instituto Nacional de Investigaciones Forestales, Agricolas y Pecuarias), Aguascalientes 20660, Mexico"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-2967-9654","authenticated-orcid":false,"given":"Jesus","family":"Favela","sequence":"additional","affiliation":[{"name":"CICESE (Centro de Investigacion Cientifica y de Educacion Superior de Ensenada), Ensenada 22860, Mexico"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-3979-9465","authenticated-orcid":false,"given":"Irvin Hussein","family":"Lopez-Nava","sequence":"additional","affiliation":[{"name":"CICESE (Centro de Investigacion Cientifica y de Educacion Superior de Ensenada), Ensenada 22860, Mexico"},{"name":"CONACYT (Consejo Nacional de Ciencia y Tecnolog\u00eda), Ciudad de Mexico 03940, Mexico"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-2368-7354","authenticated-orcid":false,"given":"Ian","family":"Cleland","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]},{"given":"Andrew","family":"Ennis","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3603-4806","authenticated-orcid":false,"given":"Netzahualcoyotl","family":"Hernandez-Cruz","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-6318-8456","authenticated-orcid":false,"given":"Joseph","family":"Rafferty","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-6768-7877","authenticated-orcid":false,"given":"Jonathan","family":"Synnott","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, 
UK"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-0882-7902","authenticated-orcid":false,"given":"Chris","family":"Nugent","sequence":"additional","affiliation":[{"name":"School of Computing, Ulster University, Jordanstown BT37 0QB, UK"}]}],"member":"1968","published-online":{"date-parts":[[2019,7,10]]},"reference":[{"key":"ref_1","unstructured":"Yordanova, K., Paiement, A., Schr\u00f6der, M., Tonkin, E., Woznowski, P., Olsson, C.M., Rafferty, J., and Sztyler, T. (2018, January 19\u201323). Challenges in annotation of useR data for UbiquitOUs systems: Results from the 1st ARDUOUS workshop. Proceedings of the International Conference on Pervasive Computing and Communications Workshops, Athens, Greece."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Winnicka, A., Kesik, K., Polap, D., Wo\u017aniak, M., and Marsza\u0142ek, Z. (2019). A Multi-Agent Gamification System for Managing Smart Homes. Sensors, 19.","DOI":"10.3390\/s19051249"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Bravo, J., Herv\u00e1s, R., Fontecha, J., and Gonz\u00e1lez, I. (2018). m-Health: Lessons Learned by m-Experiences. Sensors, 18.","DOI":"10.3390\/s18051569"},{"key":"ref_4","first-page":"1","article-title":"A smartphone application for automated decision support in cognitive task based evaluation of central nervous system motor disorders","volume":"1","author":"Lauraitis","year":"2019","journal-title":"IEEE J. Biomed. Health Inf."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1192","DOI":"10.1109\/SURV.2012.110112.00192","article-title":"A survey on human activity recognition using wearable sensors","volume":"15","author":"Lara","year":"2012","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1145\/2134203.2134205","article-title":"Multimodal recognition of reading activity in transit using body-worn sensors","volume":"9","author":"Bulling","year":"2012","journal-title":"ACM Trans. Appl. 
Percept."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"419","DOI":"10.1207\/S15327078IN0204_02","article-title":"What Is Ecological Validity? A Dimensional Analysis","volume":"2","author":"Schmuckler","year":"2001","journal-title":"Infancy"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2137","DOI":"10.1001\/jama.2013.281865","article-title":"Do Flawed Data on Caloric Intake From NHANES Present Problems for Researchers and Policy Makers?","volume":"310","author":"Mitka","year":"2013","journal-title":"J. Am. Med. Assoc."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Gonz\u00e1lez-Jasso, L.A., and Favela, J. (2018). Data Labeling for Participatory Sensing Using Gesture Recognition with Smartwatches. Proceedings, 2.","DOI":"10.3390\/proceedings2191210"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Garcia-Constantino, M., Beltran-Marquez, J., Cruz-Sandoval, D., Lopez-Nava, I., Favela, J., Ennis, A., Nugent, C., Rafferty, J., Cleland, I., and Synnott, J. (2019, January 11\u201315). Semi-automated Annotation of Audible Home Activities. Proceedings of the ARDUOUS 19\u20143rd International Workshop on Annotation of useR Data for UbiquitOUs Systems inside PerCom Pervasive Computing, Kyoto, Japan.","DOI":"10.1109\/PERCOMW.2019.8730729"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Schr\u00f6der, M., Yordanova, K., Bader, S., and Kirste, T. (2016, January 23\u201324). Tool support for the online annotation of sensor data. Proceedings of the 3rd International Workshop on Sensor-based Activity Recognition and Interaction, Rostock, Germany.","DOI":"10.1145\/2948963.2948972"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Cruciani, F., Cleland, I., Nugent, C., McCullagh, P., Synnes, K., and Hallberg, J. (2018). Automatic Annotation for Human Activity Recognition in Free Living Using a Smartphone. 
Sensors, 18.","DOI":"10.3390\/s18072203"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Kipp, M. (2001, January 3\u20137). ANVIL\u2014A generic annotation tool for multimodal dialogue. Proceedings of the 7th European Conference on Speech Communication and Technology, Aalborg, Denmark.","DOI":"10.21437\/Eurospeech.2001-354"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Cowie, R., Sawey, M., Doherty, C., Jaimovich, J., Fyans, C., and Stapleton, P. (2013, January 2\u20135). Gtrace: General trace program compatible with emotionml. Proceedings of the Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland.","DOI":"10.1109\/ACII.2013.126"},{"key":"ref_15","unstructured":"Brugman, H., and Russel, A. (2004, January 26\u201328). Annotating Multi-media\/Multi-modal Resources with ELAN. Proceedings of the 4th International Conference on Language Resources and Language Evaluation; European Language Resources Association (ELRA), Lisbon, Portugal."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Tonkin, E., Burrows, A., Woznowski, P., Laskowski, P., Yordanova, K., Twomey, N., and Craddock, I. (2018). Talk, Text, Tag? Understanding Self-Annotation of Smart Home Data from a User\u2019s Perspective. Sensors, 18.","DOI":"10.3390\/s18072365"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Lasecki, W.S., Song, Y.C., Kautz, H., and Bigham, J.P. (2013, January 23\u201327). Real-time crowd labeling for deployable activity recognition. Proceedings of the 2013 Conference on Computer Supported Cooperative Work, San Antonio, TX, USA.","DOI":"10.1145\/2441776.2441912"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"86","DOI":"10.1136\/ebmental-2016-102418","article-title":"Use of the experience sampling method in the context of clinical trials","volume":"19","author":"Verhagen","year":"2016","journal-title":"Evid.-Based Ment. 
Health"},{"key":"ref_19","unstructured":"Arslan, U., D\u00f6nderler, M.E., Saykol, E., Ulusoy, \u00d6., and G\u00fcd\u00fckbay, U. (2002, January 22\u201329). A semi-automatic semantic annotation tool for video databases. Proceedings of the Workshop on Multimedia Semantics, Milovy, Czech Republic."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kubat, R., DeCamp, P., Roy, B., and Roy, D. (2007, January 12). Totalrecall: Visualization and semi-automatic annotation of very large audio-visual corpora. Proceedings of the 9th International Conference on Multimodal Interfaces, Nagoya, Aichi, Japan.","DOI":"10.1145\/1322192.1322229"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"169","DOI":"10.1017\/S1355771800003071","article-title":"Marsyas: A framework for audio analysis","volume":"4","author":"Tzanetakis","year":"2000","journal-title":"Organ. Sound"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1016\/S0167-6393(96)00037-4","article-title":"Automatic segmentation and labeling of multi-lingual speech data","volume":"19","author":"Vorstermans","year":"1996","journal-title":"Speech Commun."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"896","DOI":"10.1016\/j.imavis.2014.08.004","article-title":"Automatic annotation of tennis games: An integration of audio, vision, and learning","volume":"32","author":"Yan","year":"2014","journal-title":"Image Vis. Comput."},{"key":"ref_24","unstructured":"Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., and Tsch\u00f6pel, S. (2010, January 16). Automatic annotation of media field recordings. Proceedings of the Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, University de Lisbon, Lisbon, Portugal."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Intille, S.S., Rondoni, J., Kukla, C., Ancona, I., and Bao, L. (2003, January 5\u201310). A context-aware experience sampling tool. 
Proceedings of the CHI\u201903 Extended Abstracts on Human Factors in Computing Systems, Ft. Lauderdale, FL, USA.","DOI":"10.1145\/765891.766101"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"339","DOI":"10.1111\/j.1467-6494.1991.tb00252.x","article-title":"Self-Recording of Everyday Life Events: Origins, Types, and Uses","volume":"59","author":"Wheeler","year":"1991","journal-title":"J. Personal."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Yordanova, K., and Kr\u00fcger, F. (2018). Creating and Exploring Semantic Annotation for Behaviour Analysis. Sensors, 18.","DOI":"10.3390\/s18092778"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"e5","DOI":"10.5334\/jors.ar","article-title":"CARMA: Software for continuous affect rating and media annotation","volume":"2","author":"Girard","year":"2014","journal-title":"J. Open Res. Softw."},{"key":"ref_29","first-page":"92","article-title":"Microinteraction Ecological Momentary Assessment Response Rates: Effect of Microinteractions or the Smartwatch?","volume":"1","author":"Ponnada","year":"2017","journal-title":"Proc. ACM Interac. Mobile Wearable Ubiquitous Technol."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1054","DOI":"10.1109\/JSEN.2015.2497279","article-title":"Detection of gestures associated with medication adherence using smartwatch-based inertial sensors","volume":"16","author":"Kalantarian","year":"2015","journal-title":"IEEE Sens. J."},{"key":"ref_31","unstructured":"Costante, G., Porzi, L., Lanz, O., Valigi, P., and Ricci, E. (2014, January 1\u20135). Personalizing a smartwatch-based gesture interface with transfer learning. Proceedings of the 22nd European Signal Processing Conference, Lisbon, Portugal."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Wen, H., Ramos Rojas, J., and Dey, A.K. (2016, January 7\u201312). Serendipity: Finger gesture recognition using an off-the-shelf smartwatch. 
Proceedings of the CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA.","DOI":"10.1145\/2858036.2858466"},{"key":"ref_33","unstructured":"Hsu, C.W., Chang, C.C., and Lin, C.J. (2003). A Practical Guide to Support Vector Classification, National Taiwan University, Department of Computer Science. Technical Report."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.patrec.2015.08.027","article-title":"Scalable identification of mixed environmental sounds, recorded from heterogeneous sources","volume":"68","author":"Favela","year":"2015","journal-title":"Pattern Recognit. Lett."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"145","DOI":"10.1007\/s00779-018-01188-8","article-title":"Recognition of audible disruptive behavior from people with dementia","volume":"23","author":"Navarro","year":"2019","journal-title":"Pers. Ubiquitous Comput."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"63","DOI":"10.1016\/j.robot.2019.01.002","article-title":"Lightweight and optimized sound source localization and tracking methods for open and closed microphone array configurations","volume":"113","author":"Grondin","year":"2019","journal-title":"Robot. Auton. 
Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/14\/3035\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,6,19]],"date-time":"2024-06-19T01:06:04Z","timestamp":1718759164000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/14\/3035"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,10]]},"references-count":36,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2019,7]]}},"alternative-id":["s19143035"],"URL":"https:\/\/doi.org\/10.3390\/s19143035","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,7,10]]}}}