Abstract
For almost three decades, research on auditory displays and sonification has steadily advanced. The auditory display community has now arrived at the stage of sonic information design, with a more systematic, refined necessity that goes beyond arbitrary mappings between referents and sounds. Given the innately transdisciplinary nature of auditory display, it would be difficult to unify the methods used to study it. This special issue therefore covers a diverse collection of approaches to auditory displays, involving art, design, science, and research. Accordingly, the works in the present special issue include new theories, frameworks, methods, and applications concerning auditory displays and auditory user interfaces. We hope that this special issue can present the state of the art of auditory display research and auditory user interface design, offering fresh inspiration and motivation to researchers and designers for their future work.
1 Introduction
In the early days of auditory displays, adding sounds to computers was in itself a novel attempt to broaden the concept of user interfaces. The next phase of auditory display research was accelerated by the formation of ICAD (International Community for Auditory Display: http://icad.org), with its first conference in 1992 [6]. For more than 25 years, new terms, theories, and techniques have emerged at ICAD. It seems that we are now in the third phase of auditory display research, spurring further growth in the domain. Our community is moving forward beyond “designs” based merely on analogies to visual displays or simple mappings between referents and sounds [4]. Our effort is to establish necessity in mappings so that the outcomes of auditory displays and sonification can meet users’ expectations in terms of acceptance, aesthetics, and usability. We believe that design research is well aligned with the pragmatic nature of auditory displays and sonification, and serves as a bridge between the scientific and artistic research paradigms in our discipline [12].
Designers and researchers have tried to make auditory displays and auditory user interfaces more useful in numerous areas, extending typical visuo-centric interactions to multimodal and multisensorial systems. Application areas include education, assistive technologies, auditory wayfinding, auditory graphs, speech interfaces, virtual and augmented reality environments, artistic performances, and associated perceptual, cognitive, technical, and technological research and development. Research through design [2] or embedded design research has recently become more pervasive for auditory display designers. Following the progression of developments in auditory displays and the multimodal community [5, 13], this special issue aimed at embracing all types of “design” activities as a necessary process in auditory displays and sonification.
In addition, methodical evaluation and analysis have become more prominent, leading to a more robust science. In this iterative process, auditory displays can achieve improved reliability through repeatable scientific research. In some areas, the auditory display and sonification community has already reached this science stage, while in others it is still exploring the possibilities.
The pursuit of novelty encourages artists to seek the integration of different types of art and to transform modalities. By definition, auditory displays and sonification transform data into sound. Owing to the characteristics of this transformation, there have been active interactions between auditory displays and various forms of art. Thus, this special issue invited contributions addressing artistic approaches to auditory displays and auditory user interfaces. Rather than insisting on a specific approach, we encouraged contributions from a broad spectrum of strategies, because we strongly believe that all of these approaches—art, design, science, and research—should be balanced and applied flexibly, depending on the circumstances, to advance the theory and practice of auditory displays and sonification.
2 Summary of contributions
This special issue concerning Auditory Displays and Auditory User Interfaces: Art-Design-Science-Research (ADSR) was inspired by the theme of the ICAD 2018 Conference, a wordplay on the term “ADSR” (Attack-Decay-Sustain-Release), commonly used in sound-related domains. Although this special issue was motivated by ICAD 2018, authors of new manuscripts under this broad theme were also encouraged to submit to it.
Each submission was subjected to the journal’s normal rigorous review process, which included peer reviews by two to four external reviewers, in addition to reviews by the three guest editors. At the conclusion of the review and revision cycles, seven of the 16 submitted manuscripts were accepted (44% acceptance rate) to form this Special Issue on Auditory Displays and Auditory User Interfaces.
The topics discussed in this special issue are representative of the latest auditory display research. They touch upon interactive designs of audification and sonification frameworks for various types of data, effective parameter mapping, and the use of audio and multimodal interfaces for the effective design of assistive interfaces and tools.
Roddy and Bridges presented a theoretical approach to the data-to-sound mapping problem, an issue common in sonic information design [11]. Their proposed framework, the “Embodied Sonification Listening Model,” provided a theoretical description of sonification interpretation in terms of Conceptual Metaphor Theory. Factors identified as affecting sonification interpretation included cultural context and the listeners’ embodied knowledge. The authors argued that adopting approaches such as the proposed model can help designers select more effective and inclusive mapping solutions.
Newbold, Gold, and Bianchi-Berthouze constructed a theoretical model of the effects of sonification on users’ movement, based on Huron’s theory of psychological expectancy in music [9]. They then validated this model in terms of harmonic stability within sonification and contextual (visual) cues. This work provides not only a theoretical basis for musical sonification, but also practical guidelines for interactive sonification design.
In their work, Landry and Jeon developed and evaluated a framework for real-time musical sonifications of dancers’ movements based on their evoked emotions [7]. They demonstrated that an increase in the number of musical mappings used for such sonifications improved the perception of the dancers’ emotions. In addition, they showed that the level of emotion recognition reached using this framework was equivalent to that reached by pre-composed music, confirming the suitability of sonification as a technique for conveying emotions through sound.
Groß-Vogt, Frank, and Höldrich presented a novel sonification strategy, called “focused audification,” with an example based on seismological data [3]. By combining single-sideband modulation and pitch shifting of the original data stream, this method enables the sonification’s frequency range to be scaled and adjusted to the human hearing range. Such work opens the door to more flexible user interactions with the sonification system, enabling more interactive explorations of a data set.
With their work, Patrick, Letowski, and McBride added to the ongoing research on the relationship between air- and bone-conducted sound perception [10]. Their work revolved around equal-loudness perception curves of the two modalities and the Conduction Equivalency Ratio (CER) metric. Through systematic psychoacoustic testing, the authors demonstrated that the perception of loudness in bone conduction can be affected by sound intensity, frequency, and the placement location of the transducer on a listener’s head. These results further support existing findings on the optimal placement of bone conduction headphones.
Aldana Blanco, Grautoff, and Hermann studied sonification possibilities for the support, monitoring, and diagnosis of myocardial infarction [1]. Four sonification designs were proposed and subjectively evaluated based on different criteria, including detection performance, classification accuracy, and aesthetics. Results indicated that different sonification schemes were effective in fulfilling different criteria, highlighting the importance of utilizing different sonification strategies when trying to analyze such convoluted and critical conditions.
Matoušek et al. described the design and evaluation of a web-based assistive system for visually impaired students in lower secondary education [8]. The system used text-to-speech technologies, but emphasized the presentation of non-textual content, such as mathematical formulas, images, and figures. Such a framework can be effective in mathematics, physics, and other school subjects that rely considerably on non-textual content for didactic purposes.
3 Conclusion
We hope these articles contribute new thoughts and present exciting challenges. At the same time, we recognize that this collection represents only a small fraction of the relevant research and design issues related to auditory displays and auditory user interfaces. As we prepared this special issue, we realized that more progress is required to formulate design methods for auditory displays and sonification. We foresee that, as research and science in this field progress, we will see an ever-increasing number of publications in this domain, which in turn will create the need for more special issues like this one. We very much appreciate all the authors and reviewers for their contributions to this special issue. We hope that readers enjoy this bundled “audio”-book.
References
Aldana Blanco AL, Grautoff S, Hermann T (2020) ECG sonification to support the diagnosis and monitoring of myocardial infarction. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00319-x
Faste T, Faste H (2012) Demystifying “design research”: design is not research, research is design. In: IDSA education symposium, vol 2012, p 15
Groß-Vogt K, Frank M, Höldrich R (2019) Focused audification and the optimization of its parameters. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-019-00317-8
Jeon M, Walker BN, Barrass S (2019) Introduction to the special issue on sonic information design: theory, methods, and practice, part 2. Ergon Des 27(1):4
Katz BFG, Marentakis G (2016) Advances in auditory display research. J Multimodal User Interfaces 10(SI: Auditory Display):191–193. https://doi.org/10.1007/s12193-016-0226-7
Kramer G (1994) Auditory display: sonification, audification, and auditory interfaces. Addison-Wesley Longman, Boston
Landry S, Jeon M (2020) Interactive sonification strategies for the motion and emotion of dance performances. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00321-3
Matoušek J, Krňoul Z, Campr M et al (2020) Speech and web-based technology to enhance education for pupils with visual impairment. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00323-1
Newbold J, Gold NE, Bianchi-Berthouze N (2020) Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00322-2
Patrick RNC, Letowski TR, McBride ME (2020) A multimodal auditory equal-loudness comparison of air and bone conducted sounds. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00320-4
Roddy S, Bridges B (2020) Mapping for meaning: the embodied sonification listening model and its implications for the mapping problem in sonic information design. J Multimodal User Interfaces. https://doi.org/10.1007/s12193-020-00318-y
Walker B, Nees M (2011) Theory of sonification. In: Hermann T, Hunt A, Neuhoff JG (eds) The sonification handbook. Logos Verlag, Berlin, pp 9–39
Yang J, Hermann T, Bresin R (2019) Introduction to the special issue on interactive sonification. J Multimodal User Interfaces 13(SI: Interactive Sonification):151–153. https://doi.org/10.1007/s12193-019-00312-z
Cite this article
Jeon, M., Andreopoulou, A. & Katz, B.F.G. Auditory displays and auditory user interfaces: art, design, science, and research. J Multimodal User Interfaces 14, 139–141 (2020). https://doi.org/10.1007/s12193-020-00324-0