1 Introduction

In the early days of auditory displays, adding sounds to computers was in itself a novel attempt to broaden the concept of user interfaces. The next phase of auditory display research was accelerated by the formation of ICAD (International Community for Auditory Display: http://icad.org), which held its first conference in 1992 [6]. For more than 25 years, new terms, theories, and techniques have emerged at ICAD. We now appear to be in the third phase of auditory display research, which is spurring further growth in the domain. Our community is moving beyond “designs” based merely on analogies to visual displays or simple mappings between referents and sounds [4]. Our effort is to establish necessity in mappings so that the outcomes of auditory displays and sonification meet users’ expectations in terms of acceptance, aesthetics, and usability. We believe that design research is well aligned with the pragmatic nature of auditory displays and sonification, and that it bridges the scientific and artistic research paradigms in our discipline [12].

Designers and researchers have worked to make auditory displays and auditory user interfaces more useful in numerous areas, extending typically visuo-centric interactions to multimodal and multisensory systems. Application areas include education, assistive technologies, auditory wayfinding, auditory graphs, speech interfaces, virtual and augmented reality environments, and artistic performances, along with the associated perceptual, cognitive, technical, and technological research and development. Research through design [2], or embedded design research, has recently become more pervasive among auditory display designers. Following the progression of developments in the auditory display and multimodal communities [5, 13], this special issue aimed to embrace all types of “design” activities as a necessary process in auditory displays and sonification.

In addition, methodical evaluation and analysis have become more prominent, leading to a more robust science. Through this iterative process, auditory displays can achieve improved reliability via repeatable scientific research. In some areas, the auditory display and sonification community has already reached this scientific stage, while in others it is still exploring the possibilities.

The pursuit of novelty encourages artists to integrate different art forms and to transform across modalities. By definition, auditory displays and sonification transform data into sound. Owing to the nature of this transformation, there has been active interplay between auditory displays and various forms of art. This special issue therefore invited contributions addressing artistic approaches to auditory displays and auditory user interfaces. Rather than insisting on a specific approach, we encouraged contributions from a broad spectrum of strategies, because we strongly believe that all of these approaches—art, design, science, and research—should be balanced and applied flexibly, depending on the circumstances, to advance the theory and practice of auditory displays and sonification.

2 Summary of contributions

This special issue on Auditory Displays and Auditory User Interfaces: Art-Design-Science-Research (ADSR) was inspired by the theme of the ICAD 2018 Conference, a wordplay on the term “ADSR” (Attack-Decay-Sustain-Release) commonly used in sound-related domains. Although the special issue was motivated by ICAD 2018, authors of new manuscripts under this broad theme were also encouraged to submit.

Each submission was subjected to the journal’s normal rigorous review process, which included peer reviews by two to four external reviewers in addition to reviews by the three guest editors. At the conclusion of the review and revision cycles, seven of the 16 submissions were accepted (an acceptance rate of 44%) to form this Special Issue on Auditory Displays and Auditory User Interfaces.

The topics discussed in this special issue are representative of the latest auditory display research. They touch upon interactive designs of audification and sonification frameworks for various types of data, effective parameter mapping, and the use of audio and multimodal interfaces in the design of assistive interfaces and tools.

Roddy and Bridges presented a theoretical approach to the data-to-sound mapping problem, an issue common in sonic information design [11]. Their proposed framework, the Embodied Sonification Listening Model, describes how sonifications are interpreted in terms of Conceptual Metaphor Theory. Factors identified as affecting sonification interpretation include cultural context and the listener’s background of embodied knowledge. The authors argued that adopting approaches such as the proposed model can help designers select more effective and inclusive mapping solutions.

Newbold, Gold, and Bianchi-Berthouze constructed a theoretical model of the effects of sonification on users’ movement, based on Huron’s theory of psychological expectancy in music [9]. They then validated this model with respect to harmonic stability within the sonification and contextual (visual) cues. This work provides not only a theoretical basis for musical sonification, but also practical guidelines for interactive sonification design.

In their work, Landry and Jeon developed and evaluated a framework for real-time musical sonification of dancers’ movements based on the emotions those movements evoke [7]. They demonstrated that increasing the number of musical mappings used in such sonifications improved the perception of the dancers’ emotions. In addition, they showed that the level of emotion recognition achieved with this framework was equivalent to that achieved with pre-composed music, confirming the suitability of sonification as a technique for conveying emotions through sound.
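
As a generic illustration of this kind of layered parameter mapping, the sketch below maps a few movement features onto musical control parameters. The feature names and mapping rules are invented for the example and are not the mappings used in [7]; the point is only to show how multiple concurrent mappings can carry affective information.

```python
# Illustrative layered parameter mapping: hypothetical movement features
# drive musical parameters in real time. Not the framework from [7].
from dataclasses import dataclass

@dataclass
class MovementFrame:
    speed: float        # normalized 0..1, overall motion energy
    smoothness: float   # normalized 0..1, jerk-based smoothness
    openness: float     # normalized 0..1, expansion of the pose

def map_to_music(frame: MovementFrame) -> dict:
    """Map one frame of movement features to musical control parameters."""
    return {
        "tempo_bpm": 60 + 80 * frame.speed,                    # energy -> tempo
        "mode": "major" if frame.openness > 0.5 else "minor",  # openness -> mode
        "articulation": "legato" if frame.smoothness > 0.5 else "staccato",
        "loudness_db": -20 + 15 * frame.speed,                 # energy -> dynamics
    }

print(map_to_music(MovementFrame(speed=0.8, smoothness=0.3, openness=0.7)))
```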

Groß-Vogt, Frank, and Höldrich presented a novel sonification strategy called “focused audification,” illustrated with seismological data [3]. By combining single-sideband modulation with pitch scaling of the original data stream, the method makes the sonification’s frequency range scalable and adjustable to the human hearing range. Such work opens the door to more flexible user interaction with the sonification system, enabling more interactive exploration of a data set.
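
For readers unfamiliar with the signal-processing ingredients, the sketch below shows the two operations named above: pitch scaling by sample-rate reinterpretation (classic audification) and single-sideband (SSB) frequency shifting via the analytic signal. It is a generic illustration under our own assumptions, not the authors’ implementation; the seismic trace and parameter values are placeholders.

```python
# Minimal sketch of SSB frequency shifting for audification.
# Illustrative only; not the implementation from [3].
import numpy as np
from scipy.signal import hilbert

def ssb_shift(x: np.ndarray, shift_hz: float, fs: float) -> np.ndarray:
    """Shift every spectral component of x by shift_hz via SSB modulation."""
    analytic = hilbert(x)                        # suppresses negative frequencies
    t = np.arange(len(x)) / fs
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

fs_data = 100.0                                  # seismometer sample rate (Hz)
fs_audio = 44_100.0                              # audio playback rate (Hz)
trace = np.random.randn(int(fs_data * 3600))     # placeholder seismic trace

# Step 1: classic audification -- reinterpret the samples at audio rate,
# which multiplies all signal frequencies by fs_audio / fs_data (441x here).
audified = trace

# Step 2: SSB-shift the compressed band into a comfortable listening range;
# varying shift_hz interactively "focuses" the display on a spectral region.
audio = ssb_shift(audified, shift_hz=200.0, fs=fs_audio)
audio /= np.max(np.abs(audio))                   # normalize for playback
```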

With their work, Patrick, Letowski, and McBride added to the ongoing research on the relationship between air- and bone-conducted sound perception [10]. Their work revolved around equal-loudness perception curves for the two modalities and the Conduction Equivalency Ratio (CER) metric. Through systematic psychoacoustic testing, the authors demonstrated that loudness perception in bone conduction can be affected by sound intensity, frequency, and the placement of the transducer on the listener’s head. These results further support existing findings on the optimal placement of bone-conduction headphones.
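
To make the metric concrete, here is a minimal sketch of tabulating an equal-loudness comparison between the two conduction paths. It assumes that the CER can be summarized as the level difference between equally loud air- and bone-conducted tones; the exact definition, units, and values are given in [10], and all numbers below are placeholders, not the authors’ data.

```python
# Sketch: equal-loudness matches between air-conducted (AC) and
# bone-conducted (BC) tones. Assumes CER reduces to a level difference
# at equal loudness (see [10] for the exact definition). Placeholder data.
matches = {
    # frequency (Hz): (AC level in dB, BC level judged equally loud in dB)
    250:  (60.0, 48.5),
    1000: (60.0, 52.0),
    4000: (60.0, 57.5),
}

for freq, (ac_db, bc_db) in matches.items():
    diff_db = ac_db - bc_db   # level difference at equal loudness
    print(f"{freq:>5} Hz: level difference {diff_db:+.1f} dB")
```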

Aldana Blanco, Grautoff, and Hermann studied sonification possibilities for the support, monitoring, and diagnosis of myocardial infarction [1]. Four sonification designs were proposed and evaluated with listeners against several criteria, including detection performance, classification accuracy, and aesthetics. The results indicated that different sonification schemes were effective at fulfilling different criteria, highlighting the importance of combining sonification strategies when analyzing such complex and critical conditions.

Matoušek et al. described the design and evaluation of a web-based assistive system for visually impaired students in lower secondary education [8]. The system uses text-to-speech technologies but emphasizes the presentation of non-textual content, such as mathematical formulas, images, and figures. Such a framework can be effective in mathematics, physics, and other school subjects that rely considerably on non-textual content for didactic purposes.
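
As a toy illustration of the kind of non-textual-to-speech conversion such a system requires, the sketch below verbalizes a simple formula before it would be handed to a TTS engine. The symbol table and rules are invented for the example and do not reflect the conversion rules used in [8].

```python
# Toy formula verbalization ahead of TTS synthesis. Invented rules,
# not the conversion pipeline from [8].
SPOKEN = {"+": "plus", "-": "minus", "=": "equals", "^2": "squared", "/": "over"}

def speak_formula(formula: str) -> str:
    """Convert a simple linear formula string into speakable text."""
    text = formula
    for symbol, word in SPOKEN.items():
        text = text.replace(symbol, f" {word} ")
    return " ".join(text.split())             # collapse extra whitespace

print(speak_formula("x^2 + 2x = 8"))          # -> "x squared plus 2x equals 8"
```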

3 Conclusion

We hope these articles contribute new ideas and present exciting challenges. At the same time, we recognize that this collection represents only a small fraction of the relevant research and design issues related to auditory displays and auditory user interfaces. In preparing this special issue, we realized that more progress is required to formulate design methods for auditory displays and sonification. We foresee that, as research and science in this field progress, we will see an even greater number of publications in this domain, which in turn will increase the need for more special issues like this one. We very much appreciate all the authors and reviewers for their contributions to this special issue. We hope that readers enjoy this bundle as a kind of “audio” book.