Merlin Sunley

Data Sonification: An Investigation into its Techniques and Technologies



Innovation within the context of audio practice is a multi-faceted concept. It can take the form of envisioning new ways of interacting with sound, devising novel techniques of sound creation, performance methods, diffusion technologies, cataloguing and much more. Every aspect of the audio realm is rife with innovative possibilities.


The area on which this essay will focus, namely Data Sonification, encompasses and is in dialogue with all of the abovementioned disciplines. This paper will document the history and technological landscape of this relatively new specialisation and suggest how its use could provide new insight into, and potential solutions to, our increasingly novel problems.

Definition and History of Data Sonification

With the rapidly accelerating growth across all sectors of society in the creation and consumption of raw data, the need has arisen for new modalities for interpreting and deriving meaning from such data (Lenzi & Ciuccarelli, 2020). The use of sound as a method of non-verbally communicating information, however, is certainly not a new concept (Worrall, 2018). In a number of journal articles, Axel Volmar (2013a, 2013b) lays out various historical examples of the practice of sonifying non-audible phenomena for the purpose of listening for information:


In 1878 the physiologists Julius Bernstein and Ludimar Hermann performed various experiments in which a telephone was used as an auditory display (Volmar, 2013b). They used the telephone to induce a current into the muscles of a vivisected frog. At first the experiments were unsuccessful because the telephone could not render the weak currents audible. Three years later, however, new and more sensitive devices were used to conduct the experiments, enabling the researchers to sonify the natural electric currents present in the animals' muscles (Volmar, 2013b).


Another instance, and one of the most recognised uses of sonification, is the Geiger counter. Invented by Hans Geiger and Walther Müller in the early 20th century, it is still widely used today. Producing clicks in response to environmental radiation levels, it represents an early form of auditory display. Every particle detected is converted into an electrical pulse which is transmitted through headphones to the user; the more particles detected, the higher the click rate (Hermann et al., 2011). This capitalizes on the user's ability to perceive minute changes in sonic events, leaving their eyes free for other tasks (Kramer et al., 1999).


A key example of the innovative use of sound as a means of interpreting data was the work of psychoacoustician Sheridan Speeth. In the early 1960s Speeth used signal processing techniques to transpose seismic signals into the audible range in an attempt to find a method of distinguishing earthquakes from underground nuclear explosions (Volmar, 2013a). This line of research was abandoned for a number of reasons, not least Speeth’s radical politics; seismologists were not convinced that this modality produced compelling insight into questions pertinent to their research (Supper, 2015). It was not until the early 1990s that this work resurfaced in the incipient sonification community (Supper, 2015).

Modern Era of Sonification

Data sonification in its modern context traces its roots to the beginning of the digital revolution in the 1980s and can be defined as a subclass of auditory display that utilises non-speech audio as a way of representing information (Kramer et al., 1999). The aim of sonification can be seen as the translation of correspondences in data into human-perceptible sounds in a way that renders the data relationships comprehensible (Hermann et al., 2011).

Sonification Tools & Techniques

The tools available to researchers and practitioners are broadly determined by the sonification technique being employed. Hermann et al. (2011) identify five sonification techniques: Audification, Auditory Icons, Earcons, Parameter Mapping and Model-Based Sonification. Worrall (2019), meanwhile, breaks the techniques down by data-type representation, into Discrete, Continuous and Interactive Data Representations, which allows further explication into sub-categories such as Wave Space and Homomorphic Modulation Sonification, Morphocons and Physical Modelling.


Audification

Audification is the simplest form of data sonification. Kramer (1994) describes it as “a direct translation of a data waveform to the audible domain for the purposes of monitoring and comprehension”. A common way of visualising this would be a Cartesian graph: if the visualised data exhibits a wave-like shape, an EEG signal for example, audifying it would mean treating the signal’s values as air pressure and playing back the result via a loudspeaker, making the data audible (Hermann et al., 2011).
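As a rough illustration (my own sketch, not taken from the cited sources), the direct translation Kramer describes can be approximated in a few lines of Python: the samples of a data series are normalised and written out as audio samples, so a signal recorded at a low sampling rate is compressed in time and transposed into the audible range. The synthetic EEG-like signal and the sampling rates below are assumptions for illustration only.

```python
import numpy as np
from scipy.io import wavfile

# Illustrative stand-in for a recorded data series (e.g. one EEG channel):
# a slow oscillation plus noise, sampled at 256 Hz for 60 seconds.
data_rate = 256
t = np.arange(0, 60, 1 / data_rate)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

# Audification: treat each data sample as an audio sample. Playing the
# series back at 44.1 kHz compresses ~60 s of data into ~0.35 s of sound
# and transposes its frequency content into the audible range.
audio_rate = 44100
audio = signal / np.max(np.abs(signal))          # normalise to [-1, 1]
wavfile.write("audified.wav", audio_rate, audio.astype(np.float32))
```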


The practice has shown promising results in scientific research, including work with NASA’s Solar Wind Ion Composition Spectrometer (SWICS), which measures a large number of solar wind parameters. The team used audification as a data-mining tool, and experts trained in the aural analysis of SWICS data can attend to a wide range of variables when identifying correlative features embedded in the spectral characteristics of the data (Alexander et al., 2011).

Auditory Icons

Auditory icons are designed with the aim of allowing an intuitive connection between the metaphorical space of computer applications and human interaction within this space. They amount to sonic representations of digital events using sounds familiar to users from daily life, and tend to fall into one of four categories:


  • Nomic, or realistic, icons tend to be a direct sonic representation of the occurring event.

  • Symbolic icons possess no specific link to the event taking place other than by virtue of association and repetition.

  • Metaphorical auditory icons reflect key elements of their associated action e.g. the copying of a file represented by the sound of liquid filling a container (Buxton et al., 1994).

  • Verbal icons are short snippets of spoken language; these tend to convey highly specific information, for example assisting users of public transit systems.


The effectiveness of auditory icons results from the qualitative differences between hearing and vision. Humans see objects and infer an event, whereas when an event is heard we infer an object. Relying on our ability to recognize the qualities of various sound types and to recall the meaning of discrete sounds in context (Worrall, 2019), listeners can intuit that simple transformations in a sound correspond to different informational values.


An area in which auditory icons show potential is the medical field. Auditory displays are an essential feature of the clinical setting, playing a vital role in alerting medical staff to changes in patient or equipment state. Several studies have found that the standard alarms currently in use are difficult to learn and easily confused with one another because of their homogeneity and lack of acoustical variation. However, a study by McNeer et al. (2018) found that anesthesia providers using equipment modified to use auditory icons identified the icon alarms more quickly and accurately than the standard alarms. Icon alarms also led to lower perceived fatigue and task load than the current standard (McNeer et al., 2018).


Earcons

A key difference between Earcons and Auditory Icons is the lack of a relationship between the sound and the information represented. Although Earcons have only been used within HCI applications for a few decades, their core features are much older (Hermann et al., 2011). As far back as the late 4th century, the military writer Vegetius’s De Re Militari describes the use of horns as a means of communicating military signals:


“The music of the legion consists of trumpets, cornets and buccinae. The trumpet sounds the charge and the retreat. The cornets are used only to regulate the motions of the colors; the trumpets serve when the soldiers are ordered out to any work without the colors; but in time of action, the trumpets and cornets sound together. The classicum, which is a particular sound of the buccina or horn, is appropriated to the commander-in-chief and is used in the presence of the general, or at the execution of a soldier, as a mark of its being done by his authority.” (Renatus, 2017)


Modern Earcons were originally based on auditory warning research for critical applications such as aviation and ICUs. Blattner et al. (1989) put forward the idea that Earcons be composed of short synthesized musical motifs, suggesting that this would allow sonic information to be constructed systematically and that the combination or manipulation of motifs would change the meaning of the information delivered. These motifs can be used individually, combined to create compound messages or nested hierarchically to represent objects or events.


A powerful feature of Earcons is the ability to combine them to create compound messages. A useful analogy here is that one-element Earcons represent words while compound Earcons represent phrases. Transformational Earcons, then, are based around a grammar consisting of symbolic mappings from data parameters (file type, for example) to individual sound attributes such as timbre. A drawback of this method is that the deeper the structural hierarchy, the longer a sequentially structured Earcon takes to play, which can have a negative effect on user response times when interacting with the system (Worrall, 2019).
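As a sketch of the “words and phrases” idea (my own illustration, not a reconstruction of Blattner et al.’s designs), the short Python example below synthesises two one-element motifs and concatenates them into a compound Earcon. The particular pitch contours, and the use of a square-wave timbre to stand in for a data parameter such as file type, are assumptions made purely for the example.

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100

def motif(pitches, timbre="sine", dur=0.12):
    """Render a short motif; the timbre stands in for one data parameter
    (e.g. file type), the pitch contour for another (e.g. action)."""
    out = []
    for midi in pitches:
        f = 440.0 * 2 ** ((midi - 69) / 12)
        t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
        wave = np.sin(2 * np.pi * f * t)
        if timbre == "square":                    # crude timbre change
            wave = np.sign(wave)
        out.append(wave * np.hanning(t.size))     # soft attack and decay
    return np.concatenate(out)

# One-element Earcons ("words"): rising and falling contours.
create_motif = [72, 76, 79]   # assumed mapping: rising = "create"
delete_motif = [79, 76, 72]   # assumed mapping: falling = "delete"

# Compound Earcon ("phrase"): the "delete" motif followed by an assumed
# "text file" motif rendered with a square-wave timbre.
message = np.concatenate([motif(delete_motif, "square"),
                          motif([84], "square", dur=0.3)])
wavfile.write("earcon.wav", RATE, (0.5 * message).astype(np.float32))
```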


Parameter Mapping Sonification

Sometimes referred to as sonic scatter plots, parameter mapping sonification or PMSon is the most widely used technique for sonifying high-dimensional data (Worrall, 2019). Aspects of the data are mapped to sonic parameters, whether physical (amplitude, frequency), psychophysical (loudness, pitch) or perceptually coherent complexes (rhythm, timbre) (Worrall, 2019), and the rendering and playback of the mapped data creates the sonification.


Successful PMSon requires converting features of the data into sound synthesis parameters, and appropriate data preparation is essential. When the data set is high-dimensional or multivariate, dimension reduction is often a necessary step, because psychophysical features tend to lack orthogonality and perceptually salient synthesis parameters are quite limited, compounding the need for efficient model design (Hermann et al., 2011). Orthogonality is very important when designing complex systems such as computer software (Raymond, 2003), and it is a problem that sonification researchers have wrestled with for a long time. Perceptual orthogonality in this context means that if two data points are sonified, both can be interpreted; furthermore, if one quantity changes, the difference in sound can be unambiguously ascribed to its corresponding data point (Ziemer & Schultheis, 2019).
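The following Python sketch (a toy example of my own, not drawn from the cited works) shows the basic PMSon recipe: each data column is normalised as a minimal form of data preparation and then mapped to a roughly independent sonic parameter, one column to pitch and the other to loudness, so that a change in either column can still be ascribed to its own data dimension. The mapping ranges and the hypothetical temperature and rainfall data are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100

def pmson(col_a, col_b, note_dur=0.2):
    """Parameter mapping sonification of two data columns:
    col_a -> pitch (200-800 Hz), col_b -> amplitude (0.1-1.0)."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
    freqs = 200 + 600 * norm(col_a)
    amps = 0.1 + 0.9 * norm(col_b)
    t = np.linspace(0, note_dur, int(RATE * note_dur), endpoint=False)
    notes = [a * np.sin(2 * np.pi * f * t) * np.hanning(t.size)
             for f, a in zip(freqs, amps)]
    return np.concatenate(notes)

# Hypothetical two-column data set: e.g. daily temperature and rainfall.
rng = np.random.default_rng(0)
temperature = 15 + 10 * np.sin(np.linspace(0, 6, 60)) + rng.normal(0, 1, 60)
rainfall = rng.gamma(2.0, 2.0, 60)
wavfile.write("pmson.wav", RATE,
              pmson(temperature, rainfall).astype(np.float32))
```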


Often cited as an effective example of PMSon that solves this orthogonality problem is Guillaume Potard’s “Iraq Body Count”. The data contains three orthogonal dimensions: crude oil prices, civilian deaths and military deaths. These are mapped to three sonic parameters: two uniquely timbred types of transient pulse representing the civilian and military deaths, and a continuous pitch-modulated tone representing oil prices (Hermann et al., 2011). Roddy & Bridges (2019) suggest that the theory of spectromorphology (Smalley, 1997) may provide some insight into the “richly discursive sonic structure” generated by these data interactions.


https://soundcloud.com/somatic-sounds/iraq-body-count-guillaume-potard


“In Potard’s piece, the mappings for the civilian and military deaths result in two streaming (i.e. perceptually segregated) sound shapes defined by discontinuous textural motion. This textural motion takes the form of a turbulent growth process that is driven by a movement between note, when individual data can be heard, and noise, when the data values increase to create a cloud of sound. The oil price appears as a rich spectral contour whose internal texture contains multiple tonal centers which are not quite consonant with one another, giving this sound a sense of disharmony and instability.” (Roddy & Bridges, 2019)


Model-Based Sonification

This sonification technique involves the creation of “virtual data instruments” (Worrall, 2019). In essence the data itself is the sonification model, and sounds are produced in response to the user’s interaction with that model. In the physical world, sound occurs when a system is excited: when we interact with a system we cause an excitation, which gives rise to a vibrational reaction that is transmitted via the medium to our ears. Importantly, the sound emerges from the excitation process, which is a function of the physics of the system, whereas the instrument is defined by its material structure (Hermann & Ritter, 1999).


With this technique, variables from the selected dataset are assigned to the properties of a physical model such as elasticity or hardness. The user interacts with the model via ‘messages’, causing it to resonate and thus making the unique characteristics of the dataset audible, in much the same way that the structural qualities of a physical object become available to the listener when it is bowed, scraped or plucked (Worrall, 2019).
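A minimal Model-Based Sonification sketch in Python might look like the following (my own construction, assuming a very simple virtual “data instrument”): each data record defines one resonant mode of the model, with one feature setting the mode’s frequency, as a stand-in for stiffness, and another its damping, and sound is produced only when the model is excited. The mapping ranges and the random dataset are assumptions.

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100

def pluck(dataset, dur=2.0):
    """Model-based sonification sketch: every record in the dataset
    defines one resonant mode of a virtual 'data instrument'. Feature 0
    sets the mode's frequency (stiffness-like), feature 1 its decay
    (damping-like). Sound only arises when the model is 'plucked'."""
    t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
    out = np.zeros_like(t)
    for stiffness, damping in dataset:
        freq = 100 + 900 * stiffness      # assumed mapping range (Hz)
        decay = 1 + 9 * damping           # assumed decay rate (1/s)
        out += np.sin(2 * np.pi * freq * t) * np.exp(-decay * t)
    return out / np.max(np.abs(out))

# Hypothetical dataset: 12 records, two normalised features each.
records = np.random.default_rng(1).random((12, 2))
wavfile.write("data_instrument.wav", RATE,
              pluck(records).astype(np.float32))
```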


An interesting and conceptually enlightening recent use of MBS by Su et al. (2021) involved translating the web geometry of the Cyrtophora citricola spider into sound, allowing users to experience its topological features aurally. The project was implemented using Unity3D and Max/MSP, and the team created an interactive environment within which the user travels through a virtual spider web. The model is based on the fibre topology of a real 3D spider web that the authors scanned and modelled into a network of nodes and links. The network data mirrors this structure and consists of a list of nodes, their Cartesian coordinates in space, and a list of links with their two extremity nodes. Each fibre in the model is a sound source producing a simple sine wave whose frequency is determined by the length of the fibre (Su et al., 2021).

As is the case with MBS generally, interactivity plays a key role in how the model is experienced. The user excites the model by plucking the fibres with a mouse, and what is heard is determined by their field of view, hearing radius and spatial relationship to the plucked fibre, all controlled using an HTC Vive VR headset. The interactive aspect of this MBS instrument presents a compelling and innovative experience; however, its use as a scientific tool is limited by its inability to render mechanical information about the spider web. Future iterations incorporating real-time sound propagation through the web, HRTF audio or tactile interaction could yield new insights into complex network architectures such as those found in transportation or social media networks (Su et al., 2021).
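To make the fibre-to-sound mapping concrete, the sketch below reproduces only the audio idea described above in plain Python, not the Unity3D/Max/MSP implementation used by Su et al.: a fibre’s length determines the frequency of its sine tone, and the listener’s distance from the fibre scales its amplitude. The scaling constants and the inverse length-to-frequency relation are my assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100

def pluck_fibre(node_a, node_b, listener, dur=1.5):
    """Sonify one web fibre: its length sets the sine frequency and the
    listener's distance from it scales the amplitude. All constants are
    illustrative, not the values used by Su et al. (2021)."""
    node_a, node_b, listener = map(np.asarray, (node_a, node_b, listener))
    length = np.linalg.norm(node_b - node_a)
    distance = np.linalg.norm(listener - (node_a + node_b) / 2)
    freq = 2000.0 / max(length, 1e-6)     # assumed: shorter fibre, higher pitch
    amp = 1.0 / (1.0 + distance)          # assumed: farther away, quieter
    t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
    return amp * np.sin(2 * np.pi * freq * t) * np.exp(-3 * t)

# Pluck one fibre of a toy node/link list, heard from a fixed position.
tone = pluck_fibre((0, 0, 0), (4, 1, 0), listener=(2, 2, 1))
wavfile.write("fibre.wav", RATE, tone.astype(np.float32))
```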


Video: SpidersCanvas - PalaisDeTokyo - Excerpt 4


Challenges and the Future of Sonification

It is necessary to provide some context for where the field currently stands in terms of the use of sonification for artistic applications, and to draw a clear delineation between sonification practices used for auditory display or scientific analysis and those which serve a more artistic, and perhaps less scientifically rigorous, purpose.


In a paper for the 25th International Conference on Auditory Display, John Neuhoff spoke of a need for a bifurcation of the field, arguing that, on the continuum between art and science across which sonification is situated, the closer a work is to the midpoint the more likely it is to fail to meet the goals of either art or science (Neuhoff, 2019).

In advocating a simultaneous shift of both empirical and artistic sonifications away from what he calls the “muddled” middle, he lays out a distinction between sonifications classified as artistic, whose goals are oriented towards capturing attention or stimulating curiosity, and those classified as scientific, which place reliable representation of the data above aesthetics. Clearly delineating the goals, methods and evaluation of a sonification, he suggests, would help avoid efforts that end up being neither art nor science.


Indeed, within the artistic and journalistic realms of sonification a more subjective set of concerns occupies researchers. These concerns revolve around the lack of design frameworks allowing for context, objectivity and intentionality (Lenzi & Ciuccarelli, 2020). The problem is compounded by the sheer number of imaginable solutions for a design, which in turn dictates the tools and methodologies used and the know-how applied in using them (Worrall, 2019). To date there is no consensus within the sonification community on standards or best practices for transforming data into sound, and what Barrass (2011) calls the ‘aesthetic turn’, reconfiguring sonification from a scientific instrument into a popular mass medium to be ‘enjoyed’, moves the focus from engineering theories of information towards social and linguistic theories of the construction of meaning. Supper (2015) also suggests that a shift of focus away from technical tool development towards developing innovative methods for interpreting sonified data may be the next hurdle in legitimizing sonification as both scientific and artistic practice.

References


Alexander, R. L., Gilbert, J., Landi, E., & Simoni, M. H. (2011). Audification as a Diagnostic Tool for Exploratory Heliospheric Data Analysis. Proceedings of the 17th International Conference on Auditory Display (ICAD-2011).


Barrass, S. (2011). The aesthetic turn in sonification towards a social and cultural medium. AI & SOCIETY, 27(2), 177–181. https://doi.org/10.1007/s00146-011-0335-5


Blattner, M., Sumikawa, D., & Greenberg, R. (1989). Earcons and Icons: Their Structure and Common Design Principles. Human-Computer Interaction, 4(1), 11–44. https://doi.org/10.1207/s15327051hci0401_1


Buxton, W., Gaver, W., & Bly, S. (1994). Auditory interfaces: the Use of non-speech audio at the interface. Unpublished Manuscript. https://www.billbuxton.com/Audio.TOC.html


Edworthy, J., & Stanton, N. A. (2020). Human Factors in Auditory Warnings. Taylor & Francis.


Hermann, T., Hunt, A., & Neuhoff, J. G. (2011). The Sonification Handbook. Logos Verlag.


Hermann, T., & Ritter, H. (1999, August). Listen to your Data: Model-Based Sonification for Data Analysis. In G. E. Lasker & M. R. Syed (Eds.), Advances in intelligent computing and multimedia systems (pp. 189–194). International Institute for Advanced Studies in Systems Research and Cybernetics.


Kramer, G. (1994). Auditory Display: Sonification, Audification, and Auditory Interfaces (Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Volume 18). CRC Press.


Lenzi, S., & Ciuccarelli, P. (2020). Intentionality and design in the data sonification of social issues. Big Data & Society, 7(2), 205395172094460. https://doi.org/10.1177/2053951720944603


McNeer, R. R., Bodzin Horn, D., Bennett, C. L., Reed Edworthy, J., & Dudaryk, R. (2018). Auditory Icon Alarms Are More Accurately and Quickly Identified than Current Standard Melodic Alarms in a Simulated Clinical Setting. Anesthesiology, 129(1), 58–66. https://doi.org/10.1097/aln.0000000000002339


Neuhoff, J. G. (2019). Is Sonification Doomed to Fail? Proceedings of the 25th International Conference on Auditory Display (ICAD 2019). https://doi.org/10.21785/icad2019.069


Raymond, E. (2003). The Art of UNIX Programming (The Addison-Wesley Professional Computing Series) (1st ed.). Addison-Wesley.


Renatus, F. V. (2017). De Re Militari (Concerning Military Affairs) illustrated with pictures and plans. Leonaur.


Roddy, S., & Bridges, B. (2019). Addressing the Mapping Problem in Sonic Information Design through Embodied Image Schemata, Conceptual Metaphors, and Conceptual Blending. Journal of Sonic Studies, 17(17).


Schito, J., & Fabrikant, S. I. (2018). Exploring maps by sounds: using parameter mapping sonification to make digital elevation models audible. International Journal of Geographical Information Science, 32(5), 874–906. https://doi.org/10.1080/13658816.2017.1420192


Smalley, D. (1997). Spectromorphology: explaining sound-shapes. Organised Sound, 2(2), 107–126. https://doi.org/10.1017/s1355771897009059


Su, I., Hattwick, I., Southworth, C., Ziporyn, E., Bisshop, A., Mühlethaler, R., Saraceno, T., & Buehler, M. J. (2021). Interactive exploration of a hierarchical spider web structure with sound. Journal on Multimodal User Interfaces. https://doi.org/10.1007/s12193-021-00375-x


Supper, A. (2015). Sound Information: Sonification in the Age of Complex Data and Digital Audio. Information & Culture: A Journal of History, 50(4), 441–464. https://doi.org/10.1353/lac.2015.0021


Volmar, A. (2013a). Listening to the Cold War: The Nuclear Test Ban Negotiations, Seismology, and Psychoacoustics, 1958–1963. Osiris, 28(1), 80–102. https://doi.org/10.1086/671364


Volmar, A. (2013b). Sonic Facts for Sound Arguments: Medicine, Experimental Physiology, and the Auditory Construction of Knowledge in the 19th Century. Journal of Sonic Studies.


Worrall, D. (2018). Sonification: A Prehistory. Proceedings of the 24th International Conference on Auditory Display (ICAD 2018). https://doi.org/10.21785/icad2018.019


Worrall, D. (2019). Sonification Design: From Data to Intelligible Soundfields (Human–Computer Interaction Series) (1st ed. 2019 ed.). Springer.


Ziemer, T., & Schultheis, H. (2019). Psychoacoustical Signal Processing for Three-dimensional Sonification. Proceedings of the 25th International Conference on Auditory Display (ICAD 2019). Published. https://doi.org/10.21785/icad2019.018

