Merlin Sunley

Dystopian Sonics: An investigation into Sound, Art and the Ethics of Virtual Reality


Chapter 1: Introduction


The aim of this project is to render audible the invisible algorithms hidden within the structures of the internet, illuminating the crossover between the digital and the sensible through a combination of data sonification techniques and virtual/augmented reality technologies. The hidden architecture of the internet plays a dominant role in the functioning of global capitalist society, from logistics and politics to commerce, energy, finance and communication. Within this network topology, various forms of disciplinary power (Hull, 2021) have been quietly erected, and algorithms of increasing sophistication are having a dramatic impact on a polarised and alienated populace.


The resulting VR sound-art project explores the reification of these invisible algorithmic systems of control and disruption, rendering them sensible in a way that enables us, as participants, to meditate on their tangible consequences. Through phantasms of sonic and data-based events, the researcher takes full advantage of the potential of digital audio technology, creating impossible juxtapositions of temporal and spatial phenomena and inviting the listener to immerse themselves, through sound, feeling and interaction, in the contradictions inherent in our age of pervasive algorithmic architecture.


1.1 Positioning of Research & Researcher


The concept and the methods of practice evolved out of the germ of an idea based around the artistic sonification of disruptive botnet activity. This is partially derived from a long-standing love of dystopian near-future sci-fi such as Ghost in the Shell, Tetsuo: The Iron Man, The Matrix and Blade Runner. Essentially, it was an aesthetic desire to create an artefact that would not seem out of place in similar media. The further I probed my own intentions behind the piece, the more a fundamental thread began to take shape. I began to interrogate my motivations more rigorously and, by extension, our society's relation to digital technology, and to ask how, through the confluence of VR art and sonic interaction, it is possible to expose the ethical and moral questions posed by this relationship.

1.2 Research Questions


The primary questions posed by this project were:

  1. What are the unique features afforded by VR and how can we leverage them to create immersive and interactive sonic-art experiences?

  2. With the new-found artistic legitimacy of immersive technologies, what are the ethical risks involved in VR and how can artists address them?

  3. Given the invisible structures pervading society's digital architecture, what philosophies and creative techniques can be employed to help us understand and uncover these structures?

All three questions are explored in the following literature review and applied to the creation of the prototype that constitutes the artistic piece detailed in the methodology section.


Chapter 2: Literature Review


2.1 Current View of Sound/Art and Virtual Reality


Here, a broad overview of the contemporary field of VR art is provided. Acceptance and legitimacy within the art world have the potential to bring immersive experiences to a completely new audience, opening up new ways of interacting with and experiencing artworks.


Recently the fields of art and VR technology have converged, with many art expos, film festivals and galleries showcasing projects that utilise an array of immersive tech. Traditional art institutions such as London’s Tate Modern, MoMA, the Royal Academy of Arts, the Louvre, Art Basel and the Venice Biennale have, since 2016, moved to integrate VR artworks and experiences into their collections (Richardson, 2022). Notably, in 2017 HMD maker HTC set up Vive Arts, a multi-million-dollar funding programme designed to support artistic content and institutions that embrace the technology (HTC, 2022). With this renewed cultural focus on immersive tech since 2016-17, an entire ecosystem of art-based VR experiences has come into being, and many well-known artists have exploited the medium to varying degrees of success. 2018 saw the debut at Art Basel Hong Kong of VR works by prominent artists Marina Abramovic and Anish Kapoor (Lee, 2018).


According to Lee (2018), Abramovic’s work Rising takes a laughably ham-fisted approach to the global environmental crisis:


“The virtual reality experience opens up with you in a dark room with Marina Abramovic who beckons you from within a glass tank. As you approach, you are transported to a boat at the foot of an arctic glacier, witnessing the now classic image of its melting, titanic sheets of ice cascading, and crashing dramatically into the sea. Looking around your small vessel, you are once again confronted with Abramovic in her vitrine, only this time the glass is filling with water—because like… sea levels are rising. Unless you make a “commitment to help” (the specifics of this commitment are vague, to say the least), and pledge to become more “eco-conscious,” the water within the vitrine will continue to rise, eventually resulting in the greatest tragedy that could possibly come of global warming—The Artist is Drowning! In Abramovic’s own words, “you’re saving the human being, and you’re saving the planet—OR, you’re not saving the planet, and you make the human being die. The choice is only yours.”


As with any new technology, hype and name recognition can be used to obfuscate subpar experiences. Despite claims of open-mindedness and a love of ‘novel forms’, the art world moves slowly when it comes to incorporating new technologies into its institutions. The ‘state of the art’ within the art world appears to lag behind the true bleeding edge of the field (Wang, 2020).


2.2 Review and Critique of Contemporary Artistic VR Experiences


The following critiques are intended as a review of works which share philosophical and design commonalities with my own in terms of immersive experience design. Both pieces highlight the benefits and pitfalls of VR’s unique ability to allow users to experience embodiment within immersive environments.


2.2.1 Dominique Gonzalez-Foerster: Alienarium 5 (2022, Serpentine Gallery, London)


(See appendix for video clip)

Alienarium 5 is a multimedia exhibition which includes a multi-user VR artwork considering unconventional forms of connection via ‘extraterrestrial embodiment’ (Serpentine Galleries, 2022). Participants occupy one of a number of alien lifeforms. During the experience I occupied the body of a creature composed of millions of undulating points of light, whilst other participants respectively inhabited a creature with a body similar to a school of fish, one made from swirling segments of light, and a multicoloured, spaghettified tentacle being (Luke, 2022). The experience lacked a traditional narrative, choosing instead to use hypnotic, drone-like sounds that entrance the viewer. Interactivity was limited, though the character I inhabited made interesting use of gaze as an interaction method: as you move and look around, the light particles coalesce into waves. Effectively, your viewing cone creates a negative space which the particles do not enter. This effect is coupled with a procedurally generated particle sound effect: each particle crackles with a static-like sound, and as the particles move out of the way of your gaze they sound as if they are coalescing into waves of static as they physically bunch together. The drone sounds, coupled with this gaze-based audio interaction, imparted a hypnotic quality which was quite effective in grounding the participant within the alien body. Veteran VR designer Ben Binney (personal communication, April 17, 2022) was somewhat less complimentary:


“Gonzalez-Foerster’s experience hit the two biggest pitfalls we’ve seen in VR over the past six years; beyond the use of mostly stock Unity particle and effects examples, 1st year Comp-Sci Boids stuck in the background and a lack of spatial framing, the big issue I had, sitting there for nine minutes was ‘why was this made for VR?’. The thing about XR as a medium (or any digital ‘game’), is the medium inherently requires interactive elements to shine, otherwise why not shoot a video, or stage a show? And during my visit, had I been able to reach out and touch something, make a decision, or get some kind of feedback, I probably wouldn’t have noticed the last minute homework approach to the content, or tried to figure out which Git repo they’d nicked the flocking algorithm from. Don’t build an XR experience because it’s novel, build an XR experience because it can’t work in any other medium.”


Whilst I agree with this take somewhat (particularly regarding the graphical shortcomings), I am also sensitive to what VR artist Mariam Zakarian calls ‘slow VR’ (Wang, 2020). In short, this concept relates to the affordance the medium provides to immerse yourself completely in an environment, away from the stressful and hyperactive milieu of modern popular entertainment. This translocation into hyper-unreal environments and auditory ecologies is a unique feature of VR, favouring deep, contemplative immersion and meaningful interaction over the violence and gore of yet another zombie game.


2.2.2 Eliza McNitt & Darren Aronofsky: SPHERES Part 1 - Chorus of the Cosmos (2018, Oculus Quest Store/Sundance 2018: New Frontier)


(See appendix for video clip)

This work uses the most up-to-date cosmological research to imagine ‘the music of the cosmos’ (Aronofsky, 2018). It begins with the user disembodied and floating in outer space. Tiny lights coalesce into twinkling spheres of colourful filaments. I instinctively tried to reach out and touch them, and as I did they merged and transformed into singing auroras which chime in harmony whilst the Earth itself slowly fades into view. We are then given the opportunity to directly manipulate wavelengths of electromagnetic radiation through a short interactive segment in which an oscilloscope appears between the user’s hands. As we move the controllers towards and away from each other, the audio rises and falls in pitch with our movement. This is coupled with simultaneous haptic vibration from the controllers, allowing the user to feel as if they are physically carrying out this action in an embodied manner. We are then enticed to consider the music of Earth’s electromagnetic field by reaching out and interacting with particles as they rush past, and finally to literally play with the “music of the spheres”. As the entire solar system swirls around you, you are encouraged to ‘touch’ the planets, which then each chime and ring. If you move the HMD directly into a planet’s atmosphere, you are brought inside a swirling inner sphere within which sonifications of that atmosphere become the primary auditory source in the scene.


The work resonated on a number of levels. Firstly, from a thematic point of view, the concept of rendering invisible forces visible and interactable is key to both this work and my own, and is a major affordance of the VR medium (Rodriguez, 2019). Secondly, the piece makes good use of gestural manipulation combined with haptic feedback, allowing the user to directly interact with sonic objects in differing ways. One criticism of the piece, echoed by Hayden (2018), is the swift transition from scene to scene. He states:


“At moments I just wanted to stop, sit and listen to the universe sing, but SPHERES pushes you along at its own pace, presumably to get you in and out of each chapter under the 15 minute mark.”


This underscores the conflict that can occur when porting experiences meant for an exhibition space or film festival to consumer HMDs. The experiences are often constrained by time limitations, not allowing for the affordance of ‘being’ within the experience (Atherton & Wang, 2020) or for one of the more unique aspects of virtual reality: that of comfortably occupying an alien environment and meditating upon the experience (Wang, 2020). This concept will be further explored later in this essay as it relates to my own work.


2.3 Ethics and Philosophy at the Intersection of Art and Virtual Reality


Aesthetically, Dystopian Sonics aims to evoke the feeling of living within a cyberpunk present, and it is heavily influenced by the dystopian cyberpunk genre of science fiction and by tech-pessimist literature. The explosion in the accessibility of immersive tech means that technologies from the realms of fiction are fast becoming reality, bringing with them ethical dilemmas which must be addressed.


The idea of VR as an extension of one’s body is a fascinating one. But rather than the utopian vision of humanity becoming one with technology to transcend the limits of reality, the opposite is turning into a real risk. This risk is encapsulated in Semley’s (2020) article Cyberpunk is Dead, where he discusses some of the themes in Shinya Tsukamoto’s film Tetsuo: The Iron Man. Here, human augmentation is seen as the manifestation of corporate hegemony. Unable to augment our way out of our collective despair, we have rushed to reproduce technology and to further integrate it into our lives, leading to a world in which, according to Semley:


“hackers become CEOs whose platforms bespoil democracy; where automation offers not the promise of increased wealth and leisure time, but joblessness, desperation, and the wholesale redundancy of the human species; where the shared hallucination of the virtual feels less than consensual.”


This concept is expanded upon in Han’s (2017) Psychopolitics: Neoliberalism and New Technologies of Power. In it, Han calls back to an old idea, Jeremy Bentham’s Panopticon, which finds renewed relevance in today’s algorithmically manipulated society. Occupants of the contemporary panopticon, rather than being rendered isolated and restricted in their interactions, are encouraged to willingly expose every interaction and to communicate freely, actively aiding the panopticon’s operations through what Han terms ‘voluntary self-illumination and self-exposure’. In this milieu the offering of data is born of an inner need rather than submitted under compulsion (Han, 2017). So there is a price to pay for entering virtual reality: as we allow companies such as Meta and Google unfettered access to our biometric data, our movements and interactions in the virtual world provide insight into our thoughts and feelings in the moment (Mir & Rodriguez, 2021).


In a sense this is the worst nightmare of Jaron Lanier (who coined the term ‘virtual reality’ in the 1980s) come true. In Dawn of the New Everything, Lanier (2018) describes VR as “The perfect tool for the perfect, perfectly evil Skinner box.” Known scientifically as an ‘operant conditioning chamber’, the Skinner box gave behavioural scientists a completely controlled environment within which to study animal behaviour using reward and punishment systems. In virtual and online environments, designers have carte blanche over all aspects of the world, leaving us vulnerable to malign influence without an awareness of its effects, or even that it is occurring (Nesterak, 2018). Mir & Rodriguez (2021) warn that the threats to privacy do not stop at the individual: with ubiquitous XR devices loaded with external sensors, cameras and microphones, information can be gleaned about our homes, offices and communities without us even personally using the technology.


As VR makes its way into more homes, it is imperative that developers and companies create ethical guidelines for the technology. As artists it is also important to use our positions as industry outsiders to cast a critical lens on the relationship between the medium and its effects on society (Powell, 2021). One such philosophical approach, suggested by Mariam Zakarian, is the previously mentioned ‘slow VR’, which advocates creating works that prioritise the physical and mental wellbeing of users through non-intrusive, less hyperactive and less stressful experiences. Stanford researchers Atherton & Wang (2020), in their treatise on artful VR, also outline a number of ways in which artists can use VR to encourage human flourishing: for example, allowing users to express their creativity rather than simply consuming others’ creations, being mindful of empathy and the healthy processing of emotions, and preserving the user’s capacity for community and social engagement rather than deteriorating it. Zakarian also notes that rather than focusing on at-home VR she consciously creates works for public spaces (Wang, 2020). As a solution this has its flaws: if we are to bring these non-gaming, artistic experiences to a broader audience, then accessibility via consumer HMDs is an essential part of the equation. Rather, it is up to artists to engage with and confront these topics within their works. The ascendancy of VR to ubiquity appears inevitable; the ethical and moral baggage it carries does not, but only if practitioners and developers within the medium approach it critically, with eyes open to the possibilities both positive and negative.


Chapter 3: Methodology


The central purpose of this work is to explore the use of immersive and interactive technologies as a means of illuminating data structures and real-world manifestations of digital phenomena. Inspiration is drawn from an eclectic set of techniques to create the piece, spanning multiple artistic and technological disciplines: game audio middleware and game engines, visual graphing, data sonification, granular synthesis, Virtual Reality (VR), sound design, and C# and Python scripting. In the following sections the various technical elements of the project are detailed and critically analysed, and what could have been done differently is evaluated.


Current prototype of the Dystopian Sonics sound object.


3.1 Initial Aims


The initial aim of the project was to feed data from a CSV (comma-separated values) file obtained from cybersecurity researchers (García et al., 2014) into the Unity game engine to drive the various sonic parameters of an artistic botnet simulation (see Appendix for the initial mind map). The concept evolved out of the germ of an idea based around the sonification of malicious network activity explored in Debashi (2018), which presents SoNSTAR, a real-time sonification technology designed for the monitoring of computer networks. The idea of being able to traverse a real-time botnet sonification within a VR environment was highly appealing purely from an aesthetic perspective. This proved challenging to set up, as Unity cannot read CSV files directly without significant custom parsing. The decision was eventually made to move on from this idea, and instead an interactive virtual sonic ‘object’ made up of dozens of sonified drones and sound sculptures was conceived.


3.2 Visualising The Data Object


The current prototype automatically assigns sound sources based on data drawn from Graph Modelling Language (GML) files, building on work by D’Aquin (2021). This makes it possible to automatically generate an interactive data visualisation graphically representing the connections between nodes, which can then be handled in VR as a single object. In the current prototype the audio is driven by Unity’s built-in AudioSource component, and the entire object is instantiated in code by the Graph and Node scripts (reproduced in full in the appendix); a minimal example of the GML structure the parser expects is shown after the figure below.


Randomly spawning data visualisation/sonic object. Each node is automatically assigned a sound at runtime based on data drawn from a GML file.
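The Graph script’s parser walks the GML file word by word, spawning a node prefab for each node block and loading an AudioClip named after the node’s label. For reference, a minimal, hypothetical GML file of the kind it expects (the labels here are invented clip names, which the script resolves to Assets/AudioClips/<label>.wav) might look like this:

graph [
node [
id A
label drone_01
]
node [
id B
label static_02
]
edge [
source A
target B
]
]

Note that this simple parser expects single-word, unquoted labels and whitespace-delimited tokens rather than fully general GML.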

3.3 Artistic Data Sonification and Synthesis Techniques


The sounds created are all directly related, in some way, to the manifestation of digital algorithms in physical reality, and a variety of sonification techniques are used to achieve this. Granular synthesis was the most commonly used method. Developed from the work of Dennis Gabor and Iannis Xenakis, it is a way of generating ‘grains’ of sound shaped by an amplitude envelope; these grains serve as the basis for sound objects, and when thousands of grains are mixed together over time it is possible to create complex and evolving sonic textures (Roads, 2004).
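To make the principle concrete, the sketch below illustrates the grain-generation idea only; it is not the synth used in the project (which was built in Pure Data), and the parameter values and names are hypothetical.

import numpy as np

def make_grain(source, start, length):
    # cut a short 'grain' from the source and shape it with a Hann amplitude envelope
    grain = source[start:start + length].copy()
    return grain * np.hanning(length)

def granular_texture(source, sr=44100, duration=10.0, n_grains=5000, grain_ms=50):
    # overlap-add many enveloped grains at random read/write positions to build an evolving texture
    out = np.zeros(int(sr * duration))
    glen = int(sr * grain_ms / 1000)
    rng = np.random.default_rng()
    for _ in range(n_grains):
        src_pos = rng.integers(0, len(source) - glen)  # where the grain is read from
        out_pos = rng.integers(0, len(out) - glen)     # where the grain is written to
        out[out_pos:out_pos + glen] += make_grain(source, src_pos, glen)
    return out / np.max(np.abs(out))                   # normalise to avoid clipping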


Elements of the soundscape are obtained by scanning open Wi-Fi networks and recording the resulting traffic data using the application Wireshark (www.wireshark.org/).



Wireshark performing packet captures on a local Wi-Fi network


This data is used in a couple of ways. Firstly, the toascii abstraction for Pure Data is used to convert the captured text and numerical data into lists of integers, which are then used to drive the parameters of a custom granular synth.
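As an illustration of the transformation performed at this stage (sketched here in Python rather than as a Pure Data patch; the captured line and variable names are hypothetical), each captured character is converted to its character code, and the resulting list of integers is split across the synth’s data arrays:

captured = "GET /index.html HTTP/1.1 192.168.0.12"  # one line of captured traffic (hypothetical)
codes = [ord(c) for c in captured]                   # text -> list of integers
arrays = [codes[i::5] for i in range(5)]             # split across five arrays, one per synthesis parameter
print(arrays[0])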




Pure Data toascii abstraction



Granular synth with 5 data arrays each storing lists of integers used to drive synthesis parameters.


Secondly, portions of the generated data are sonified by importing the raw text into Audacity and loading the resulting audio into the granular synth as a source. This method was also used in combination with the website Shodan (www.shodan.io), a search engine for devices connected to the internet. Shodan was used to locate compromised websites (Tracking Hacked Websites - Shodan Help Center, n.d.), then the ‘view page source’ menu in Firefox was used to expose the HTML. This data was converted using the aforementioned technique and used to drive synthesis parameters, a technique called Parameter Mapping Sonification, or PMSon (Hermann et al., 2011).
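A minimal sketch of the parameter-mapping idea, assuming the data values have already been normalised to the 0-1 range; the parameter ranges shown (grain pitch and grain size) are hypothetical examples rather than the exact mappings used in the patch:

def map_param(value, low, high):
    # linearly map a normalised value (0-1) onto a synthesis parameter range
    return low + value * (high - low)

normalised = [0.12, 0.87, 0.45]  # e.g. normalised values derived from captured HTML
pitches = [map_param(v, 80.0, 2000.0) for v in normalised]      # grain pitch in Hz
grain_sizes = [map_param(v, 10.0, 200.0) for v in normalised]   # grain size in ms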


Prior to using the PMSon technique it was necessary to clean the data so that it could be easily parsed by Pure Data’s arrays. This was achieved using a simple Python script:


The numbers are first pre-processed by copying the columns out of the CSV and pasting them into an online number transposer (https://adresults.com/tools/transpose/), which takes a vertical list and automatically replaces the spaces with commas, ready for the Python IDE.


The Python script then takes the resulting range of integers and normalises them so that they fall between 0 and 1, rounding each result to three decimal places ready to be input into an array in Pure Data.
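The script itself is not reproduced here, but the logic described amounts to a min-max normalisation followed by rounding; a minimal sketch (the input file name is hypothetical) might look like this:

values = [int(x) for x in open("transposed.txt").read().split(",")]
lo, hi = min(values), max(values)
normalised = [round((v - lo) / (hi - lo), 3) for v in values]  # scale to 0-1, three decimal places
print(" ".join(str(n) for n in normalised))                    # ready to paste into a Pure Data array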


Another data sonification technique used was the conversion of images to sound via Melobytes (https://melobytes.com/en/app/image2music). An image depicting many variations of the “Pepe the Frog” meme was fed into the application, and the resulting audio was processed within Logic Pro’s Alchemy synth. It was then time-stretched to around 10 minutes in duration and processed using the Soothe2 plugin to remove harshness and sibilance.



Image depicting a number of variations of the far-right “Pepe the Frog” meme. This was sonified using the melobytes application.


Additionally, Alchemy was used for its spectral synthesis capabilities. A supercut of videos of the attempted January 6th insurrection in the United States was fed into the synth, turning it into dense clusters of spectral whistles. Spectral modelling synthesis allows sounds to be built from a combination of multiple sine wave harmonics and noise filtering. This is similar in principle to vocoding, but rather than tracking individual frequencies and their amplitudes, the overall spectrum of the sound is tracked (Apple Inc., n.d.).
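As an illustration of the principle (a simplified sketch, not Alchemy’s actual resynthesis engine), a single frame of a spectrally modelled sound can be approximated as a sum of sine partials with individual amplitudes plus a low-level noise component; the frequencies and amplitudes here are arbitrary:

import numpy as np

def spectral_frame(partial_freqs, partial_amps, noise_amp=0.05, sr=44100, dur=0.5):
    # additive part: a sum of sine partials; residual part: a crude stand-in for filtered noise
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    frame = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(partial_freqs, partial_amps))
    return frame + noise_amp * np.random.randn(len(t))

tone = spectral_frame([220, 440, 660, 880], [1.0, 0.5, 0.25, 0.12])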


3.4 Sonic Interaction Design


An important aspect of the project is the ability of the experiencer to manipulate the sound object in a way that modifies the soundscape. According to Atherton & Wang (2020), when audio acts as the primary modality in VR, users should be able to affect the audio in a manner where physical or gestural actions have meaningful sonic consequences, affording participants opportunities to attain creative agency through musical expression. Although the piece was not initially intended as a gestural interface for music creation, it has evolved in that direction. In its current form it is possible to traverse the prototype via a script attached to the main camera and explore the simulation in a disembodied way; it is ultimately intended that it will be possible to handle the object physically via motion controllers. To realise this end, it has been important to understand the various interaction patterns and techniques available to VR designers. Jerald (2015) classified more than a hundred VR interaction techniques into five overall pattern types: Selection, Manipulation, Viewpoint Control, Indirect Control and Compound Patterns. The following patterns will be used in the final project:


3.4.1 Viewpoint Control


This is the task of determining how one’s perspective will be manipulated, equivalent to rotating, moving or scaling the world (Jerald, 2015). In the piece, users will be located in a large, open (virtual) space with only the sound object to be manipulated, so walking around the room will be possible. There will, however, be no need for perspective shifting, moving or scaling of the environment, with users preferably standing or sitting in place whilst the sound object moves around them.


3.4.2 Selection and Manipulation


Hand tracking will be provided by the inside-out tracking features of the Oculus Quest 2 HMD. This will allow the user to employ gestural interactions such as pinching, grabbing and rotating via the Direct Hand Manipulation pattern, using virtual hands rather than a less naturalistic hand controller. The user will initially be located immediately next to the sound object, removing any distance or traversal limitations inherent in this pattern (Jerald, 2015).


3.5 Multimodality/Haptics


Studies on multimodal and haptic augmentation of VR experiences show that adding an additional dimension to the sensory experience increases immersion (Gibbs et al., 2022). In the piece, all of the sounds have been tuned to have highly tactile sub-bass frequencies; when equipped with the SubPac S2 (2017) tactile bass system, every contour of low frequency is felt as well as heard, placing the user in a physical as well as auditory-visual virtual space. In a room-scale scenario users would be equipped with wearable systems to create a deeply immersive experience.


Chapter 4: Conclusion & Discussion


The research questions investigated during this project were:


  1. What are the unique features afforded by VR and how can we leverage them to create immersive and interactive sonic-art experiences?

  2. With the new-found artistic legitimacy of immersive technologies, what are the ethical risks involved in VR and how can artists address them?

  3. Given the invisible structures pervading society's digital architecture, what philosophies and creative techniques can be employed to help us understand and uncover these structures?

It was discovered that virtual reality can be a very powerful medium for ‘being’ in and meditating on an environment, and that this unique affordance can be of major benefit to artists if it is taken advantage of. Nevertheless, the ethical concerns with the medium are real, and are shared by many working in the field. The answer to the second question is a complex one. Artists are well placed to offer critique and encourage discussion of the pitfalls which emerge as the technology reaches ubiquity. The failure of governments to rein in the algorithmic surveillance activities of large tech platforms presently means that creatives and industry professionals need to shine a spotlight on these problems, and work that highlights and critiques the contradictions of the medium should be thoroughly encouraged.


The practical project fell short of expectations of what could be achieved within the timeframe. Over the course of the research the idea was scaled back considerably due to roadblocks caused by technical and time limitations. However, the resulting prototype serves as an excellent basis for further work. The descriptive outline for the final product follows:


“You put on the HMD and stand in an infinitely large space with the floor mirroring the skybox, you look up and see a slowly evolving black sky. An object appears before you, a tiny network of pulsating nodes and connections, the object is malleable in your virtual hands. As you grab and stretch the object a mirror version of it deforms and gets bigger until it becomes the size of a skyscraper. You can rotate, resize and manipulate the object using pinching, grabbing and rotating gestures and the sound emanating from each node travels through the virtual environment accordingly via communication between Unity and Pure Data completely enveloping the listener in tactile, spatialized sound. You can enter into any one of the nodes and each one plays back a unique soundscape generated by the sonification of algorithms. Each node contains a 360° video within it reflecting the overlap between the digital and the real. As the object is moved and manipulated the sounds change via real time linkage of its size and spatial position to granular synthesis parameters. The user can stay within this environment indefinitely manipulating the object to create unlimited permutations with over 12 hours of individual sounds.”


Data sonification is uniquely equipped to expose normally invisible structures, transforming them from nonsensible data into sonic objects that can be sensibly apprehended. However, the integration of sonification systems into Unity via API bindings such as LibPd and Kalimba remains a major barrier to implementing true procedural audio in Unity-made experiences. This proved impossible given the time constraints without sacrificing essential features such as spatialisation. Alternatives such as Chunity (Atherton & Wang, 2018) and the Max-Unity 3D interoperability toolkit (Virginia Tech Department of Music DISIS, 2018) were considered; however, at that stage of experimentation, switching to an entirely new API and programming language would not have been feasible. The eventual use of Unity’s native audio engine came about because of these issues: early tests with LibPd and Kalimba resulted in severe crashes when spatialising sounds, and quirks related to the requirements of the project eventually meant ruling out the use of middleware. It was also hoped that interaction could be implemented early on, yet due to difficulties getting the audio to spawn when the object instantiates, this goal was pushed back in favour of completing the sound object.


Ultimately, despite these issues, a clear direction of travel has been established and a strong foundation has been laid for the project to meet its ambitious goals. It is hoped that future work, by exposing users to soundscapes derived using these methods, can offer a way of reflecting back to us our complicity in these systems of control and surveillance: by collectively bringing the architecture of our digital world into being, we have all become complicit in allowing these augmentations to dictate our lives and to re-form and reshape our social and political cultures. It is this direct confrontation with these topics, via interactive works of art, that will lead to more conversations about, and awareness of, the awesome power of immersive technologies whilst remaining mindful of our ethical responsibilities.


References


Apple Inc. (n.d.). Spectral synthesis. Apple Support. https://support.apple.com/en-gb/guide/logicpro/lgsibf71c5f6/mac



Aronofsky, D. (2018, September). ‘Spheres’: Darren Aronofsky and Creator Eliza McNitt on ‘Making It Strange’ For Their Groundbreaking Virtual Reality Project. Indiewire. Retrieved 22 April 2022, from https://www.indiewire.com/2018/09/spheres-darren-aronofsky-eliza-mcnitt-virtual-reality-project-interview-1201999722/


Atherton, J., & Wang, G. (2018). Chunity: Integrated Audiovisual Programming in Unity. Proceedings of the 2018 International Conference on New Interfaces for Musical Expression, 102–107. https://doi.org/10.5281/zenodo.1302695


Atherton, J., & Wang, G. (2020). Doing vs. Being: A philosophy of design for artful VR. Journal of New Music Research, 49(1), 35–59. https://doi.org/10.1080/09298215.2019.1705862


Bederna, Z., & Szadeczky, T. (2019). Cyber espionage through Botnets. Security Journal, 33(1), 43–62. https://doi.org/10.1057/s41284-019-00194-6


D’Aquin, M. (2021, December 16). 3D Force-Directed Graphs with Unity - Towards Data Science. Medium. https://towardsdatascience.com/3d-force-directed-graphs-with-unity-587ad8f7dff


García, S., Grill, M., Stiborek, J., & Zunino, A. (2014). An empirical comparison of botnet detection methods. Computers & Security, 45, 100–123. https://doi.org/10.1016/j.cose.2014.05.011


Gibbs, J. K., Gillies, M., & Pan, X. (2022). A comparison of the effects of haptic and visual feedback on presence in virtual reality. International Journal of Human-Computer Studies, 157, 102717. https://doi.org/10.1016/j.ijhcs.2021.102717


Hayden, S. (2018, November 14). Award-Winning VR Space Experience ‘SPHERES’ Lands on Rift. Road to VR. https://www.roadtovr.com/award-winning-vr-space-experience-spheres-lands-rift/


Hermann, T., Hunt, A., & Neuhoff, J. G. (2011). The Sonification Handbook. Isd.


HTC. (2022, March 21). HTC VIVE Announces VIVE Arts Program. VIVE Blog. https://blog.vive.com/us/2017/11/08/htc-vive-announces-vive-arts-program


Hull, G. (2021). Infrastructure, Modulation, Portal: Thinking with Foucault about how Internet Architecture Shapes Subjects. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3771595


Jerald, J. (2015). The VR Book: Human-Centered Design for Virtual Reality (ACM Books) (Illustrated ed.). Morgan & Claypool Publishers.

Kahn, D. (2001). Noise, Water, Meat. Amsterdam University Press.


Lanier, J. (2018). Dawn of the New Everything: Encounters with Reality and Virtual Reality (Reprint ed.). Picador.


Lee, S. (2018, March 13). Is There Hope for Virtual Reality in Art? Why Marina Abramovic and Jeff Koons Are Not the Answer. Artspace. Retrieved 22 April 2022, from https://www.artspace.com/magazine/interviews_features/art-tech/is-there-hope-for-virtual-reality-in-art-why-marina-abramovic-and-jeff-koons-are-not-the-answer-55318


Luke, B. (2022, April 14). Dominique Gonzalez-Foerster: Alienarium 5 at the Serpentine Gallery review: sci-fi magic. Evening Standard. https://www.standard.co.uk/culture/exhibitions/dominique-gonzalezfoerster-alienarium-5-serpentine-gallery-review-b994243.html


Mir, R., & Rodriguez, K. (2021, August 4). If Privacy Dies in VR, It Dies in Real Life. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2020/08/if-privacy-dies-vr-it-dies-real-life


Nesterak, E. (2018, February 13). The 21st Century Skinner Box. Behavioral Scientist. https://behavioralscientist.org/21st-century-skinner-box/


Powell, L. G., Jr. (2021, December 13). Defining Art and Ethics in VR: An Interview with Mariam Zakarian of Amaryllis VR. Medium. Retrieved 20 April 2022, from https://medium.com/xrbootcamp/defining-art-and-ethics-in-vr-an-interview-with-mariam-zakarian-of-amaryllis-vr-e37302b28533


Richardson, J. (2022, January 20). Virtual Reality is a big trend in museums, but what are the best examples of museums using VR? MuseumNext. Retrieved 23 April 2022, from https://www.museumnext.com/article/how-museums-are-using-virtual-reality/


Roads, C. (2004). Microsound. The MIT Press.


Rodriguez, S. (2019, February 27). Spheres: Chorus of the Cosmos, Songs of Spacetime and Pale Blue Dot. MIT Hacking VR. Retrieved 22 April 2022, from https://www.hackingvr.mit.edu/single-post/2019/02/26/spheres-chorus-of-the-cosmos-songs-of-spacetime-and-pale-blue-dot


Semley, J. (2020, December 7). Cyberpunk is Dead | John Semley. The Baffler. Retrieved 11 February 2022, from https://thebaffler.com/salvos/cyberpunk-is-dead-semley


Serpentine Galleries. (2022, April 14). Dominique Gonzalez-Foerster: Alienarium 5. https://www.serpentinegalleries.org/whats-on/dominique-gonzalez-foerster-alienarium-5/


Southern Poverty Law Center. (2017, May 8). What the Kek: Explaining the Alt-Right ‘Deity’ Behind Their ‘Meme Magic’. https://www.splcenter.org/hatewatch/2017/05/08/what-kek-explaining-alt-right-deity-behind-their-meme-magic


Tracking Hacked Websites - Shodan Help Center. (n.d.). Shodan.Io. https://help.shodan.io/data-analysis/tracking-hacked-websites


Virginia Tech Department of Music DISIS - Digital Interactive Sound and Intermedia Studio. (2018). Portfolio. http://disis.music.vt.edu/main/portfolio.php


Virtual Reality Society. (2020, January 2). History Of Virtual Reality. https://www.vrs.org.uk/virtual-reality/history.html


Wang, A. (2020, August). The Haunting World of Amaryllis VR. WeAnimate Magazine. http://www.amaryllisvr.com/wp-content/uploads/2021/01/2020-08-VOIDmag_006_AmaryllisVR.pdf


Worrall, D. (2019). Sonification Design: From Data to Intelligible Soundfields (Human–Computer Interaction Series) (1st ed. 2019 ed.). Springer.


Appendix


Early Mind Map of Concept

Alienarium 5 VR promo

Spheres pt. 1 - Chorus of the Cosmos


Full Graph Script


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Audio;
using Unity;
using UnityEditor.Experimental.GraphView;
using UnityEditor;

public class Graph : MonoBehaviour
{
    public TextAsset file;
    public GameObject nodepf;
    public GameObject edgepf;
    public float width;
    public float length;
    public float height;

    void Start()
    {
        if (file == null)
        {
            // no GML file supplied: instantiate a small hard-coded test graph (nodes A-E)
            GameObject A = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);
            GameObject B = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);
            GameObject C = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);
            GameObject D = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);
            GameObject E = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);

            // make nodes children of graph object
            A.transform.parent = transform;
            B.transform.parent = transform;
            C.transform.parent = transform;
            D.transform.parent = transform;
            E.transform.parent = transform;

            // change name
            A.name = "node A";
            B.name = "node B";
            C.name = "node C";
            D.name = "node D";
            E.name = "node E";

            // get script instances
            Node AS = A.GetComponent<Node>();
            Node BS = B.GetComponent<Node>();
            Node CS = C.GetComponent<Node>();
            Node DS = D.GetComponent<Node>();
            Node ES = E.GetComponent<Node>();

            // add edges
            AS.SetEdgePrefab(edgepf); AS.AddEdge(BS);
            AS.AddEdge(CS);
            CS.SetEdgePrefab(edgepf); CS.AddEdge(DS);
            DS.SetEdgePrefab(edgepf); DS.AddEdge(ES);
            DS.AddEdge(AS);
        }
        else
        {
            LoadGMLFromFile(file);
        }
    }

    void Update() { }

    void LoadGMLFromFile(TextAsset f)
    {
        string[] lines = f.text.Split('\n');
        int currentobject = -1; // 0 = graph, 1 = node, 2 = edge
        int stage = -1;         // simple parser state: which keyword was last seen
        Node n = null;
        Dictionary<string, Node> nodes = new Dictionary<string, Node>();

        foreach (string line in lines)
        {
            string l = line.Trim();
            string[] words = l.Split(' ');
            foreach (string word in words)
            {
                if (word == "graph" && stage == -1)
                {
                    currentobject = 0;
                }
                if (word == "node" && stage == -1)
                {
                    currentobject = 1;
                    stage = 0;
                }
                if (word == "edge" && stage == -1)
                {
                    currentobject = 2;
                    stage = 0;
                }
                if (word == "[" && stage == 0 && currentobject == 2)
                {
                    stage = 1;
                }
                if (word == "[" && stage == 0 && currentobject == 1)
                {
                    stage = 1;
                    // spawn a node prefab at a random position within the graph volume
                    GameObject go = Instantiate(nodepf, new Vector3(Random.Range(-width / 2, width / 2), Random.Range(-length / 2, length / 2), Random.Range(-height / 2, height / 2)), Quaternion.identity);
                    n = go.GetComponent<Node>();
                    n.transform.parent = transform;
                    // adds a new audio component to the node
                    go.AddComponent<AudioSource>();
                    n.SetEdgePrefab(edgepf);
                    continue;
                }
                if (word == "]")
                {
                    stage = -1;
                }
                if (word == "id" && stage == 1 && currentobject == 1)
                {
                    stage = 2;
                    continue;
                }
                if (word == "label" && stage == 1 && currentobject == 1)
                {
                    stage = 3;
                    // the word following "label" names the node's audio clip: the clip is loaded
                    // from Assets/AudioClips/<label>.wav, set to loop, spatialised and started immediately
                    if (n != null)
                    {
                        AudioSource aSource = n.gameObject.GetComponent<AudioSource>();
                        aSource.clip = (AudioClip)AssetDatabase.LoadAssetAtPath("Assets/AudioClips/" + words[System.Array.IndexOf(words, word) + 1] + ".wav", typeof(AudioClip));
                        aSource.loop = true;
                        aSource.spatialize = true;
                        aSource.spatialBlend = 1.0f;
                        aSource.Play();
                    }
                    continue;
                }
                if (stage == 2)
                {
                    nodes.Add(word, n); // remember the node by its GML id
                    stage = 1;
                    break;
                }
                if (stage == 3)
                {
                    n.name = word; // name the GameObject after its label
                    stage = 1;
                    break;
                }
                if (word == "source" && stage == 1 && currentobject == 2)
                {
                    stage = 4;
                    continue;
                }
                if (word == "target" && stage == 1 && currentobject == 2)
                {
                    stage = 5;
                    continue;
                }
                if (stage == 4)
                {
                    n = nodes[word]; // edge source node
                    stage = 1;
                    break;
                }
                if (stage == 5)
                {
                    n.AddEdge(nodes[word]); // connect source node to target node
                    stage = 1;
                    break;
                }
            }
        }
    }
}

Node Script


using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Node : MonoBehaviour
{
    GameObject epf;
    List<GameObject> edges = new List<GameObject>();
    List<SpringJoint> joints = new List<SpringJoint>();

    void Start()
    {
        transform.GetChild(0).GetComponent<TextMesh>().text = name;
    }

    void Update()
    {
        int i = 0;
        foreach (GameObject edge in edges)
        {
            edge.transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
            SpringJoint sj = joints[i];
            GameObject target = sj.connectedBody.gameObject;
            edge.transform.LookAt(target.transform);
            Vector3 ls = edge.transform.localScale;
            ls.z = Vector3.Distance(transform.position, target.transform.position);
            edge.transform.localScale = ls;
            edge.transform.position = new Vector3((transform.position.x + target.transform.position.x) / 2,
                                                  (transform.position.y + target.transform.position.y) / 2,
                                                  (transform.position.z + target.transform.position.z) / 2);
            i++;
        }
    }

    public void SetEdgePrefab(GameObject epf)
    {
        this.epf = epf;
    }

    public void AddEdge(Node n)
    {
        SpringJoint sj = gameObject.AddComponent<SpringJoint>();
        sj.autoConfigureConnectedAnchor = false;
        sj.anchor = new Vector3(0, 0.5f, 0);
        sj.connectedAnchor = new Vector3(0, 0, 0);
        sj.enableCollision = true;
        sj.connectedBody = n.GetComponent<Rigidbody>();
        GameObject edge = Instantiate(this.epf, new Vector3(transform.position.x, transform.position.y, transform.position.z), Quaternion.identity);
        edges.Add(edge);
        joints.Add(sj);
    }
}
