DADAsets is an ongoing interdisciplinary music project, and a response to the cultural and economic ecosystem of voice AI and voice data that is rapidly terraforming the meaning and function of voice. The project involves the research and development of open digital music tools and bespoke voice datasets that challenge the popular narratives and agendas around voice AI – narratives that focus on the spectacle of, and fears around, technological results such as digital clones that perfectly reproduce the voice of a famous narrator or pop singer. DADAsets aims to create work that playfully, intimately and artistically foregrounds the less visible labor and relationships around voice AI, and to build conversations around vocal values.

While the dominant narrative of generative AI often focuses on the uncanny realism, and the shock and awe, of its audible and visible results, the goal of DADAsets is to decenter the spectacle of output and instead foreground AI’s total dependence on a complex ecosystem of training data, often involving huge amounts of hidden vocal labor.

The project will develop throughout 2024 as a series of public workshops, interviews, technological R&D into open voice synthesis tools, and artistic voice data collection experiments. The workshops will map out communities of voice AI and voice data, and lead to a proof-of-concept public dataset (DADAset) as an archetype for future diverse and ethically sourced voice datasets – especially those that fall outside the mainstream economies of voice data and music AI. These DADAsets will be carefully crafted in collaboration with vocal artists who sit outside these cultural and economic value systems – such as experimental vocalists who have developed a uniquely personal craft, and vocal communities across different traditions and cultures – and will be released under a speculative fair use license.

The history of voice technologies such as the vocoder and Auto-Tune tells us that the most interesting part of these technologies is usually how they are “misused”. On the technological and scientific side, DADAsets involves the creation of a new AI voice synthesis instrument, “Tungnaa” (named after the Icelandic “River of Tongues”): an open, hackable, fun and playful software instrument that allows artists to explore the unique aesthetics of neural-network-generated voice audio. Tungnaa will be able to run on a modest laptop without the need for high-end GPU computing resources, and without its underlying technology being hidden behind a web-based gatekeeping portal or paid service. Inspired by live coding and the typographical experiments of the postwar Dada art movement, Tungnaa also invites artists to invent their own text-based vocal notation systems – notations for all the possible things a human voice can do, including those that exist beyond conventional language or popular singing styles.

This work is supported by S+T+ARTS AIR, an artist residency program of the European Commission that supports interdisciplinary work at the intersection of art, science and technology, with a focus on holistic and human-centered ways of thinking. The project is developed in collaboration with the core AIR hub PiNA, in Koper, and with composer Mauricio Valdes, who runs PiNA’s spatial sound lab HEKA.

More announcements to come!
Special thanks to:

PiNA – Association for Culture and Education, Koper

University of Sussex
Experimental Music Technologies Lab (EMUTE)
Sussex Digital Humanities Lab

Intelligent Instruments Lab, Iceland Academy of Arts

The Leverhulme Trust, who support my doctoral research into real-time performance with vocal AI

S+T+ARTS EU, who are funding DADAsets via the European Union call CNECT/2022/3482066 – Art and the digital: Unleashing creativity for European industry, regions, and society under grant agreement LC-01984767. It is part of the S+T+ARTS programme.

Category: Community, Performance, Research, Workshop