Keynote Speakers

Keynote 1
Taming the untameable: How to study naturalistic music listening in the brain by means of computational feature extraction

Listening to musical sounds is a brain function that likely appeared tens of thousands of years ago, in Homo sapiens and perhaps even in Neanderthals. The peripheral hearing apparatus has taken its shape to decompose sounds, transforming air pressure waves into neural impulses and extracting their frequency content, in a way similar to a Fourier transform, at the level of the basilar membrane in the inner ear. These neural codes are then passed through the several relay stations of the central nervous system until they reach the primary and non-primary auditory cerebral cortex. How these codes for musical sounds are obtained and represented in the cerebral cortex is only partially understood. To investigate this, various stimulation paradigms have been developed, most of them far removed from the naturalistic, constantly varying sound environments of everyday life, in order to maintain strict control over the manipulated variables. This controlled approach limits the generalization of findings to real-life listening situations. In our recent studies, we introduced a novel experimental paradigm in which participants are simply asked to listen to music naturalistically rather than to perform tasks in response to artificial sounds. This free-listening paradigm benefits from music information retrieval: features computationally extracted from the music are treated as time-series variables and related to the brain signal. Our studies have advanced the understanding of music processing in the brain, demonstrating activity in large-scale networks connecting audio-motor, emotion and cognitive regions of the brain during listening to whole pieces of music.
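A minimal sketch of the feature-to-brain idea described above, under stated assumptions: the stimulus file path, the brain-signal sampling rate, and the brain time series itself are all hypothetical placeholders, librosa supplies the MIR feature extraction, and a simple Pearson correlation stands in for the more elaborate statistical models used in actual studies.

```python
# Sketch (not the authors' actual pipeline): extract acoustic features from a music
# recording with librosa, resample them to the temporal resolution of a (here simulated)
# brain signal, and relate the two time series by correlation.
import numpy as np
import librosa
from scipy.signal import resample
from scipy.stats import pearsonr

AUDIO_FILE = "stimulus.wav"   # hypothetical path to the music the participant heard
BRAIN_FS = 2.0                # hypothetical brain-signal sampling rate in Hz (assumption)

# 1. Load the musical stimulus and extract frame-wise features (RMS energy, spectral centroid).
y, sr = librosa.load(AUDIO_FILE, sr=22050)
hop = 512
rms = librosa.feature.rms(y=y, hop_length=hop)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]

# 2. Resample each feature time series to the brain signal's sampling rate.
duration_s = len(y) / sr
n_brain_samples = int(duration_s * BRAIN_FS)
rms_ds = resample(rms, n_brain_samples)
centroid_ds = resample(centroid, n_brain_samples)

# 3. A brain time series of matching length (simulated noise here, purely for illustration);
#    in a real study this would be, e.g., the BOLD signal from one voxel or region.
brain_signal = np.random.randn(n_brain_samples)

# 4. Relate each musical feature to the brain signal.
for name, feat in [("RMS energy", rms_ds), ("spectral centroid", centroid_ds)]:
    r, p = pearsonr(feat, brain_signal)
    print(f"{name}: r = {r:.3f}, p = {p:.3f}")
```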

Elvira Brattico

Bio

Professor Elvira Brattico, PhD, is Principal Investigator at the Center for Music in the Brain, a center of excellence funded by the Danish National Research Foundation and affiliated with the Department of Clinical Medicine at Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark. In Finland, she is also Adjunct Professor ("Dosentti") of Biological Psychology at the University of Helsinki and of Music Neuroscience at the University of Jyväskylä. She has a background as a classical concert pianist in Italy and holds a PhD in Cognitive Neuroscience and Brain Research Methods from the University of Helsinki (2007). During her academic career, she has published more than 140 scientific papers, as well as 2 books and several invited book chapters (e.g., for Oxford University Press and Routledge). She is an internationally recognized pioneer in music research, particularly with regard to naturalistic music neuroscience combining MIR with brain signals. She is also a leader in music neuroaesthetics and neuroplasticity studies, as evidenced by her keynote addresses at several international conferences, her associate editor appointments (e.g., Frontiers, Psychomusicology, PLOS ONE), and her board memberships of international scientific societies (e.g., International Association for Empirical Aesthetics, ESCOM Italy, Neuromusic) and training networks (Auditory Neuroscience, CICERO Learning). She also has wide experience in supervision and teaching, having given several courses on experimental musicology, cognitive neuroscience, emotions, neurophysiology and brain research methods in Finland, Denmark, Italy and Spain.


Keynote 2
Towards explicating implicit musical knowledge: How the computational modeling of musical structures mediates between curiosity-driven and application-oriented perspectives

Over the past decades we have witnessed a rapid development of music technology for many different application contexts, such as music recommender systems, music search engines, automatic music generation systems, and new interactive musical instruments. These technologies have enabled new ways of accessing and interacting with music. At the same time, the process of developing them from an application-oriented perspective has revealed many open questions about music as a fundamental human trait. In this talk I will discuss how the explicit modeling of musical structures in the computational domain uncovers layers of implicit musical knowledge applied by expert and ordinary listeners when interacting with music. Starting from our research on developing online search methods for Dutch folk songs and online music education systems, I will demonstrate how crucial concepts such as music similarity, harmonic variance, and repeated patterns are scrutinized in the process of developing computational models. Explicit modeling within the computational context enhances our understanding of how we employ these concepts implicitly when interacting with music. This contributes to curiosity-driven research about music as a fundamental human trait, paving the way for cross-disciplinary approaches to music encompassing computer science, musicology and cognition.
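As a deliberately simple illustration of how a concept like music similarity becomes explicit once it is modeled computationally, the sketch below encodes two melodies as pitch-interval sequences and compares them with an edit distance. The melodies and the encoding are hypothetical, and real folk-song similarity models are far richer; the point is only that the computational formulation forces choices (interval encoding, substitution costs) that remain implicit in ordinary listening.

```python
# Toy melodic-similarity sketch: compare pitch-interval sequences with an edit distance.

def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(a)][len(b)]

def intervals(midi_pitches):
    """Encode a melody by its successive pitch intervals (transposition-invariant)."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

# Two hypothetical melodic fragments given as MIDI pitch numbers.
melody_a = [60, 62, 64, 65, 67]       # C D E F G
melody_b = [62, 64, 66, 67, 69]       # same contour, transposed up a tone
melody_c = [60, 60, 67, 67, 69, 69]   # a different melody

print(edit_distance(intervals(melody_a), intervals(melody_b)))  # 0: identical interval patterns
print(edit_distance(intervals(melody_a), intervals(melody_c)))  # > 0: less similar
```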

Anja Volk

Bio

Anja Volk (MA, MSc, PhD), Assistant Professor in Information and Computing Sciences at Utrecht University, has a dual background in mathematics and musicology which she applies to cross-disciplinary approaches to music. She has an international reputation in the areas of music information retrieval (MIR), computational musicology, and mathematical music theory. Her work has helped bridge the gap between scientific and humanistic approaches while working in interdisciplinary research teams in Germany, the USA and the Netherlands. In 2011, she started her own research group at Utrecht University at the intersection of MIR, musicology and cognition. Her research aims at enhancing our understanding of music as a fundamental human trait while applying these insights to developing music technologies that offer new ways of interacting with music. Anja has given numerous invited talks worldwide and has held editorships at leading journals, including the Journal of New Music Research and Musicae Scientiae. She has co-founded several international initiatives, most notably the International Society for Mathematics and Computation in Music (SMCM), the flagship journal of the International Society for Music Information Retrieval (TISMIR), and the Women in MIR (WIMIR) mentoring program, which organizes yearly mentoring rounds with participants from academia and industry in order to foster greater diversity in MIR. Anja's commitment to diversity and inclusion was recognized with the Westerdijk Award from Utrecht University in 2018. She is also committed to connecting different research communities and to providing interdisciplinary education for the next generation through the organization of international workshops, such as the Lorentz Center workshops in Leiden on music similarity (2015), computational ethnomusicology (2017), and music, computing, and health (2019).


Keynote 3
Music's changing fast; FAST is changing music

The FAST project (Fusing Audio and Semantic Technology for Intelligent Music Production and Consumption), with five years of UK funding, seeks to create a new technological ecosystem for recorded music that empowers people throughout the value chain, from professional performers to casual listeners, and thereby to help them engage in new, more creative, immersive and dynamic musical experiences.
In the future, music experiences will demand far richer musical information that supplements the digital audio. FAST foresees that music content will be packaged in a flexible, structured way that combines audio recordings with rich, layered, standardised metadata to support interactive and adaptive musical experiences. The core unifying notion of FAST is the embodiment of these packages as Digital Music Objects, constructed using the Semantic Web concepts of ontologies, linked data and RDF. FAST therefore proposes to lay the foundations for a new generation of 'semantic audio' technologies that will underpin diverse future music experiences.
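To make the linked-data idea behind a Digital Music Object more concrete, here is a minimal, hypothetical sketch using rdflib: an audio recording is tied to layered metadata as RDF triples. The namespace and property names (ex:DigitalMusicObject, ex:hasAudio, ex:tempoBPM) are invented for illustration and are not the FAST project's actual ontologies.

```python
# Hypothetical sketch of a Digital Music Object as RDF / linked data (illustrative terms only).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

EX = Namespace("http://example.org/dmo/")   # invented example namespace, not a FAST ontology

g = Graph()
g.bind("ex", EX)
g.bind("dcterms", DCTERMS)

dmo = URIRef(EX["object/42"])
audio = URIRef(EX["audio/42.flac"])
artist = URIRef(EX["artist/jane-doe"])

# The Digital Music Object links the audio recording to layered, structured metadata.
g.add((dmo, RDF.type, EX.DigitalMusicObject))
g.add((dmo, DCTERMS.title, Literal("Example Track")))
g.add((dmo, EX.hasAudio, audio))
g.add((dmo, DCTERMS.creator, artist))
g.add((artist, RDF.type, FOAF.Person))
g.add((artist, FOAF.name, Literal("Jane Doe")))
# An analysis layer, e.g. an automatically extracted tempo, attached as further metadata.
g.add((dmo, EX.tempoBPM, Literal(120)))

print(g.serialize(format="turtle"))
```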
This keynote will describe the overall vision of FAST and, by highlighting some key outcomes (including some live demos), explore the notion of Digital Music Objects and where they occur in the music production-consumption value chain.

Mark Sandler

Bio

Mark Sandler, FREng, received the BSc and PhD degrees from the University of Essex, UK, in 1978 and 1984, respectively. He is Professor of Signal Processing and Founding Director of the Centre for Digital Music in the School of Electronic Engineering and Computer Science at Queen Mary University of London, UK. He has published nearly 500 papers in journals and conferences and has graduated around 40 PhD students. He is a Fellow of the Royal Academy of Engineering, the IEEE, the AES and the IET. He is Principal Investigator on the FAST Programme grant (semanticaudio.ac.uk) and a co-investigator in the UKRI Centre for Doctoral Training in AI and Music (aim.qmul.ac.uk).