Oral Session S3: Augmented and virtual realities
Thursday, May 30 (09:00–10:30)
Session Chair: Marcella Mandanici
S3.1. Comparison and Implementation of Data Transmission Techniques through Analog Audio Signals in the Context of Augmented Mobile Instruments Romain Michon, Yann Orlarey, Stéphane Letz and Dominique Fober
S3.2. Mass-Interaction Physical Models for Sound and Multi-Sensory Creation: Starting Anew Jerome Villeneuve and James Leonard
S3.3. Exploring the Effects of Diegetic and Non-diegetic Audiovisual Cues on Decision-making in Virtual Reality Anil Çamci
S3.4. OSC-XR: A Toolkit for Extended Reality Immersive Music Interfaces David Johnson, Daniela Damian and George Tzanetakis
S3.5. No Strings Attached: Force and Vibrotactile Feedback in a Guitar Simulation Andrea Passalenti, Razvan Paisa, Niels Christian Nilsson, Nikolaj S. Andersson, Federico Fontana, Rolf Nordahl and Stefania Serafin
Oral Session S4: SMC tools and methodologies
Thursday, May 30 (12:00–13:00)
Session Chair: Emma Frid
S4.1. A Framework for the Evaluation of Interpolated Synthesizer Parameter Mapping Darrell Gibson and Richard Polfreman
S4.2. Composing with Sounds: Designing an Object-Oriented DAW for the Teaching of Sound-Based Composition Stephen Pearse, Leigh Landy, Duncan Chapman, David Holland and Mihai Eni
S4.3. Insights in Habits and Attitudes Regarding Programming Sound Synthesizers: A Quantitative Study Gordan Krekovic
Oral Session S5: Sound synthesis & analysis
Thursday, May 30 (14:30–16:30)
Session Chair: Federico Avanzini
S5.1. Experimental Verification of Dispersive Wave Propagation on Guitar Strings Dmitri Kartofelev, Joann Arro and Vesa Välimäki |
S5.2. Real-Time Modeling of Audio Distortion Circuits with Deep Learning Eero-Pekka Damskägg, Lauri Juvela and Vesa Välimäki |
S5.3. MI-GEN~: An Efficient and Accessible Mass-Interaction Sound Synthesis Toolbox James Leonard and Jerome Villeneuve |
S5.4. Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering Eduardo Magalhães, Emil Rosenlund Høeg, Gilberto Bernardes, Jon Ram Bruun-Pedersen, Stefania Serafin and Rolf Nordahl
S5.5. Percussion Synthesis Using Loopback Frequency Modulation Oscillators Jennifer Hsu and Tamara Smyth
S5.6. Deep Linear Autoregressive Model for Interpretable Prediction of Expressive Tempo Akira Maezawa |
S5.7. Metrics for the Automatic Assessment of Music Harmony Awareness in Children Federico Avanzini, Adriano Baratè, Luca Andrea Ludovico and Marcella Mandanici |
Poster Session P2, Thursday, May 30
Session Chair: Anja Volk
P2.1. RaveForce: A Deep Reinforcement Learning Environment for Music Generation Qichao Lan, Jim Tørresen and Alexander Refsum Jensenius |
P2.2. Music Temperaments Evaluation Based on Triads Meihui Tong and Satoshi Tojo |
P2.3. Composing Space in the Space: An Augmented and Virtual Reality Sound Spatialization System Giovanni Santini
P2.4. Graph Based Physical Models for Sound Synthesis Pelle Juul Christensen and Stefania Serafin |
P2.5. ADEPT: Exploring the Design, Pedagogy, and Analysis of a Mixed Reality Application for Piano Training Lynda Gerry, Sofia Dahl and Stefania Serafin
P2.6. Chord Prediction with The Annotated Beethoven Corpus Kristoffer Landsnes, Liana Mehrabyan, Victor Wiklund, Fabian C. Moss, Robert Lieck and Martin Rohrmeier |
P2.7. Sonic Characteristics of Robots in Films Adrian B. Latupeirissa, Emma Frid and Roberto Bresin |
P2.8. Virtual Reality Music Intervention to Reduce Social Anxiety in Adolescents Diagnosed with Autism Spectrum Disorder Ali Adjorlu, Nathaly Belen Betancourt Barriga and Stefania Serafin |
P2.9. Teach Me Drums: Learning Rhythms through the Embodiment of a Drumming Teacher in Virtual Reality Mie Moth-Poulsen, Tomasz Bednarz, Volker Kuchelmeister and Stefania Serafin |
P2.10. Real-time Mapping of Periodic Dance Movements to Control Tempo in Electronic Dance Music Lilian Jap and Andre Holzapfel |
P2.11. Increasing Access to Music in SEN Settings Tom Davis, Daniel Pierson and Ann Bevan |
Demo Session D2, Thursday, May 30
Session Chair: Hendrik Schreiber
D2.1. Interacting with Musebots (that don't really listen) Arne Eigenfeldt |
D2.2. Extending Jamsketch: An Improvisation Support System Akane Yasuhara, Junko Fujii and Tetsuro Kitahara |
D2.3. Visualizing Music Genres using a Topic Model Swaroop Panda, Vinay P. Namboodiri and Shatarupa Thakurta Roy |
D2.4. CompoVOX: Real-Time Sonification of Voice Daniel Hernán Molina Villota, Antonio Jurado-Navas and Isabel Barbancho |
D2.5. Facial Activity Detection to Monitor Attention and Fatigue Oscar Cobos, Jorge Munilla, Ana M. Barbancho, Isabel Barbancho and Lorenzo J. Tardón |
D2.6. The Chordinator: An Interactive Music Learning Device Eamon McCoy, John Greene, Jared Henson, James Pinder, Jonathon Brown and Claire Arthur |
D2.7. Automatic Chord Recognition in Music Education Applications Sascha Grollmisch and Estefania Cano |
D2.8. Sonic Sweetener Mug Signe Lund Mathiesen, Derek Victor Byrne and Qian Janice Wang |