Heart.FM, selected for ERC proof-of-concept funding, is an initiative to create an app that delivers tailored music therapy with physiological feedback for cardiovascular disease.
All members of the COSMOS team are affiliated with the STMS (Sciences et technologies de la musique et du son) Laboratory, a UMR (Unité mixte de recherche) joint research unit comprising the CNRS (Centre national de la recherche scientifique), IRCAM (Institut de Recherche et Coordination Acoustique/Musique), Sorbonne Université, and the French Ministère de la Culture. Doctoral students are enrolled in Sorbonne University’s EDITE (Informatique, Télécommunications et Électronique de Paris) Doctoral School.
Daniel Bedoya, PhD student, designs citizen science experiments to help understand musical structures created in performance, and analyzes the perception of musical structures in performed music and physiological responses to these performed structures. He has an undergraduate degree in Sound Engineering (UDLA Quito-Ecuador) and a Master’s degree in Computer Science, Acoustics and Signal Processing Applied to Music (ATIAM – IRCAM-Sorbonne Université). Previously, he was a research assistant with Jean-Julien Aucouturier in the Perception and Sound Design (PDS) Team, working on the relationship between music and emotions in the ERC project CREAM and exploring the influence of smiled speech in dyadic interactions in the REFLETS project.
Elaine Chew is Principal Investigator of the ERC ADG project COSMOS. Her research centers on the mathematical and computational modeling of musical structures, with present focus on structures as they are communicated in performance and in ECG traces of cardiac arrhythmias. As a pianist, she has collaborated with composers to create and première new works, and she frequently designs and performs in concerts that present visualizations and compositions created in her research team. She is a past recipient of PECASE/CAREER awards and of fellowships at the Radcliffe Institute for Advanced Study at Harvard in the US. Her research has been supported by the ERC, EPSRC, AHRC, and NSF, and featured on BBC World Service/Radio 3, Smithsonian Magazine, Philadelphia Inquirer, Wired Blog, MIT Technology Review, and the Los Angeles Philharmonic’s Inside the Music. She has also recorded on Albany and Neuma Records.
Emma Frid, Postdoctoral Fellow, is a Swedish Research Council International Postdoctoral Scholarship recipient hosted by the COSMOS project at the STMS Laboratory. Emma’s project is titled Accessible digital musical instruments – Multimodal feedback and artificial intelligence for improved musical frontiers for people with disabilities, in the “Medical technology, other medicine and health care” category. The scholarship is administered by the KTH Royal Institute of Technology in Stockholm. Emma received her PhD in January 2020 from KTH, in Sound and Music Computing from the Division of Media Technology and Interaction Design. Her PhD thesis, entitled “Diverse Sounds – Enabling Inclusive Sonic Interaction,” focused on how Sonic Interaction Design can be used to promote inclusion and diversity in music-making.
Lawrence Fyfe, Research Engineer, is creating web-based visualisation software and database infrastructure to harness volunteer thinking in the project’s citizen science modules. Lawrence received his PhD in Computational Media Design from the University of Calgary and a Master’s degree in Music, Science and Technology from the Centre for Computer Research in Music and Acoustics (CCRMA) at Stanford University. Before joining the COSMOS project, he worked on a binaural telepresence system for the Digiscope project at INRIA, which connected visualisation labs around Paris via telepresence (audio and video conferencing) to facilitate collaboration. Before that, Lawrence developed a website for listening to sonified EEG data, used to facilitate the diagnosis of epileptic seizures.
Emily Graber, Postdoctoral Fellow, is a Marie Skłodowska-Curie Fellow whose Ear Stretch project investigates the role of active tempo control in augmenting enjoyment of contemporary music as measured by physiological monitoring. After studying violin performance at the University of Michigan, Emily received her PhD at Stanford’s Center for Computer Research in Music and Acoustics in 2018. Her doctoral research with Takako Fujioka focused on how performers and listeners anticipate and experience musical tempo changes. Her dissertation, “Neural Correlates of Top-Down Musical Temporal Processing,” examined the process of temporal anticipation with neuroimaging. Following her PhD, Emily was a postdoctoral fellow at the Sunnybrook Research Institute in Toronto, where she examined how interactive musical training assists in rehabilitating speech processing in deaf adults with cochlear implants.
Corentin Guichaoua, Postdoctoral Researcher, is researching mathematical and computational techniques for the automatic extraction of musical structure in performed music and cardiac signals. Previously, he was a postdoc with Moreno Andreatta at the University of Strasbourg in the SMIR project, where he implemented algebraic and topological methods for the systematic analysis and comparison of pieces of music. He holds PhD and Master’s degrees in Computer Science from the University of Rennes 1, and a concurrent Master of Science and Engineering (Diplôme d’Ingénieur) from INSA (Institut national des sciences appliquées). His doctoral thesis, supervised by Frédéric Bimbot, focused on compressed descriptions of chord sequences from pieces of music using formal models, in order to extract information on their structure.
Paul Lascabettes, PhD student, recently joined the team as an ENS (École normale supérieure Paris-Saclay) CDSN scholarship student hosted by the ERC ADG project COSMOS. Paul completed a Master’s in the ATIAM (IRCAM’s Master’s degree in Acoustics, Signal Processing, and Computer Science Applied to Music) Program. As part of his Mathematics studies at the ENS Paris-Saclay, he recently concluded a year-long research exchange at the MARCS Institute for Brain, Behaviour, and Development at Western Sydney University in Australia, where he worked on computational pattern detection for the analysis of the fugues of Bach’s Well-Tempered Clavier with Andrew Milne and David Bulger.
Charles Picasso, Heart.FM Engineer, is responsible for creating an app that delivers personalized music therapy to lower blood pressure based on physiological feedback. He previously spent nine years at IRCAM as a software engineer and has contributed to open-source projects such as SuperCollider. Picasso is also an electronic music composer and sound designer. Classically trained as a musician, he started early as an electronic music producer and has worked as a composer for theatre companies, documentaries, films, and exhibitions. His artistic work focuses on abstract electronic soundscapes and is inspired by generative and experimental processes. His music is often described as contemplative and melancholic.
The Centre for Translational Electrophysiology and Data Science is led by Prof. Pier Lambiase, Professor of Cardiology at UCL’s Institute for Cardiovascular Science and the Barts Heart Centre (BHC), Co-Director of Cardiovascular Research at Barts NHS Trust, BHRS Committee Research Lead, and a member of the European Society of Cardiology and International Heart Rhythm Society clinical guideline committees. The team also includes BHC electrophysiology researchers Prof. Peter Taggart, Dr Ross Hunter (AF Lead), and Dr Michele Orini (Research Associate).
Gonzalo Romero, ATIAM (IRCAM’s Master’s degree in Acoustics, Signal Processing, and Computer Science Applied to Music) intern (Feb 2020–Sep 2020), is developing scalable algorithms for the automatic transcription of rhythmic variations, and applying the computational techniques to create symbolic representations of long arrhythmia ECG sequences for structural analysis. He received a Master’s in Fundamental Mathematics from Sorbonne University, a Bachelor’s degree in Mathematics from the Complutense University of Madrid, and a Bachelor’s degree in Composition from the Madrid Royal Conservatory (Real Conservatorio Superior de Música de Madrid). Gonzalo hails from a musical family and plays the violin and piano.
COSMOS: Computational Shaping and Modeling of Musical Structures (Principal Investigator: Elaine Chew) is a European Research Council Advanced Grant (AdG) project supported by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 788960. COSMOS aims to use data science, optimization and data analytics, and citizen science to study musical structures as they are created in music performances and in unusual sources such as cardiac arrhythmias.
The project is hosted by the Centre National de la Recherche Scientifique (CNRS) at the Sciences et Technologies de la Musique et du Son (STMS) Laboratory, a joint research unit (UMR9912) of the CNRS, the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Sorbonne University, and the French Ministry of Culture. STMS is located at IRCAM, in the heart of Paris.
The project summary is given below and on CORDIS – Grant agreement ID: 788960.
Objective: Music performance is considered by many to be one of the most breathtaking feats of human intelligence. That music performance is a creative act is no longer a disputed fact, but the very nature of this creative work remains elusive. Taking the view that the creative work of performance is the making and shaping of music structures, and that this creative thinking is a form of problem solving, COSMOS proposes an integrated programme of research to transform our understanding of the human experience of performed music, which is almost all music that we hear, and of the creativity of music performance, which addresses how music is made. The research themes are as follows: i) to find new ways to represent, explore, and talk about performance; ii) to harness volunteer thinking (citizen science) for music performance research by focussing on structures experienced and problem solving; iii) to create sandbox environments to experiment with making performed structures; iv) to create theoretical frameworks to discover the reasoning behind the structures perceived and made; and, v) to foster community engagement by training experts to provide feedback on structure solutions so as to increase public understanding of the creative work in music performance. Analysis of the perceived and designed structures will be based on a novel duality paradigm that turns conventional computational music structure analysis on its head to reverse engineer why a perceiver or a performer chooses a particular structure. Embedded in the approach is the use of computational thinking to optimise representations and theories to ensure accuracy, robustness, efficiency, and scalability. The PI is an established performer and a leading authority in music representation, music information research, and music perception and cognition.
The project will have far-reaching impact, reconfiguring expert and public views of music performance and of time-varying music-like sequences such as cardiac arrhythmias.