Pioneering Computational Music and Sound Analysis

Developing machine listening systems for real-world applications

Imagine you are a musician, improvising music while your computer listens and suggests harmonic accompaniments to complete your music-making. Or imagine you are a hiker, using your smartphone to identify a bird species by its song. The development of robust listening machines has many potential applications that could have a transformational impact in music, biology, urban studies and other areas of scholarly and professional practice. Dr. Juan Bello, Associate Professor of Music Technology and Electrical and Computer Engineering at New York University, studies the fundamentals of sound and develops cutting-edge computational methods in the emerging field of machine listening. By examining the patterns of information in music and environmental sounds, Dr. Bello hopes to create systems that can intelligently interact with humans.

Further advancements in machine listening research will enhance the ability to automatically recognize the sources in an auditory scene, determine the pitch, rhythm, structure, and emotional content of recorded music, and help characterize the similarities that exist between sounds or musical pieces. Dr. Bello’s research sits at the intersection of the fields of music, acoustics, electronic engineering, and computer science, while seeking to advance the state of the art in all of them. As a fundamental step towards context awareness in machines, new machine listening technologies can enable novel applications in fields as diverse as robotics, human-computer interaction, hearing aids, and security.
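One classic machine-listening primitive behind several of these tasks is onset detection: finding the moments where new sound events begin, the subject of one of Dr. Bello's tutorials. The sketch below is a generic, minimal illustration of spectral-flux onset detection; the function name, frame sizes and threshold are illustrative choices, not Dr. Bello's implementation.

```python
import numpy as np

def spectral_flux_onsets(signal, frame_len=1024, hop=512, threshold=0.5):
    """Detect sound-event onsets via positive spectral flux."""
    # Split the signal into overlapping, Hann-windowed frames.
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude spectrum of each frame.
    mags = np.abs(np.fft.rfft(frames, axis=1))
    # Spectral flux: total *increase* in spectral magnitude
    # between consecutive frames (decreases are ignored).
    flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
    flux /= flux.max() + 1e-12  # normalize to [0, 1]
    # Frames whose flux exceeds the threshold are reported as onsets.
    return np.flatnonzero(flux > threshold) + 1

# Toy signal: half a second of silence, then a 440 Hz tone.
sr = 22050
t = np.arange(sr) / sr
signal = np.where(t < 0.5, 0.0, np.sin(2 * np.pi * 440 * t))
onsets = spectral_flux_onsets(signal)
onset_time = onsets[0] * 512 / sr  # first detected onset, near 0.5 s
```

Real systems refine this idea considerably (adaptive thresholds, log-magnitude compression, peak picking), but the energy-increase principle is the same.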

Current research projects include:

  • Foundations of Machine Listening: A key thread of Dr. Bello’s research is improving existing audio representations by exploring the trade-off between feature design and feature learning strategies. Data-driven and deep feature learning methods have shown tremendous potential in a variety of domains, including audio. Despite their versatility, however, the success of these methods still depends on thorough adaptation to the specificities of the domain. This requires new paradigms for feature design that differ from how audio features are traditionally generated, yet still benefit from knowledge of signal processing, acoustics, sound perception, and music, extending to the way data is collected, annotated and used. Dr. Bello’s lab is currently applying these insights to develop novel and robust solutions to a range of important machine listening problems.
  • Large Scale Music Analysis and Interaction: Dr. Bello seeks richer descriptions of music content that go beyond standard methods of organizing music by genre, artist, album, sales or play counts. His team is developing techniques for analyzing large collections of music recordings and automatically obtaining chord sequences, melodic lines, sections (e.g. chorus, verse, bridge), bar boundaries, repetitive motifs and instrument identities over time. They are also exploring applications such as cover-song retrieval, identifying similarities between music from different traditions, and interactive accompaniment of solo performers.
  • Understanding Sound Environments: Dr. Bello’s work holds the potential for transformational advances in the way we monitor and interact with the environment around us. His team’s machine listening solutions are soon to be deployed as part of intelligent sensing networks in both urban and natural environments to, for example, identify sources of noise pollution and their emission patterns across time and space, or characterize the composition of bird species in a migratory path. These applications can empower city agencies tasked with monitoring noise pollution and enforcing its regulation, and thus have a direct impact on the quality of life of urban citizens. They can also advance research in fields such as ornithology, marine biology and entomology, helping to understand ecosystems and how they are affected by human activity and other factors.
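To make the chord-sequence analysis above concrete, one standard baseline in music information retrieval matches 12-bin chroma vectors (energy per pitch class) against binary major/minor triad templates. The sketch below is a generic illustration of that well-known technique, not code from Dr. Bello’s lab:

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def chord_templates():
    """Binary 12-bin chroma templates for the 24 major/minor triads."""
    templates, labels = [], []
    for root in range(12):
        for quality, intervals in (('maj', (0, 4, 7)), ('min', (0, 3, 7))):
            t = np.zeros(12)
            t[[(root + i) % 12 for i in intervals]] = 1.0  # triad pitch classes
            templates.append(t / np.linalg.norm(t))
            labels.append(NOTE_NAMES[root] + ':' + quality)
    return np.stack(templates), labels

def recognize_chord(chroma):
    """Label a chroma vector with its best-matching triad template."""
    templates, labels = chord_templates()
    chroma = chroma / (np.linalg.norm(chroma) + 1e-12)
    # Cosine similarity against each template; pick the best match.
    return labels[int(np.argmax(templates @ chroma))]

# Toy chroma frame: energy on C, E and G, plus a little noise elsewhere.
chroma = np.full(12, 0.05)
chroma[[0, 4, 7]] = 1.0
print(recognize_chord(chroma))  # → C:maj
```

Running the matcher frame by frame over a recording yields a chord sequence; the systems studied in Dr. Bello’s work on chord recognition add richer features, smoothing and learned models on top of this basic idea.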


Music has always been a central part of Dr. Juan Bello’s life. He started to play the “cuatro”, a traditional Venezuelan stringed instrument, when he was eight years old, and followed in his teens with several years of classical guitar and piano education. He performed in a number of pop and rock bands in Venezuela, and learned to play the electric bass during graduate school. While he has not performed publicly in many years, Dr. Bello remains an avid music listener and concert-goer, deeply passionate about all things music.

At the same time, Dr. Bello was fortunate to have been taught by several inspiring science and mathematics teachers during his school years, which helped him discover an inclination toward, and a lasting sense of wonder for, these subjects. After high school, he dreamt of becoming an audio engineer. As it happened, college-level education in that subject was not available in his country, which led him to pursue what he thought was the closest fit: electronic engineering. He has not looked back since.

During college, he immersed himself in the study of science, math and technology, and stumbled upon the field of signal processing, which became a natural home for him. He was mentored by perhaps the only professor in the country specializing in audio signal processing at the time, and completed his undergraduate thesis on sound synthesis using optimization techniques. After a brief stint in industry, Dr. Bello was awarded a governmental grant to pursue graduate studies abroad, and was accepted as a doctoral student in a well-regarded audio signal processing lab at King’s College London in the UK. After two years, the entire group moved to Queen Mary University of London, where it expanded to become the Centre for Digital Music. In his years as a doctoral student, postdoctoral researcher and technical manager at the Centre, Dr. Bello finally managed to fully combine his passions for music, science and technology, and became actively engaged in the emergence of a new field, music information retrieval (MIR), which has been the main focus of his research over the last decade.

Since coming to NYU in 2006, Dr. Bello has been both deepening and expanding on that agenda, incorporating knowledge and insights from machine learning, acoustics and music cognition, exploring new and exciting applications from computational musicology to the analysis of environmental sounds, and trying to cast new light on the problems and assumptions that sustain and motivate his work.

Aside from research, Dr. Bello enjoys hiking and spending time with his wife and two daughters.


In the News

The “Science of Music” outreach program

An online initiative to attract interest in Science, Technology, Engineering and Mathematics education through music and music technology


  • Unsupervised Feature Learning for Urban Sound Classification
  • On the relative importance of individual components of chord recognition systems
  • Feature Learning and Deep Architectures: New Directions for Music Informatics
  • Unsupervised discovery of temporal structure in music
  • A tutorial on onset detection in music signals



  • US Fulbright Research Scholar – Multidisciplinary Studies, 2013-2014
    Institute of International Education, Council for International Exchange of Scholars
  • Faculty Early Career Development Award, 2009-2014
    National Science Foundation
  • Best Paper Award, 11th International Conference on Music Information Retrieval (co-authored with R.J. Weiss), Utrecht, The Netherlands, August 2010
  • Best Special Session Paper, 11th International Conference on Machine Learning and Applications (co-authored with E.J. Humphrey), Boca Raton, FL, USA, December 2012