Data visualization is a fundamental tool for understanding our increasingly data-oriented world. What can visualization of sound data teach us about musical concepts like pitch, rhythm, and timbre? And, conversely, might we gain a deeper understanding of data by translating it into sound, rather than images?
The Sound Visualization & Data Sonification Hackathon had two goals. The first was to use the mature tools of data visualization to analyze music and help express music to the deaf and hard of hearing. The second was to encourage attendees to use their knowledge of music to create visceral communications of data sets through sound.
Monthly Music Hackathon NYC (MMH) brings together diverse NYC-area music communities to share ideas and create new art and research. Each month, MMH spotlights a unique theme and invites experts to present a range of viewpoints. In a single day, participating musicians, programmers, artists, scientists, composers, hardware tinkerers, designers, musicologists, and entrepreneurs go through the full lifecycle of a creative project, from brainstorming and planning to building and demoing or performing.
The Sound Visualization & Data Sonification Hackathon was the 36th event in this community-run series, and the 13th hosted at Spotify NYC. With over 200 attendees, it was one of the biggest events in MMH’s three-and-a-half-year history.
Jay Alan Zimmerman is a successful composer who has become deaf. He presented a challenge to the hackers: create meaningful visualizations of music that express the substance of music, conjuring a similarly musical experience through vision. He urged participants to avoid the pitfalls of creating “eye candy” and “eye stats” and to instead create a true “eye music.”
Jay’s presentation resonated with everyone in the audience — including those with hearing impairment, thanks to the efforts of the second presenter, Mirabai Knight. The stenographer, musician, and all-around hacker transcribed the entire event using her real-time, open source transcription-assistance software, Plover. In her presentation, she explained how her system assists a human transcriber in the task of turning speech sounds into visual text.
Designer Pia Blumenthal presented Sonify, a web-based musical image editor. Sonify is an interface for creating sounds through images. Effects such as “contrast” and “brightness” are simultaneously visual and auditory. About the interpretation of data through sonification and the human auditory and visual systems, Blumenthal said,
“Not only do we hear at a higher resolution than we see — 44,100 Hz compared to the traditional 24 fps — but as we listen we can parse multiple streams of information (pitch, timbre, location, duration, source separation, etc.) with no effort. Because of our human ability to identify simultaneous changes in different auditory dimensions, we can integrate them into comprehensive mental images. Sonification can be used to exploit this for the purposes of identifying trends and patterns in large sets of data. And when paired with data visualization, sonification can provide a more holistic approach to exploring information.”
Artist and programmer Brian Foo is the creator of Data-Driven DJ, “a series of music experiments that combine data, algorithms, and borrowed sounds.” He described his process for creating the first part of this ongoing series, a sonification of income inequality on the NYC subway. When asked how data sonification can be used to interpret data and about the differences between pattern recognition in the human auditory and visual systems, Foo said,
“Unlike charts or visualizations, music is abstract and not that great at representing or communicating large sets of data accurately or efficiently. However, music has a few clear advantages. First, it has the ability to evoke emotion and alter mood which can help the listener understand data intuitively and suggest how they should feel about the data rather than just communicate the data itself. Also, music is beneficial to the creator since they can curate a temporal experience for the listener that can be consumed casually and viscerally. In contrast, a visual chart can be navigated in many ways and usually does not impose a particular narrative structure. Lastly, music gets stuck in the listener’s head. If one can attach meaning or data to that music, perhaps those things (e.g. economic issues, environmental issues, personal stories, etc.) will get stuck in the listener’s head as well.”
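The core move in a sonification like Foo’s is mapping a data series onto musical pitch. The sketch below illustrates one common approach — scaling values into a range of scale degrees and quantizing to a major scale. The income figures are illustrative placeholders, not Foo’s actual dataset, and the mapping scheme is a generic choice, not necessarily the one Data-Driven DJ uses:

```python
# Hypothetical data: median household income (in thousands of USD) at
# successive subway stops -- illustrative numbers only.
incomes = [32, 41, 58, 95, 142, 110, 67, 45]

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets


def value_to_midi(value, lo, hi, base_note=48, octaves=2):
    """Map a value in [lo, hi] onto a MIDI pitch quantized to C major.

    The data range is scaled linearly across `octaves` octaves of the
    scale, starting from `base_note` (48 = C3).
    """
    degrees = len(C_MAJOR) * octaves
    idx = int(round((value - lo) / (hi - lo) * (degrees - 1)))
    octave, degree = divmod(idx, len(C_MAJOR))
    return base_note + 12 * octave + C_MAJOR[degree]


lo, hi = min(incomes), max(incomes)
melody = [value_to_midi(v, lo, hi) for v in incomes]
```

Quantizing to a scale, rather than mapping values to raw frequencies, keeps the result recognizably musical — which, per Foo’s point above, is exactly what lets the listener attach feeling to the data.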
Hackers included high-school-aged members of Girls Who Code, numerous PhD music information retrieval scientists, two big groups of recent graduates from Full Stack Academy coding school, students and graduates of NYU’s ITP, MusEdLab, and Steinhardt, and many other musicians and programmers. A group from the United States Holocaust Memorial Museum in Washington, DC came to explore sonification of their data as a potential future exhibition. Below are some highlights from the 20 hacks demoed.
Ambiance by Alec Zopf

Creator’s description: Sonification of image concepts – turn Instagram photos into ambient soundscapes!

The Sound of Success by Keith Davis and Thomas Levine

Creators’ description: A sonification of IED casualties and sentiment towards the Afghan war from 2008 to 2012. See video here.

Biela Podlaska Sonification by Nate Joseph, Pete McNeelym, Rocio DeLaO, Kiran Chitraju, Justin Gaba, Qian Sun, Michael Haley Goldman

Creators’ description: What do historical records sound like? Can sonification help us better understand history? Using incredibly messy data from WWII-era records of the Polish community Biela Podlaska, this dedicated team attempted to adapt Brian Foo’s “Two Trains” sonification to the stories of families and communities during the Holocaust.
Creators’ description: A web application that allows you to upload any song and add meows to the beats accompanied by a cat music video.
Creators’ description: Transform images to audio, and back again, using the Fourier transform. We wanted to explore the way digital audio and images are processed by a computer: both use similar signal processing techniques and effects. Our idea was to convert image data to sound data, and vice versa, so that we could apply sound effects to images, and image effects to sounds.
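The idea of treating pixels as samples can be sketched with NumPy alone. The snippet below is a minimal illustration of the concept, not the team’s actual code: it flattens a grayscale image into a 1D “audio” signal, applies a low-pass filter (a classic audio effect) in the frequency domain, and packs the result back into an image. A synthetic gradient stands in for a real photo:

```python
import numpy as np


def image_to_signal(img):
    """Flatten a 2D grayscale image into a 1D 'audio' signal in [-1, 1]."""
    return img.astype(np.float64).ravel() / 127.5 - 1.0


def signal_to_image(sig, shape):
    """Pack a 1D signal back into an image, clipping to valid pixel range."""
    pixels = np.clip((sig + 1.0) * 127.5, 0, 255)
    return pixels.reshape(shape).astype(np.uint8)


def lowpass(sig, keep=0.1):
    """An 'audio' low-pass filter: zero all but the lowest `keep` fraction
    of frequency bins, then invert the transform."""
    spectrum = np.fft.rfft(sig)
    cutoff = int(len(spectrum) * keep)
    spectrum[cutoff:] = 0
    return np.fft.irfft(spectrum, n=len(sig))


# A synthetic 64x64 horizontal gradient stands in for a real photo.
img = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (64, 1))
filtered = signal_to_image(lowpass(image_to_signal(img)), img.shape)
```

Run on an actual image, the low-pass filter smears detail along the scan direction — an audio effect made visible.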
jams.sonify by Brian McFee
Creator’s description: Automagical sonification of annotations made by JAMS: A JSON Annotated Music Specification for Reproducible MIR Research
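A typical JAMS annotation stores event times (beats, onsets, segment boundaries), and the simplest way to sonify those is to render each time as an audible click. The sketch below shows that idea with plain NumPy; it is a generic illustration of annotation sonification, not the jams.sonify implementation:

```python
import numpy as np


def clicks(times, sr=22050, freq=1000.0, click_len=0.05):
    """Render a list of event times (in seconds) as short decaying tones.

    Returns a mono audio buffer at sample rate `sr` long enough to hold
    the last click.
    """
    duration = max(times) + click_len
    audio = np.zeros(int(sr * duration))
    t = np.arange(int(sr * click_len)) / sr
    click = np.sin(2 * np.pi * freq * t) * np.exp(-t * 60)  # decaying tone
    for time in times:
        start = int(sr * time)
        end = min(start + len(click), len(audio))
        audio[start:end] += click[: end - start]
    return audio


# e.g. beat times that might come from a JAMS beat annotation
beats = [0.5, 1.0, 1.5, 2.0]
audio = clicks(beats)
```

Overlaying a click track like this on the original recording is a quick aural sanity check that an annotation lines up with the music.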
Monthly Music Hackathon’s Upcoming Events
- January 30: Lyrics & Language in Music
- February 20: Automatic Music – Algorithmic composition and performance
- March 26: Hip Hop Hackathon
- April: Gender in Music
- May: Classical Music
- June: New Musical Instruments (article about last year’s)
- July: What is Music Information Research? (right before ISMIR)
- August: Music of Asia (right after ISMIR)
- September: Free For All
- October: Education
- November: Visualization
- December: Low Tech