Speaking and Listening with Sound Objects was exhibited at @soundscenefest on June 3 and 4 at the @hirshhorn in collaboration with Susan Jahoda @susan.jahoda , with sound contributions from Camilla Padgitt-Coles @ivymeadows , Emily Saltz @saltzshaker , and Hannah Tardie @hannahtardie .
Thank you to the DC Listening Lounge and @goetheinstitut in DC for funding this project.
Every time light gets in, an object plays something from a sound bank, so the pairings are somewhat surprising combinations. People are invited to listen to the sound pairings between objects, in this case by pulling apart (opening to light) the halves of both the bread and the brick.
More:
In creating such object pairings I hoped to gesture toward the intersection of organic-flowing and organized activities, reflecting on the observation that systems and actions emerge from group interactions in ways that can't be entirely known or predicted. A person can compose a song and use an instrument to play it, but one can't prefiguratively compose how social and ecological systems interact, for example. The "feral piano" doesn't sound like it's being controlled by a human.
The objects use a light sensor to turn on sounds. Played at random (via the random-selection feature of the software Max/MSP), the sounds range from abstracted recordings of collective animal and human groups vocalizing illegibly, to the bubbling of fermentation (bread), to sounds of protest, of bricks falling, of the window crashing.
The first video shows objects that emit varying sounds. // These are two of the objects in a testing-session recording from a group of sound objects made for @soundscenefest in collaboration with Susan Jahoda. Invited by @goethe_dc .
Some of the sounds were generated by sound artist Camilla Padgitt-Coles (drones of different tones that seem to be conversing, though the clip is fairly short in this video) and Emily Saltz, and some are sound samples I found, like a brick falling and a hint of a protest or some unidentifiable group action. I also used sounds I made in Max/MSP: a granular synthesis of a piano piece and a separate granular synthesis of a group of rhinoceroses.
The granular synthesis abstracts the sounds, making the rhino vocalizations less identifiable as a specific species. What I want to get across, among other things, is an impression of groups of living organisms, both human and nonhuman, and to highlight the similarities between those groups.
The granular synthesis of the piano was also a way to blur the specifics, making it less legible as a traditional human-made composition and rendering it more organic and erratic. I wanted to bring it closer to the generative aesthetic of the murmurs of groups of people and their counterparts in the animal world (masked rhino vocalizations).
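For anyone curious how granular synthesis does this blurring, here is a minimal Python sketch (the actual piece uses a Max/MSP patch, and every name and parameter value below is illustrative): short, enveloped "grains" of a source recording are scattered at random positions in an output buffer, so the result keeps the source's timbre but loses its original order and identity.

```python
import math
import random

def granulate(samples, grain_len=441, n_grains=200, out_len=44100, seed=0):
    """Toy granular synthesis: scatter short enveloped grains
    read from random positions in the source into random positions
    in the output buffer."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    for _ in range(n_grains):
        src = rng.randrange(0, len(samples) - grain_len)  # where to read a grain
        dst = rng.randrange(0, out_len - grain_len)       # where to place it
        for i in range(grain_len):
            # Hann window fades each grain's edges so grains don't click
            env = 0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
            out[dst + i] += samples[src + i] * env
    return out

# demo on a synthetic 1-second, 440 Hz sine "source" at 44.1 kHz
source = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
cloud = granulate(source)
```

Because the grains land in random order, the pitch material of the source survives while its phrasing dissolves, which is roughly the effect described above.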
The second video captures the objects as they vocalized intermittently together on the lawn of the Hirshhorn.
Technical details:
Technical (sensors and Max/MSP) help from Cem Çakmak.
Turns out you just have to put a light sensor in each object and tell Max/MSP to play from a sound library at random. This way you're always surprised!
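That trigger logic is simple enough to sketch in a few lines of Python (the real piece does this inside a Max/MSP patch; the file names and threshold below are made up for illustration):

```python
import random

# hypothetical sound bank -- file names are illustrative, not the actual samples
SOUND_BANK = ["rhino_grains.wav", "piano_grains.wav", "bread_ferment.wav",
              "protest_crowd.wav", "brick_fall.wav", "window_crash.wav"]

LIGHT_THRESHOLD = 300  # made-up sensor value meaning "object has been opened"

def on_sensor_reading(reading, rng=random):
    """Called whenever the light sensor reports a value.
    When light gets in (object opened), pick a sound at random."""
    if reading > LIGHT_THRESHOLD:
        return rng.choice(SOUND_BANK)  # a surprise pairing every time
    return None  # object still closed: stay silent

print(on_sensor_reading(800))  # some random file from the bank
print(on_sensor_reading(50))   # prints None
```

Opening the same object twice can yield two different sounds, which is where the surprising pairings come from.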
Inside the objects are small Bluetooth speakers.