This post is mostly a brain-dump of some information (actually, a lot of information) gathered over the past week as part of an effort to write a proposal for a sound art thing involving The QuadTool. Gustavo Alfaix suggested doing this, and I am grateful for his interest and help. Hylynyiv Lyngyrkz also provided some advice from past experience, for which I am again grateful.
I’m still figuring this stuff out, so there’s a chance I’m wrong about something, or am leaving out necessary information. If that’s the case, please educate me. So here you go:
Data sonification is, on its own, rather straightforward: find a dataset that has a time series (meaning simply that the data changes over time), map listening objects to the data, season to taste and serve. Yes, that is a bit of an over-simplification; if you want more detail, Shawn Graham’s post is worth reading. One takeaway from that article is the idea of organizing the sonification in terms of classification and clustering.
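To make that pipeline concrete, here is a minimal sketch in Python. Everything in it is a placeholder assumption, not anything specific to The QuadTool or Graham’s post: the `data` list stands in for a real time series, and the linear value-to-pitch map is an arbitrary first choice.

```python
# A minimal sonification sketch: map a time series to pitch and
# render a mono WAV file. The dataset and the mapping choices
# (frequency range, note length) are placeholders to be replaced.
import math
import struct
import wave

SR = 44100        # sample rate in Hz
NOTE_LEN = 0.25   # seconds of audio per data point

data = [3.2, 4.1, 5.0, 4.4, 6.2, 7.9, 6.5, 5.1]  # stand-in time series

lo, hi = min(data), max(data)
frames = bytearray()
for value in data:
    # Normalize the value, then map it across two octaves above 220 Hz.
    norm = (value - lo) / (hi - lo) if hi != lo else 0.5
    freq = 220.0 * (2.0 ** (norm * 2.0))
    for n in range(int(SR * NOTE_LEN)):
        sample = 0.3 * math.sin(2 * math.pi * freq * n / SR)
        frames += struct.pack('<h', int(sample * 32767))

with wave.open('sonification.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(bytes(frames))
```

The “season to taste” part is everything this sketch leaves fixed: scale quantization, note durations, timbre, and so on.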
Sonification as a technical effort seems fairly easy and repeatable. What makes it interesting, what “aestheticizes” it, is the selection of the dataset and the treatment of that dataset.
Pitch/frequency seems to be the most common axis to map data onto. What I would like to end up with is a workflow for sonifying data in the quad listening space rather than in pitch space, putting the focus of the work on the spatial domain rather than the pitch/frequency domain. Changes encoded/embodied in the data would be reflected in modulations of basic spatial patterns, rather than in modulations of pitches/frequencies.
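As a rough illustration of what that swap looks like, here is a sketch that reuses the mono example above but maps each data value to an azimuth on a four-speaker ring instead of to a pitch. The speaker layout, the constant-power pairwise panning, and the once-around-the-ring mapping are all my own assumptions for illustration, not anything defined by The QuadTool.

```python
# A sketch of data -> space instead of data -> pitch: each value
# positions a fixed-pitch tone on a quad speaker ring, using
# constant-power panning between the two nearest speakers.
import math
import struct
import wave

SR = 44100
NOTE_LEN = 0.25
FREQ = 440.0
SPEAKERS = [45.0, 135.0, 225.0, 315.0]  # assumed ring angles, degrees

def quad_gains(azimuth):
    """Constant-power pan between the two speakers bracketing azimuth."""
    az = azimuth % 360.0
    gains = [0.0] * 4
    for i in range(4):
        a, b = SPEAKERS[i], SPEAKERS[(i + 1) % 4]
        span = (b - a) % 360.0
        offset = (az - a) % 360.0
        if offset < span:
            f = offset / span
            gains[i] = math.cos(f * math.pi / 2)
            gains[(i + 1) % 4] = math.sin(f * math.pi / 2)
            break
    return gains

data = [3.2, 4.1, 5.0, 4.4, 6.2, 7.9, 6.5, 5.1]  # stand-in time series
lo, hi = min(data), max(data)

frames = bytearray()
for value in data:
    norm = (value - lo) / (hi - lo) if hi != lo else 0.5
    gains = quad_gains(norm * 360.0)  # data sweeps once around the ring
    for n in range(int(SR * NOTE_LEN)):
        s = 0.3 * math.sin(2 * math.pi * FREQ * n / SR)
        for g in gains:  # interleave the four channels per sample
            frames += struct.pack('<h', int(g * s * 32767))

with wave.open('quad_sonification.wav', 'wb') as w:
    w.setnchannels(4)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(bytes(frames))
```

The point of the sketch is only the inversion: pitch is held constant while position carries the data, so modulations in the dataset become movements of a spatial pattern.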
Here are some links with information about a piece of software called Play-splom, received from Chris Sattinger/CrucialFelix: