
Do you want a "Data Sonification Player" for astrophysical data, or for your own data?

Colleagues:

Under the leadership of ASL member Scot Gresham Lancaster, Sharath Chandra Ram and I have just submitted a proposal to the US National Science Foundation:

AstroSounds: Developing a spectrally based multivariate time series data sonification player for Astrophysics

Overview:

We will apply auditory cognition techniques from speech perception research to the listening tasks that astrophysicists would perform with a spectrally based multivariate time series data player.

LISTENING TO DATA (sonification) is a new tool needed for analyzing the ever-growing data cloud. Our work is perfecting the many ways to make better decisions by not just visualizing data, but also listening to it. This has led us to help colleagues in many fields (neuroscience, business, general physics, etc.), but now we are returning our attention to astrophysics and the advantages that integrated audio brings to the data science interfaces that are such an integral part of astrophysical research.

We have developed a new approach to classifying data-to-audio conversion techniques and the specific types of applications they enable, drawing on the advanced methods available to top digital audio experts to realize those conversions. Collaborating with colleagues at UC Berkeley, Stanford, and other institutions, we have been sharing our ideas and approaches to sonic realizations that highlight features blurred or misrepresented in visualizations. We have exposed artifacts occluded in image-only representations of data, and we can simultaneously represent additional dimensions that complex time-varying graphs do not show visually. Data visualization tools have already been successfully augmented in this way, allowing more rapid and accurate decision making in the context of ever-growing data sets.
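As a concrete illustration of carrying several data dimensions in sound at once, below is a minimal parameter-mapping sonification sketch in Python (NumPy and SciPy assumed). The column-to-parameter assignments, frequency range, and file name are hypothetical choices for illustration, not the mappings specified in the proposal.

```python
# Minimal parameter-mapping sonification sketch (illustrative, hypothetical mapping).
# Each column of a multivariate time series drives one audio dimension:
# column 0 -> pitch of an oscillator, column 1 -> its loudness.
import numpy as np
from scipy.io import wavfile

SR = 44100    # audio sample rate (Hz)
DUR = 10.0    # duration of the rendering (s)

def normalize(x):
    """Scale a data column into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def sonify(data, sr=SR, dur=DUR):
    """Render a (frames, 2) time series as audio: col 0 -> pitch, col 1 -> amplitude."""
    n = int(sr * dur)
    # Upsample the (slow) data stream to audio rate by linear interpolation.
    frames = np.linspace(0.0, 1.0, len(data))
    grid = np.linspace(0.0, 1.0, n)
    pitch = np.interp(grid, frames, normalize(data[:, 0]))
    amp = np.interp(grid, frames, normalize(data[:, 1]))
    freq = 220.0 * 2.0 ** (pitch * 3.0)        # map [0,1] onto 220..1760 Hz (3 octaves)
    phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate frequency for a smooth glide
    return (amp * np.sin(phase)).astype(np.float32)

# Example: sonify two synthetic channels (stand-ins for archival data columns).
data = np.column_stack([np.sin(np.linspace(0, 6, 500)),
                        np.linspace(0, 1, 500)])
wavfile.write("sonification.wav", SR, sonify(data))
```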

We continue to create training modules through which researchers learn to listen to, explore, and pass tests on sonified data, interpreting data as information, with ongoing cognitive testing throughout. Working from our existing, tested, and published "fuzzy" taxonomy of classifications and techniques, we will use second-order sonification that systematically remaps information via spectral analysis and auditory-spectrum resynthesis of time series data. Additionally, we will use our machine listening techniques to find spectral similarities in the
overwhelmingly large amounts of data available in existing open archives. Using new techniques of binaural audio augmented reality, we will allow researchers to move through the soundscape created by time-varying multidimensional data sets. This builds on our earlier, cognitively tested work in this area.
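To make the spectral approach concrete, here is a rough Python sketch (NumPy/SciPy assumed) of one plausible reading of second-order sonification: take the FFT of a time series, keep its strongest bins, and resynthesize each bin as an audible sine partial. The bin-to-frequency mapping, the partial count, and the cosine-similarity "machine listening" helper are illustrative assumptions, not the proposal's actual algorithms.

```python
# Sketch of second-order sonification: analyze a time series' spectrum, then
# resynthesize it additively inside the audible band (an assumed reading of
# the method, for illustration only).
import numpy as np
from scipy.io import wavfile

SR = 44100

def spectral_resynthesis(series, n_partials=32, dur=5.0, f_lo=100.0, f_hi=8000.0):
    """FFT the series, keep the strongest bins, and map each bin to an audible
    sine partial: bin position -> log-spaced audible frequency, bin magnitude
    -> partial amplitude."""
    spec = np.fft.rfft(series - np.mean(series))   # remove DC before analysis
    mags = np.abs(spec)
    top = np.argsort(mags)[-n_partials:]           # strongest spectral bins
    ratio = top / max(len(mags) - 1, 1)            # bin position in [0, 1]
    freqs = f_lo * (f_hi / f_lo) ** ratio          # log-spaced audible frequencies
    amps = mags[top] / mags[top].max()
    t = np.arange(int(SR * dur)) / SR
    audio = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
    return (audio / np.max(np.abs(audio))).astype(np.float32)

def spectral_similarity(a, b):
    """A cheap stand-in for 'machine listening': cosine similarity between the
    magnitude spectra of two series (a hypothetical metric, not the proposal's)."""
    fa, fb = np.abs(np.fft.rfft(a)), np.abs(np.fft.rfft(b))
    n = min(len(fa), len(fb))
    fa, fb = fa[:n], fb[:n]
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))

# Example with a synthetic noisy oscillation standing in for archival data.
series = np.sin(np.linspace(0, 40, 4000)) + 0.3 * np.random.randn(4000)
wavfile.write("resynthesis.wav", SR, spectral_resynthesis(series))
```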

Broader Impacts:

The adoption of sound in the smart speaker market (Alexa, Google Home, etc.) and in audio AR (Bose, Onset, etc.) has shown the increasingly dominant place of audio-oriented design. Leading first with astrophysics, we will then scale across all of the areas that are becoming dependent on data science techniques and outcomes. The perception of multidimensional auditory information will lead to new discoveries that would not be made by visual means alone.