ArtSciLab Paper Accepted for Understanding Visual Music 2015, Brazil

An ArtSciLab paper by Andrew Blanton, Connectome Data Dramatisation: The Human Brain as Visual Music, has been accepted for Understanding Visual Music, to be held June 10, 2015, in Brazil.
Below is the abstract:

Connectome Data Dramatisation: The Human Brain as Visual Music
Authors: Andrew Blanton, MFA; Sruthi Ayloo, MS; Micaela Chan, MS; Scot David Gresham-Lancaster, MA, MFA; Roger Malina, PhD; Tim Perkis; Neil Savalia, BA; Maximilian Schich, PhD; Anvit Srivastav, MS; Gagan Wig, PhD
Abstract
We, as a collaboration of scientists and artists, have built a visual and sonic representation of highly connected areas in the human brain. This model was developed not only as a tool for scientific research but also as a tool for art creation. In the process of developing the software, the tool was built to interface with musical instruments for real-time visualization and sonification. Working conceptually with the idea that scientific data can be repurposed for art creation, the Connectome is performed as both a sonic and visual representation of fMRI data, with the model manipulated in real time as a form of multimodal data dramatisation.
Introduction
Partnerships between artists and scientists allow for creative forms of collaboration that can push both scientific and artistic research forward. With the Connectome Data Dramatisation project, our principal interest was the creation of a hybridized tool, one that could work as a scientific instrument as well as an artistic work. We began with a dataset consisting of 441 neural bundles, or nodes, systematically differentiated into 21 areas or systems of interest in the human brain, based on fMRI data collected by one of us (Gagan Wig) as part of the work of the UT Dallas Cognitive Neuroimaging Lab.[1]
[Figure: Area centers coded by system membership]
Our team was able to extract visual and sonic representations of the connections between those areas using custom software. We then developed that representation further in the form of an interactive three-dimensional node-edge graph and a sonification of the 421 highly connected areas of the brain; in the visualization, the strength of a connection is mapped to the width of its edge.
This formed the basis of the representation. With the addition of the ability to activate nodes from external data feeds via Open Sound Control,[2] different nodes could be excited at will, creating a virtual, three-dimensional instrument that can be used for visual and sonic performance. Using four small drums, the visual and sonic representation of connections between areas of the brain can be played in real time: custom software receives the audio signal from each drum and excites specific areas of the brain, and each section of the brain that is played presents a unique visual and sonic representation.
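As an illustration of this pattern, here is a minimal sketch of the OSC-receiving layer in TypeScript, assuming the node-osc npm package; the /drum/N addresses, the node IDs, and the exciteNode helper are our own hypothetical stand-ins, not the project's actual code.

```typescript
// Minimal sketch: excite brain-graph nodes from incoming OSC drum hits.
// Assumes the node-osc npm package; the /drum/N addresses and node IDs
// below are hypothetical stand-ins for the project's actual mapping.
import { Server } from "node-osc";

// Each physical drum is mapped to a set of node IDs in one brain system.
const drumToNodes: Record<string, number[]> = {
  "/drum/1": [12, 13, 47],
  "/drum/2": [101, 102],
  "/drum/3": [205, 206, 207],
  "/drum/4": [330, 331],
};

// Current excitation level per node; the rendering and sonification
// loop reads and decays these values every frame.
const excitation = new Map<number, number>();

function exciteNode(id: number, amplitude: number): void {
  excitation.set(id, (excitation.get(id) ?? 0) + amplitude);
}

const server = new Server(9000, "0.0.0.0", () => {
  console.log("Listening for drum hits on udp://0.0.0.0:9000");
});

server.on("message", (msg: [string, number]) => {
  const [address, amplitude] = msg;
  for (const id of drumToNodes[address] ?? []) {
    exciteNode(id, amplitude);
  }
});
```

In the piece itself the amplitude would come from analysis of each drum's audio signal; the sketch simply assumes the drum interface already sends a normalized amplitude with each hit.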
Historical Perspective
Building on previous explorations in bridging art and science through the development of new technology, we were actively looking to understand how this project is situated within the history of visual music. In looking at the work done at Bell Labs in the 1960s and 70s[3] and at the work of artists such as James Whitney,[4] several questions emerge: what are the components of a successful art and science collaboration? How do separate practitioners collaborate while furthering each of their own research agendas? How is information shared and disseminated after its creation? Phil Morton and Dan Sandin's image processing units[5] also played a role in both the conceptual and the technical development of the work. We also looked at other contemporary artists, including Noisefold,[6] for their techniques of extracting sound from visual information; Ryoji Ikeda,[7] for his visual and sonic representation of data; and Semiconductor,[8] for their blending of art and science, among others working with visual music as a contemporary practice.
Visual music has historically been tied to the development of technology, and this holds true now as much as it has in the past. Current rendering technologies are evolving rapidly within the gaming community, and practitioners of visual music are benefiting greatly from these real-time rendering advancements. Robust community support and the indie gaming movement have provided new tools for interfacing with gaming environments.[9] Two areas that remain underdeveloped in these environments, and where practitioners of visual music can provide insight, are procedural animation and the assimilation of data into these environments. With this project we have begun to build a framework that can provide a series of procedural animations over node-edge graphs as well as interface a gaming environment with a dataset of approximately 77,000 connections, as sketched below. In doing so we have tried to maintain the work as both a piece of art and a scientific instrument.
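To make the data-assimilation point concrete, the sketch below shows one way a connectivity matrix of this size can be reduced to a drawable node-edge graph by thresholding edge weights and mapping weight to edge width; the type names, the threshold value, and the width mapping are our own illustrative assumptions, not the project's actual pipeline.

```typescript
// Sketch: reduce a dense connectivity matrix (on the order of 77,000
// pairwise connections) to a renderable node-edge graph. All names,
// the threshold, and the width mapping are illustrative assumptions.
interface BrainNode {
  id: number;
  system: number;              // one of the 21 systems of interest
  position: [number, number, number];
}

interface Edge {
  source: number;
  target: number;
  weight: number;              // functional connectivity strength
  width: number;               // rendered line width, derived from weight
}

function buildGraph(
  nodes: BrainNode[],
  connectivity: number[][],    // symmetric matrix, values in [0, 1]
  threshold = 0.6,             // keep only highly connected pairs
): Edge[] {
  const edges: Edge[] = [];
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const weight = connectivity[i][j];
      if (weight >= threshold) {
        edges.push({
          source: nodes[i].id,
          target: nodes[j].id,
          weight,
          // Map weight linearly onto a visible line-width range.
          width: 0.5 + 4.5 * (weight - threshold) / (1 - threshold),
        });
      }
    }
  }
  return edges;
}
```

The same edge list can then feed either the renderer, via the width field, or the sonification layer, via the raw weight.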
Future Work
In the process of building this project, we have worked with many technologies to find the right combination of frameworks to allow for extensive flexibility in artistic representation of the dataset. We have worked with Max/MSP Jitter,[10] Unity,[11] Syphon,[12] three.js,[13] node.js,[14] midi.js,[15] CoffeeCollider,[16] and D3.js[17] in an exploration of which technology would serve the representation of this dataset best. Beginning with a three.js representation hosted on a node.js server, we were able to bring in live data via OSC to trigger the model. We ultimately found that building everything in the web browser provided great accessibility for global use of the tool; however, confining the project to the web browser also creates limitations with regard to rendering power and audio synthesis. We have since built a framework that uses the Unity game development environment, specifically for its strengths in real-time rendering, and are working on integrating Pure Data[18] via the Kalimba Unity extension.[19] This will allow us to build a platform that addresses the two primary areas identified above: integrating the dataset into a gaming environment for procedural manipulation, and sonifying and visualizing that dataset.
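For the browser-based variant described above, the live OSC data has to be relayed into the page, since browsers cannot receive UDP directly. A minimal sketch of such a node.js bridge, assuming the node-osc and ws packages, might look like the following; the port numbers and message shape are our own assumptions, not the project's actual code.

```typescript
// Sketch of a node.js bridge: receive OSC over UDP and forward each
// message to browser clients over a WebSocket, where the three.js
// model consumes it. Ports and message shape are assumptions.
import { Server as OscServer } from "node-osc";
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 }); // browsers connect here
const osc = new OscServer(9000, "0.0.0.0");      // performers send OSC here

osc.on("message", (msg) => {
  const [address, ...args] = msg;
  const payload = JSON.stringify({ address, args });
  // Fan the event out to every connected visualization client.
  for (const client of wss.clients) {
    if (client.readyState === client.OPEN) {
      client.send(payload);
    }
  }
});
```

In the browser, an onmessage handler would parse the JSON and excite the corresponding node in the three.js scene.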
Summation of Findings
The creation of the Connectome project has led to interesting further work in collaborations between artists and scientists. Our team began with two fundamental questions: can scientific instruments be used as tools for art creation, and can artists' tools produce scientifically valid results? In pursuing them, we worked to further a dialogue between artists and scientists while creating real value for each party involved. In doing so we have opened up another path of exploration: using game development platforms for data visualization and sonification, and reappropriating these platforms for real-time audiovisual work. By creating a core representation, we were able to build a model that could be manipulated in real time using incoming Open Sound Control data while providing a scientifically accurate representation of the underlying dataset.


[1] Areas of interest in this case were areas of concentration of neurons in the brain, as identified by researchers at the Center for Vital Longevity Cognitive Neuroimaging Lab at the University of Texas at Dallas. http://vitallongevity.utdallas.edu/cnl/ Accessed March 7, 2015.
[2] http://opensoundcontrol.org/ Accessed March 7, 2015.
[3] http://www.ieeeghn.org/wiki/index.php/Archives:Bell_Labs_%26_The_Origins_of_the_Multimedia_Artist Accessed March 7, 2015.
[4] William Moritz on James Whitney's Yantra and Lapis. http://www.centerforvisualmusic.org/WMyantra.htm Accessed March 7, 2015.
[5] Museum of Modern Art. https://www.moma.org/momaorg/shared/pdfs/docs/press_archives/5958/releases/MOMA_1982_0014_14.pdf?2010 Accessed March 7, 2015.
[6] http://noisefold.com/
[7] http://press.web.cern.ch/press-releases/2014/01/japanese-artist-ryoji-ikeda-wins-third-prix-ars-electronica-collide-cern
[8] http://semiconductorfilms.com/
[9] http://pjim.newschool.edu/issues/2011/01/pdfs/ParsonsJournalForInformationMapping_Medler-Ben+Magerko-Brian.pdf
[10] https://cycling74.com/ Accessed March 7, 2015.
[11] http://unity3d.com/5 Accessed March 7, 2015.
[12] http://syphon.v002.info/ Accessed March 7, 2015.
[13] http://threejs.org/ Accessed March 7, 2015.
[14] https://nodejs.org/ Accessed March 7, 2015.
[15] http://mudcu.be/midi-js/ Accessed March 7, 2015.
[16] https://github.com/mohayonao/CoffeeCollider/wiki Accessed March 7, 2015.
[17] http://d3js.org/ Accessed March 7, 2015.
[18] http://puredata.info/ Accessed March 7, 2015.
[19] https://github.com/hagish/kalimba Accessed March 7, 2015.