!Preliminary project status information - status September 2012!
Sonapticon

The human brain consists of a network of billions of interconnected neurons. Although the mechanisms underlying the dynamics of individual neurons are extremely simple, their interconnectedness creates activity patterns of incomparable complexity. Despite over a century of extensive research since Ramón y Cajal identified the neuron as the fundamental computational unit, the basic principles of neuronal computation remain a mystery. For two years now, the artist Tim Otto Roth and the neuromathematician Benjamin Staude have been working on the Sonapticon, bringing this fascinating world of neuronal interaction to life by means of synthetic audio neurons. The Sonapticon, with its unique translation of the electrical communication in biological nervous systems into musical communication using sound waves, provides an unprecedented sensual experience of the dynamical mechanisms that underlie all neuronal processes.

link to the website

September 2012
Finally, the visualization helped to study the characteristics of different neuronal networks. Tim Otto Roth started to compose spatio-temporal patterns by applying various microtonal scales to the space. He experimented with quarter-, sixth- and eighth-tone scales as well as the Indian sruti scale, and assigned clusters to specific areas in the Klangdom. The result was, for instance, an oscillating sound wash. Tim Otto Roth developed a rudimentary choreography starting with a single neuron which was excited and inhibited by three relatively high external tones; in the concert, three piccolo players will take over this part. The number of neurons is then increased step by step: a trio of neurons, then ten, then half of the Klangdom, until finally all loudspeakers are included in the scenario. A final experiment tuned the scales by transposing them into the inaudible ultrasound range.
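For illustration: an equal-tempered quarter-, sixth- or eighth-tone scale divides the octave into 24, 36 or 48 steps respectively. The following sketch is our own illustration of this arithmetic, not the project's actual tuning code; the 22-step sruti version is an equal-division approximation of the traditionally unequally spaced Indian scale.

```python
# Sketch: equal-division microtonal scales (illustration only).
# The 22-step "sruti" version is an equal-tempered approximation;
# traditional sruti intervals are not equally spaced.

def scale_frequencies(base_hz, steps_per_octave, octaves=1):
    """Return the frequencies of an equal division of the octave."""
    n = steps_per_octave * octaves
    return [base_hz * 2 ** (i / steps_per_octave) for i in range(n + 1)]

quarter_tones = scale_frequencies(440.0, 24)   # 1/4-tone scale
sixth_tones   = scale_frequencies(440.0, 36)   # 1/6-tone scale
eighth_tones  = scale_frequencies(440.0, 48)   # 1/8-tone scale
srutis        = scale_frequencies(440.0, 22)   # sruti approximation
```

Transposing such a scale into ultrasound is then just a matter of multiplying the base frequency into the range above roughly 20 kHz.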
August 2012

The Sonapticon was presented in an informal performance on the evening of 1 August 2012 at the Bernstein Center for Computational Neuroscience (BCCN) on the campus of the Charité in Berlin-Mitte. For the first time we introduced the acoustic neurons in an interactive version: the participants turned their laptops into audio neurons with special Python-based software developed by Benjamin Staude, so that the laptops interacted with the other audio neurons, building simple spatio-temporal patterns. Interestingly, the participants started to "identify" with the neurons running on their computers.

May 2012

Holger Stenschke developed two alternatives to the FFT-based pitch detection: a kind of band-pass filter method and a recursive filter method based on the Goertzel algorithm. Both methods tracked specific frequencies very reliably. Whereas the conventional filter method required too much computing power to run filters for 43 neurons simultaneously, the Goertzel method was quite efficient, needing only about 30 percent of the computing power (a minimal sketch of the Goertzel recursion is given below). To use the DFT-based Goertzel method, however, the whole acoustic space needs to be calibrated; due to a lack of personnel resources this method could not be elaborated further.

January 2012

Benjamin Staude developed a connectivity matrix in Python which allows the connections between all the neurons in the Klangdom to be defined graphically (see the sketch below), and Holger Stenschke created a tool to assign specific notes to individual neurons. We can now create propagation patterns in the Klangdom space and clarify local interactions by assigning specific harmonies. Using these tools, basic connectivity patterns were designed and tested. Above all, the system was tested with many different pitches; it even seemed to work with ultrasonic frequencies! But some problems with the pitch tracking also appeared which were difficult to analyze. Here it matters that we are still working with a limited number of 16 microphones of different types; for the next session we should have 43 microphones of the same type in order to work reliably with the system and get reproducible results. Furthermore, the system's reaction to interaction with musical instruments (clarinet, singing saw and cello) was tested. With different playing techniques the system can be activated or stopped, and individual local patterns can be evoked and stabilized, for instance by rhythmically played syncopes combining activating and inhibiting frequencies.
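The Goertzel algorithm mentioned in the May 2012 entry evaluates a single DFT bin with a cheap second-order recursion, which is why it needs far less computing power than a full FFT when each neuron only has to track a handful of target frequencies. The following is a minimal sketch of the textbook algorithm, not Holger Stenschke's actual Max implementation:

```python
import numpy as np

def goertzel_power(samples, sample_rate, target_hz):
    """Return the power of one DFT bin via the Goertzel recursion."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)        # nearest DFT bin
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                             # second-order recursion
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# A neuron listening for 880 Hz in a 20 ms block at 44.1 kHz:
fs = 44100
t = np.arange(int(0.02 * fs)) / fs
block = np.sin(2 * np.pi * 880 * t)
print(goertzel_power(block, fs, 880.0))           # large value: tone present
```

One recursion is run per tracked frequency, so 43 neurons need only 43 of these cheap filters instead of 43 full spectra.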
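The connectivity matrix from the January 2012 entry can be pictured as a 43x43 table of synaptic weights, one row per listening neuron, with positive entries for excitatory and negative entries for inhibitory connections; the note-assignment tool then gives each neuron its own pitch. A hypothetical sketch of the idea (the actual file formats and tools are not documented here):

```python
import numpy as np

N = 43                   # one acoustic neuron per loudspeaker/microphone pair
W = np.zeros((N, N))     # W[post, pre]: synaptic weight

# Hypothetical example pattern: a short excitatory chain with one
# inhibitory side connection.
W[1, 0] = 1.0            # neuron 0 excites neuron 1
W[2, 1] = 1.0            # neuron 1 excites neuron 2
W[2, 5] = -1.0           # neuron 5 inhibits neuron 2

# Note assignment: each neuron speaks (and is listened for) at its own pitch,
# here an arbitrary quarter-tone ladder.
notes_hz = {i: 440.0 * 2 ** (i / 24) for i in range(N)}

# The frequencies neuron 2 has to track are those of its presynaptic partners:
inputs = np.flatnonzero(W[2])
print([notes_hz[j] for j in inputs])
```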
Next steps for the sonaptic session in May 2012:

October 2011

To gain better control over the neuronal activity, a monitor was added in Max/MSP that displays changing parameters such as the membrane voltage and the synaptic conductance in real time.
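The monitored quantities, membrane voltage and synaptic conductance, are the state variables of a leaky integrate-and-fire neuron with conductance-based synapses. The following is a minimal sketch of such a model in a generic textbook form; the parameters and the exact model used in the Sonapticon may differ:

```python
# Sketch: leaky integrate-and-fire neuron with an exponentially
# decaying synaptic conductance (generic textbook model).

dt, tau_m, tau_syn = 1.0, 20.0, 5.0                          # ms
v_rest, v_thresh, v_reset, e_syn = -70.0, -55.0, -70.0, 0.0  # mV
v, g = v_rest, 0.0

def step(input_spike, v, g):
    """Advance the neuron by one time step; return (spiked, v, g)."""
    if input_spike:
        g += 0.1                          # synaptic weight (arbitrary units)
    g -= dt / tau_syn * g                 # conductance decays exponentially
    v += dt / tau_m * (v_rest - v) + dt * g * (e_syn - v)
    if v >= v_thresh:                     # threshold crossing: fire and reset
        return True, v_reset, g
    return False, v, g

for t in range(100):                      # drive with a regular input train
    spiked, v, g = step(t % 10 == 0, v, g)
    if spiked:
        print(f"spike at t = {t} ms, g = {g:.3f}")
```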
Currently one can choose between the following scales:

Jens Barth developed a first prototype for I-devices as acoustic neurons. Benjamin Staude set up the Python-based network server to communicate the connectivity parameters to the mobile devices via OSC. The devices can thus be embedded in the Klangdom network, or independent mobile subnetworks can be created. Due to software restrictions, each mobile device can be connected to at most three other neurons, because only three frequencies can be tracked in parallel. Tests on iPhones and iPads showed that deeper tones played by the mobile devices are difficult for the other devices to track. For better control, basic neuron parameters can also be monitored on the display. Furthermore, Jens Barth analyzed the reliability of the tracking routine and plotted statistics on it.
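Sending connectivity parameters to a mobile device over OSC might look like the following sketch. It uses the python-osc package, which postdates the project's own server code, and the OSC address and payload layout are our own invention:

```python
# Sketch: sending connectivity parameters to a mobile audio neuron via OSC.
# Requires the python-osc package (pip install python-osc); the address
# "/neuron/inputs" and the (frequency, weight) payload are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.42", 9000)   # address of the mobile device

# At most three presynaptic partners, since the device can only track
# three frequencies in parallel.
connections = [(440.0, 1.0), (466.16, 1.0), (493.88, -1.0)]
assert len(connections) <= 3

for freq, weight in connections:
    client.send_message("/neuron/inputs", [freq, weight])
```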
Finally, Holger Stenschke made an interview with Tim Otto Roth explaining the basics of the project.

August 2011
Jens Barth developed a first demo for I-devices.
May/June 2011

Breakthrough: after one year of hard work, we got the system spiking acoustically! The sudden breakthrough on Friday 2 June was due to a complete reconceptualization of the whole network. The crucial key was parallelization: instead of controlling all 43 acoustic neurons (that is, the loudspeaker and microphone combinations) with one complex piece of software in Max, we modularized the system, so that every acoustic neuron is now controlled by an individual Max patch. These patches can even be distributed with ease across different computers. We only needed to build a server in Python which initializes the individual patches via OSC, also sending the connections, that is, the frequencies each neuron patch listens to. The neuron model is embedded as a Python script into the Max patch using Jython. Above all, the sound analysis in the neuron patches is now handled by an FFT instead of filter banks, which allows the sound information to be processed every 20 ms (a sketch of such block-wise analysis follows below). After four days of hard work we got the system working and started the first primitive tests: the system reacted to rhythmic impulses, but also distinguished a continuously pitched sine wave generated on an iPhone. Watch a clip with some preliminary results:

The next steps are:

December 2010
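The block-wise FFT analysis described in the May/June 2011 entry above can be illustrated with a short sketch. This is our own Python illustration (the actual analysis runs inside the Max patches), and the target frequencies and threshold are hypothetical:

```python
import numpy as np

fs = 44100
frame_len = int(0.02 * fs)            # 20 ms analysis block
window = np.hanning(frame_len)

def detect(block, targets_hz, threshold):
    """Return the target frequencies whose FFT magnitude exceeds threshold."""
    spectrum = np.abs(np.fft.rfft(block * window))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    bins = [np.argmin(np.abs(freqs - f)) for f in targets_hz]
    return [f for f, b in zip(targets_hz, bins) if spectrum[b] > threshold]

# Example: a 20 ms block containing an 880 Hz tone
t = np.arange(frame_len) / fs
block = np.sin(2 * np.pi * 880 * t)
print(detect(block, [440.0, 880.0], threshold=50.0))   # -> [880.0]
```

Note that a 20 ms block gives a frequency resolution of 50 Hz, which constrains how closely the neurons' pitches can be spaced.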
July 2010
We started testing the system with a network of 10 neurons and 10 interneurons. To our surprise this already consumed considerable computing power, so that firing intervals lay between 10 and 60 ms. Here are the first recordings of the tests in the Klangdom with real-time audio processing:
March 2010

In March we finished programming the calibration process that adapts the microphones to the environment. Above all, we linked the sound platform based on Max/MSP to the neuronal model based on Python, running on a separate computer. The data are exchanged between the two platforms via OSC (Open Sound Control).
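In the simplest case, calibrating the microphones to the environment means measuring each microphone's response to a common reference signal and deriving a per-channel gain correction. The following is a hypothetical sketch of that idea; the actual calibration procedure in Max/MSP is not documented here:

```python
import numpy as np

def calibration_gains(recordings, reference_rms=0.1):
    """Per-microphone gain factors from recordings of one reference tone.

    recordings: dict mapping microphone id -> numpy array of samples
    captured while the same reference tone was played.
    """
    gains = {}
    for mic_id, samples in recordings.items():
        rms = np.sqrt(np.mean(samples ** 2))
        gains[mic_id] = reference_rms / rms if rms > 0 else 1.0
    return gains

# Hypothetical usage: a sensitive and a weak microphone
recs = {0: 0.2 * np.random.randn(44100), 1: 0.05 * np.random.randn(44100)}
print(calibration_gains(recs))   # mic 1 gets roughly a 4x larger gain
```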