!Preliminary project status information - status September 2012!

Sonapticon

The human brain consists of a network of billions of interconnected neurons. Although the mechanisms underlying the dynamics of individual neurons are extremely simple, their interconnectedness creates activity patterns of incomparable complexity. Despite over a century of extensive research since the neuron was identified as the fundamental computational unit by Ramón y Cajal, the basic principles of neuronal computation remain a mystery. For two years now the artist Tim Otto Roth and the neuro-mathematician Benjamin Staude have been working on the Sonapticon, bringing this fascinating world of neuronal interaction to life by means of synthetic audio neurons. The Sonapticon, with its unique translation of the electrical communication in biological nervous systems into musical communication using sound waves, provides an unprecedented sensual experience of the dynamical mechanisms that underlie all neuronal processes.

link to the website



September 2012

Tim Otto Roth developed a visualization displaying the activity of each neuron in the Klangdom. Each neuron is represented by a circle whose changing diameter reflects the current membrane potential of the corresponding acoustic neuron.
The visualization was finally embedded by Holger Stenschke and Benjamin Staude into the framework, receiving real-time data at update intervals down to 50 ms. In the concert in November the visualization will be projected onto the floor in the centre of the Klangdom.
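
A minimal sketch of such a display (not the production visualization, which is embedded in the concert framework): each neuron becomes a marker whose size follows the incoming membrane potential. The speaker positions and potential values below are placeholder assumptions.

```python
# Minimal sketch: one circle per acoustic neuron, diameter following the
# membrane potential. Positions and potentials are placeholders.
import numpy as np
import matplotlib.pyplot as plt

positions = np.random.rand(43, 2) * 10.0   # hypothetical floor layout of 43 speakers
potentials = np.random.rand(43)            # hypothetical membrane potentials, 0..1

fig, ax = plt.subplots()
circles = ax.scatter(positions[:, 0], positions[:, 1],
                     s=50 + 500 * potentials,      # marker area tracks the potential
                     facecolors='none', edgecolors='black')
ax.set_aspect('equal')

def update(new_potentials):
    """Call on every incoming data frame (e.g. every 50 ms)."""
    circles.set_sizes(50 + 500 * np.asarray(new_potentials))
    fig.canvas.draw_idle()

plt.show()
```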

Finally, the visualization helped to study the characteristics of different neuronal networks. Tim Otto Roth started to compose spatio-temporal patterns, applying various microtonal scales to the space. He experimented with 1/4-, 1/6- and 1/8-tone scales as well as the Indian Shruti scale and assigned clusters to specific areas in the Klangdom. The result was, for instance, an oscillating sound wash. Tim Otto Roth developed a rudimentary choreography starting with a single neuron which was excited and inhibited by three external, relatively high tones; in the concert, three piccolo players will take over this part. The number of neurons is then successively increased: a trio of neurons, then ten, half of the Klangdom and finally all loudspeakers are included in the scenario. The final experiment was the tuning of the scales, transposing them into inaudible ultrasound.
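
As an illustration of how such microtonal scales can be generated, here is a small Python sketch; the base frequency, octave range and neuron assignment are illustrative assumptions, not the values used in the Klangdom.

```python
# Sketch: microtonal scales as equal divisions of the octave.
# A quarter-tone scale uses 24 steps per octave, a sixth-tone scale 36,
# an eighth-tone scale 48.
def equal_tempered_scale(base_hz, divisions_per_octave, n_steps):
    return [base_hz * 2 ** (k / divisions_per_octave) for k in range(n_steps)]

quarter_tones = equal_tempered_scale(440.0, 24, 24)   # 1/4-tone scale
sixth_tones   = equal_tempered_scale(440.0, 36, 36)   # 1/6-tone scale
eighth_tones  = equal_tempered_scale(440.0, 48, 48)   # 1/8-tone scale

# One pitch per neuron, e.g. by index within a cluster of loudspeakers:
pitch_of_neuron = {i: quarter_tones[i % len(quarter_tones)] for i in range(43)}
```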

walk through the Klangdom (1:57 min., OGG)

eighth-tone dynamics with changing update rates using around 20 neurons (59 sec., OGG)

eighth-tone dynamics with all neurons (1:04 min., OGG)

"screwing" pattern with relatively deep tones (21 secs., OGG)

pattern with Indian Shruti scale (18 secs., OGG)

spatio-temporal pattern with high tones up to ultrasound (23 secs., OGG)

spatio-temporal pattern mainly in ultrasound (23 secs., OGG)


Sound spectrum of the acoustic spike pattern, mainly in the inaudible ultrasound

August 2012

The Sonapticon was presented in an informal performance on the evening of 1 August 2012 at the Bernstein Center for Computational Neuroscience (BCCN) on the campus of the Charité in Berlin Mitte. For the first time we introduced the acoustic neurons in an interactive version: the participants turned their laptops into audio neurons using special Python-based software developed by Benjamin Staude, so the laptops interacted with the other audio neurons, building simple spatio-temporal patterns. Interestingly, the participants started to "identify" with the neurons running on their computers.


May 2012

Holger Stenschke developed two alternatives to the FFT-based pitch detection: a band-pass filter method and a more recent filter method based on the Goertzel algorithm. Both methods tracked specific frequencies very reliably. Whereas the conventional filter method required too much computing power to run filters for 43 neurons simultaneously, the Goertzel method performed well, requiring only about 30 percent of the computing power. To use the DFT-based Goertzel method, however, the whole acoustic space needs to be calibrated. Due to a lack of personnel resources this method could not be elaborated further.
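
For reference, the Goertzel algorithm evaluates a single DFT bin, which is why it is so much cheaper than a full FFT when each neuron only listens to a handful of pitches. The sketch below is a generic textbook formulation with an illustrative sample rate and block length, not Holger Stenschke's Max implementation.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Squared magnitude of the DFT bin closest to target_hz (Goertzel algorithm)."""
    n = len(samples)
    k = int(round(n * target_hz / sample_rate))   # nearest bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Example: a 20 ms block at 44.1 kHz containing a 1 kHz sine
block = [math.sin(2 * math.pi * 1000 * t / 44100) for t in range(882)]
print(goertzel_power(block, 44100, 1000.0) > goertzel_power(block, 44100, 3000.0))  # True
```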


January 2012

Benjamin Staude developed a connectivity matrix in Python which allows the connections between all the neurons in the Klangdom to be defined graphically. Holger Stenschke created a tool to assign specific notes to individual neurons. We can now create propagation patterns in the Klangdom space and clarify local interactions by assigning specific harmonies.
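
A minimal sketch of such a connectivity matrix, assuming the convention that an entry w[i, j] > 0 means neuron j excites neuron i and w[i, j] < 0 means it inhibits it; the ring structure corresponds to the "circular movement" pattern listed below, while the real matrices were drawn in the graphical tool.

```python
import numpy as np

N = 43                      # number of acoustic neurons in the Klangdom
w = np.zeros((N, N))

# circular movement: each neuron excites its clockwise neighbour ...
for j in range(N):
    w[(j + 1) % N, j] = 1.0
# ... and mildly inhibits the neuron two steps behind it
for j in range(N):
    w[(j - 2) % N, j] = -0.5

# per-neuron input lists, as a neuron patch would receive them
excitatory_inputs = [np.flatnonzero(w[i] > 0).tolist() for i in range(N)]
inhibitory_inputs = [np.flatnonzero(w[i] < 0).tolist() for i in range(N)]
```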

Using this tool basic connectivity patterns were designed and tested:
- a circular movement
- a propagation wave
- a spiral in combination with a chromatic scale [worked perfectly]
- local subnetworks (interconnected and not connected) [worked perfectly]
- A-type Turing network (random Boolean network with always two inputs) [worked perfectly]

In addition, the system was tested with all different pitches. It even seemed to work with ultrasonic frequencies! However, some problems appeared with the pitch tracking that were difficult to analyze. One contributing factor is that we are still working with a limited number of 16 microphones of different types. For the next session we should have 43 microphones of the same type in order to work reliably with the system and to get reproducible results.

Moreover, the reaction of the system to interaction with musical instruments (clarinet, singing saw and cello) was tested. With different techniques the system can be activated or stopped. Individual local patterns can be evoked and stabilized, for instance, by a rhythmic play of syncopations combining activating and inhibiting frequencies.


Connectivity matrix in Python (right window) with a preview window on the left showing the connections of the loudspeakers in the Klangdom (blue = inhibitory; red = excitatory). The matrix shows four subnetworks which create an activity jumping from corner to corner in the Klangdom.

Next steps for sonaptic session in May 2012:
- development of a tracking test routine.
- comparison of the currently used pitch tracking algorithm with other methods.
- embedding the visualization.
- final development of the choreography.


October 2011

To get better control over the neuronal activity, a monitor was added in MaxMsp to display changing parameters such as the membrane voltage and the synaptic conductance in real time.


Monitor for a single neuron in MaxMsp showing the activity of the neuron. In the green and blue windows, different scales and the starting tone can be assigned to the excitatory (green) and inhibitory (blue) neurons.

Currently one can choose between the following scales:
- chromatic
- major
- minor
- whole-tone
- pentatonic
There is an extra function to shift the pitch of the inhibitory tones by about a quarter tone.
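
A possible reading of this scale logic in Python, assuming pitches are handled as semitone steps above a starting tone; the scale tables and the quarter-tone offset expressed as +0.5 semitones are illustrative, not taken from the Max patch.

```python
SCALES = {
    "chromatic":  [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor":      [0, 2, 3, 5, 7, 8, 10],
    "whole-tone": [0, 2, 4, 6, 8, 10],
    "pentatonic": [0, 2, 4, 7, 9],
}

def neuron_pitch_hz(index, scale_name, start_tone=60, inhibitory=False):
    """Map a neuron index to a frequency; inhibitory tones get a quarter-tone shift."""
    steps = SCALES[scale_name]
    degree = steps[index % len(steps)] + 12 * (index // len(steps))
    pitch = start_tone + degree
    if inhibitory:
        pitch += 0.5                          # quarter-tone shift for inhibitory tones
    return 440.0 * 2 ** ((pitch - 69) / 12)   # MIDI-style pitch number to Hz
```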

Jens Barth developed a first prototype for iOS devices as acoustic neurons. Benjamin Staude adapted the Python-based network server to communicate the connectivity parameters to the mobile devices via OSC, so the devices can be embedded in the Klangdom network, or independent mobile subnetworks can be created. Due to software restrictions the mobile devices can be connected to at most three other neurons, because only three frequencies can be tracked in parallel. Tests on iPhones and iPads showed that deeper tones played by the mobile devices are difficult for the other devices to track.

For better control, basic neuron parameters can also be monitored on the display. Furthermore, Jens Barth analyzed the reliability of the tracking routine and plotted statistics:


The fiddle tracking routine seems to fail at certain frequencies due to octave errors. Click the pictures to see the tracking statistics in detail.
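
A simple plausibility check against such octave errors could look as follows; the tolerance and the folding to ±1 octave are assumptions for illustration, not part of the actual tracking routine.

```python
def fold_octave_error(tracked_hz, expected_hz, tolerance=0.03):
    """If the tracked pitch is close to half or double the expected pitch, fold it back."""
    for factor in (1.0, 2.0, 0.5):
        candidate = tracked_hz * factor
        if abs(candidate - expected_hz) / expected_hz < tolerance:
            return candidate
    return tracked_hz          # no plausible octave correction found

print(fold_octave_error(880.0, 440.0))   # 440.0 -- the octave error is folded back
```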

Finally, Holger Stenschke recorded an interview with Tim Otto Roth explaining the basics of the project:


August 2011


MaxMsp environment showing the improved tracking routine by Holger Stenschke.

Jens Barth developed a first demo for iOS devices.
The aim is to create an app with two basic qualities:
- displaying the activity of the neurons live
- embedding the mobile devices as acoustic neurons in the Klangdom network


The Klangdom in the background with its spatial representation on an iPhone, which rotates with the changing orientation of the mobile device.


May/June 2011

Breakthrough: after one year of hard work, we got the system spiking acoustically!

The sudden breakthrough on Friday 2 June was due to a complete reconceptualization of the whole network. The crucial key was parallelization: instead of controlling all 43 acoustic neurons (that is, the loudspeaker and microphone combinations) with one complex piece of software in Max, we modularized the system, so that every acoustic neuron is now controlled by an individual Max patch. These patches can even be distributed easily across different computers. We only needed to build a server in Python which initializes the individual patches via OSC and also sends the connections, i.e. the frequencies each neuron patch listens to. The neuron model is embedded into the Max patch as a Python script using jython.
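
A minimal sketch of such an initialization server, here using the python-osc package; the OSC addresses, port numbers and connectivity values are assumptions for illustration, and the actual server may use a different OSC library.

```python
from pythonosc.udp_client import SimpleUDPClient

# hypothetical: one endpoint per neuron patch, possibly spread over several machines
patch_endpoints = [("127.0.0.1", 9000 + i) for i in range(43)]

# hypothetical connectivity: for each neuron, the frequencies its patch listens to
listen_frequencies = {i: [440.0, 660.0, 880.0] for i in range(43)}

for i, (host, port) in enumerate(patch_endpoints):
    client = SimpleUDPClient(host, port)
    client.send_message("/neuron/id", i)                          # tell the patch who it is
    client.send_message("/neuron/listen", listen_frequencies[i])  # its input frequencies
```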

Moreover, the sound analysis in the neuron patches is now handled by an FFT instead of filter banks. This allows the sound information to be processed every 20 ms. After four days of hard work we got the system working and started the first primitive tests: the system reacted to rhythmic impulses, but also distinguished a continuously pitched sine wave generated on an iPhone.
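
The analysis step can be pictured roughly like this: for every 20 ms frame of microphone input, read the spectral magnitude at the frequencies the neuron listens to. Sample rate, window and test signal are illustrative; the real analysis runs inside the Max patches.

```python
import numpy as np

SAMPLE_RATE = 44100
BLOCK = int(0.020 * SAMPLE_RATE)          # 20 ms of audio per analysis frame

def bin_magnitudes(block, frequencies):
    """Spectral magnitude at the given frequencies for one windowed frame."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / SAMPLE_RATE)
    return [spectrum[np.argmin(np.abs(freqs - f))] for f in frequencies]

# a neuron listening to two excitatory tones and one inhibitory tone
t = np.arange(BLOCK) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 660.0 * t)     # test signal: a 660 Hz sine
print(bin_magnitudes(frame, [440.0, 660.0, 880.0]))
```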

Finally, we had conversations with programmers and electronics specialists, discussing how to enhance the system with custom electronics based on analogue chips for analyzing and filtering the sound information, or how to let iPhones interact with the Klangdom.

Watch a clip with some preliminary results:

The next steps are:
- Testing with different connectivity maps and fine-tuning of the parameters in the neuron model.
- Minimizing the current 20 ms time frame. Here a Goertzel algorithm might help to optimize the performance.
- Interaction experiments using diverse musical instruments (string, woodwind and percussion instruments).


December 2010

16 microphones were distributed over the whole space of the Klangdom (19 m x 13.50 m) and associated with 2 or 3 loudspeakers each. Holger Stenschke finished the connection interface in MaxMsp. With an average connectivity of 5 neurons, it turned out that processing the resulting 210 filter banks brought our control computer (a quad-core Mac) to its performance limits. So we need to continue our work either with two computers in parallel or with an even faster machine for the digital sound processing.
Benjamin Staude enhanced the Python code to control directly the LED lights mounted on the loudspeakers, so the activity can be shown by a red (interneuron) or blue (neuron) light. In addition, he worked on the implementation of a second neuron model representing not single spike activity but the firing rate of a neuron (a Lotka–Volterra system). In the simulation this rate model produces a kind of "Klangteppich" of sounds continuously changing their amplitude.
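
A bare-bones sketch of such a Lotka–Volterra rate model, with one excitatory and one inhibitory rate variable integrated by a simple Euler scheme; the parameter values are illustrative, not the ones used in the simulation.

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 0.4, 1.2, 0.3    # illustrative coupling constants
dt, steps = 0.001, 20000                          # Euler integration

rates = np.zeros((steps, 2))
rates[0] = [2.0, 1.0]                             # initial excitatory / inhibitory rates
for t in range(1, steps):
    x, y = rates[t - 1]
    dx = x * (alpha - beta * y)                   # excitation grows, damped by inhibition
    dy = y * (delta * x - gamma)                  # inhibition is driven by excitation
    rates[t] = [x + dt * dx, y + dt * dy]

# the slowly waxing and waning rates would modulate the amplitudes of the
# continuously sounding "Klangteppich"
amplitudes = rates / rates.max()
```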


Visual scenario in the Klangdom: Benjamin Staude implemented a function to control the LED lights assigned to the loudspeakers in the Klangdom directly from the Python neuron module. They light up when the loudspeakers play a sound. Click the picture to see a video impression.


July 2010

Holger Stenschke continued working on the connection interface in MaxMsp, so the neural connections can now be defined via a connection map (image on the left). Benjamin Staude programmed an external MIDI control board in Python, so we can now change the parameters of the neuron model on the fly.
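
A sketch of how an external MIDI controller can be mapped to model parameters, here with the mido package; the controller numbers and parameter names are illustrative assumptions.

```python
import mido

CC_TO_PARAM = {1: "threshold", 2: "leak", 3: "synaptic_weight"}   # hypothetical mapping
params = {"threshold": 1.0, "leak": 0.1, "synaptic_weight": 0.5}

with mido.open_input() as port:                    # default MIDI input port
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_PARAM:
            name = CC_TO_PARAM[msg.control]
            params[name] = msg.value / 127.0       # normalise 0..127 to 0..1
            print(name, params[name])              # parameter changed on the fly
```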

We started testing with a system of 10 neurons and 10 interneurons. To our surprise this already requires considerable computing power, resulting in firing intervals between 10 and 60 ms.

Here are the first records of the tests in the Klangdom with realtime audio processing:

test 1 (60 secs., OGG)

test 2 (60 secs., OGG)


 

March 2010

In March we finished programming the calibration processes that adapt the microphones to the environment. In addition, we linked the sound platform based on MaxMsp with the neuron model based on Python, running on a separate computer. The data are exchanged between the two platforms via OSC (Open Sound Control).
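
Schematically, the Python side of this link can be pictured as an OSC server that receives amplitude messages from MaxMsp and sends spike messages back; the sketch below uses the python-osc package, and the address patterns, port numbers and threshold are assumptions, not the project's actual protocol.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

to_max = SimpleUDPClient("127.0.0.1", 7400)        # send spike events back to MaxMsp

def on_amplitude(address, neuron_id, amplitude):
    # MaxMsp reports the filtered microphone amplitude for one neuron; the
    # neuron model integrates this input and may emit a spike in response.
    if amplitude > 0.8:                            # placeholder threshold
        to_max.send_message("/neuron/spike", neuron_id)

dispatcher = Dispatcher()
dispatcher.map("/neuron/amplitude", on_amplitude)
BlockingOSCUDPServer(("0.0.0.0", 7500), dispatcher).serve_forever()
```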


Holger Stenschke and Benjamin Staude discussing the OSC communication between MaxMsp and the Python-based neuron model.



January 2010

In January 2010 the hot phase of the spike music project began, translating the loudspeakers of the Klangdom at ZKM Karlsruhe into a system of interacting acoustic neurons.


Dr. Benjamin Staude from the Bernstein Center for Computational Neuroscience in Freiburg developed the neuron model for the project. The model is based on a Python script incorporating recent theories of neural networks (Destexhe & Izhikevich). The model is also used to simulate in advance how a neural network might sound in the Klangdom at ZKM Karlsruhe, including the decrease of amplitudes in the spatial situation.
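
To give an idea of the kind of model involved, here is a textbook Izhikevich point neuron integrated with a simple Euler scheme; the parameters are the standard "regular spiking" values and the constant input current is illustrative, so this is a sketch rather than the project's actual script.

```python
a, b, c, d = 0.02, 0.2, -65.0, 8.0       # standard "regular spiking" parameters
dt, steps = 0.5, 2000                    # 0.5 ms steps, one simulated second

v, u = -65.0, b * -65.0                  # membrane potential and recovery variable
spike_times = []
for step in range(steps):
    I = 10.0                             # constant input current (illustrative)
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike: record the time and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(len(spike_times), "spikes in one simulated second")
```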

Here is a recent simulation of the interaction of 10 neurons and 10 interneurons according to a model by Destexhe, which reveals a continuously changing interaction pattern:
60 secs. = 60,000 interaction cycles (!), 20 channels, 1360 Kb (alternative quicktime: )


Holger Stenschke at ZKM Karlsruhe started programming the filter bank in MaxMsp. The filter bank works like the synapses of a neuron, interpreting the amplitudes registered by the microphones.


Project synapsis (status 2009):

Special thanks to Ludger Brümmer, head of the Institute for Music & Acoustics at ZKM, for the invitation to work in the Klangdom!