Deutsche Version

    "Communication in the nervous system does not work because a pulse crawls along an axon. It would crawl into all distortions and excite thousands of nerves or muscle fibers. Precisely because it does exactly that, communication can only function via wave interference, via interference images (interference projections). So anyone who thinks they can do neuroresearch without understanding interference networks is wrong. "

Properties of interference integrals at a glance

Space-time projections and interference patterns
between connected wave spaces

Gerd Heinz

This page gives an overview of interference integrals and interference networks*. For more detailed research, please see the lists of publications or the historical pages. Interference models for the nervous system can be found under biomodels.


1. Motivation

When a nerve cell is stimulated, it generally responds with a brief impulse. If it is stimulated more strongly, it pulses faster; the pulse frequency increases. The pulse amplitude always remains constant: excitation is coded in the pulse rate. We find this basic form of signaling in the nervous system of all known species. Adrian received the Nobel Prize in 1932 for his fundamental contributions to research on the mechanism of the nerve impulse. Hodgkin, Huxley and Eccles followed with the 1963 Nobel Prize in Medicine for detailed investigations of nervous pulse parameters. Erlanger and Gasser systematically examined the conduction velocities of various nerves. John Eccles noticed a relationship between the fiber diameter of a nerve and its conduction velocity. He wrote:

The author noticed in 1992 that a short pulse duration t together with a slow conduction velocity v generates geometrically very short pulse lengths s. The geometric pulse length s results from the conduction velocity v and the pulse duration t as

s = v · t

For Eccles's example, with a pulse duration of a tenth of a millisecond (t = 0.1 ms), the pulse lengths according to s = vt vary from 12 mm (for 20 µm fiber diameter and 120 m/s) to 0.12 mm (for 0.2 µm and 1.2 m/s). We find that these are pulse lengths that would do credit to a radar! But how do you link such extremely short pulses?
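The arithmetic above can be checked in a few lines of Python (a minimal sketch; the function name and the chosen velocities are merely illustrative values from the range discussed):

```python
# Geometric pulse length s = v * t for two conduction velocities.
def pulse_length(v_m_per_s, t_s):
    """Return the geometric pulse length in metres."""
    return v_m_per_s * t_s

t = 1e-4  # pulse duration: 0.1 ms
for v in (120.0, 1.2):  # fast and slow fibre conduction velocities, m/s
    s = pulse_length(v, t)
    print(f"v = {v:6.1f} m/s  ->  s = {s * 1000:.2f} mm")
```

Running this reproduces the two extremes of the example: 12 mm for the fast fibre and 0.12 mm for the slow one.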

While a geometric pulse length of twelve millimeters is more appropriate for muscle control, with one hundred and twenty micrometers we almost reach the columnar grid of the cortex. For more information see [NI93] or [IWK1994].

There is no immediate action at a distance in the nervous system. All information moves ionically and, compared with a computer, extremely slowly. Information spreads as a pulse in a spherical shape, like a wave. Conduction speeds in the nervous system are anything but homogeneous - the spherical propagation turns into a wave propagation that most likely takes the form of a chaotic explosion cloud. The researcher's imagination is further challenged by the fact that the wave particles of the explosion cloud do not necessarily spread away from the center - the pulses flow back and forth on multiply curved nerve pathways.

If information is to be processed, we need input signals which, in accordance with Hebb's rule, act on the place of processing at the same time.

If one sorts practically occurring conduction speeds v and associated pulse durations t according to their product, the geometric pulse width s = vt (see [IWK1994]), the attentive observer notices a correlation between the geometric pulse width and the functional grid: the geometric pulse width in muscles is larger than in the cortex.

If the pulse-shaped input signals are geometrically only a few tenths of a millimeter long, information processing can only take place at very specific locations, namely where pulses meet. Since this location changes as soon as even a single pulse arrives earlier or later, processing at a fixed location is only possible if pulse patterns occur coherently (i.e. with an unchanged time difference). Since sensor and actuator fibers ultimately flow into a nerve network at discrete locations, the question arises how the informatics of the network must be designed to ensure that the many pulses belonging to one task arrive at exactly the same time at exactly this location, for example at an actuator connection for a specific muscle. So what does the demand for 'local coherence' mean for the computer science of the networks?

To clarify the extent of the problem to be solved: for informatic reasons it must be assumed that only some neurons are of the OR type, while there must also be neurons of the AND type (better: with a higher threshold). The highest recorded number of synapses of a neuron is approximately 80,000 ([Eccles], p.134). A pyramidal neuron of the cortex, for example, may have 10,000 synapses. The threshold value for excitation should be able to vary between 0% and 90% (fuzzy OR to AND behavior). For an AND-like excitation of such a neuron, 9,000 synapses then have to be coherently excited: up to 9,000 tiny pulse peaks have to touch the right synapses of the neuron at exactly the same time so that it can be excited. The question immediately arises: how can such extreme precision be achieved in a network with extreme absolute parameter fluctuations? How can it be achieved in a network in which forty to one hundred billion neurons interact flexibly with one another?
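The fuzzy OR-to-AND idea can be sketched as a toy threshold rule (a hypothetical model for illustration only, not the author's simulator): a neuron fires when the coincidently active fraction of its synapses reaches the threshold.

```python
# Toy threshold neuron: threshold_fraction near 0 behaves OR-like,
# near 0.9 it behaves AND-like (almost all inputs must coincide).
def fires(active_inputs, total_synapses, threshold_fraction):
    """True if the coincidently active fraction reaches the threshold."""
    return active_inputs / total_synapses >= threshold_fraction

# AND-like neuron with 10,000 synapses and a 90% threshold:
print(fires(9000, 10000, 0.9))   # True: 9000 coincident pulses suffice
print(fires(8999, 10000, 0.9))   # False: one missing pulse prevents firing
```

The point of the sketch is the brittleness: at a 90% threshold, a single pulse arriving late already decides whether the neuron fires.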

To say it another way: if pulses flowing in the nervous system are geometrically short compared to the addressed grid, information is only processed where pulses coherently and positively interfere. Thus temporal patterns become spatial codes. A code is no longer processed by a fixed neuron X, Y or Z; instead, each temporal pattern addresses different neurons. Information is processed where twins of a pulse meet again at the same time, where they (positively) interfere with one another. This creates an expanded concept of waves, and a wave model of the widely branching neuron emerges. The resulting computer science has nothing to do with our digital circuit technology or Boolean algebra.

Wave theories were previously located in the spectral range (Fourier domain). However, since pulse patterns do not lend themselves to spectral transformations (Fourier), a wave theory in the time domain had to be developed that includes discrete and inhomogeneous spaces of a neural type. In 1993 the most important features were outlined in the manuscript "Neural Interferences" [NI93]. Almost all of the ideas discussed later go back to this manuscript. Often they are presented there too briefly, too weakly, or in a form too difficult to understand.

Back to the coherence of interfering pulses. Coherent pulse interferences are conceivable in the form of mirror-inverted images of a self-interferential type or spectral maps of a cross-interferential type. Where a pulse wave interferes with itself, it generates a mirror-inverted image, a projection. A spectral mapping is created where a pulse interferes with its (coherent) predecessors or successors. Seeing and hearing merge with one another. A new, previously unknown type of communication and information processing emerges. At the end of 1996 the term 'Wave Interference Networks' was created for such delaying, pulsating networks.

The author's attempt to record nervous pulses using a data recorder and software and to calculate their nervous projections was only partially successful, not least for commercial reasons, see [BIONET96].

In contrast, microphones connected to the data recorder produced the first acoustic images. Acoustic experiments with the interference simulator PSI-Tools (Parallel and Serial Interference Tools) yielded the world's first (standing, passive) sound images and sound films - acoustic photography and cinematography. Under the term 'acoustic camera' they became the first application of such simplest interference networks.

The theory of interference networks expands the physical wave theories in two directions: on the one hand, the wave concept is extended to inhomogeneous and discrete delay spaces, namely to nerve networks. On the other hand, pulse patterns force us to leave the spectral range and to begin a wave theory in the time domain.

Only because the wave theory in the time domain was initially easier and better manageable than competing wave theories in the frequency domain was the Acoustic Camera technology the first of its kind to be launched worldwide at the beginning of the new century. As early as 1994, first simulations with PSI-Tools confirmed an algorithmic core (interference reconstruction) that completely solves the problem of over-determination: in contrast to the off-axis blurring of optical lens systems, the acoustic camera works with any number of channels and an arbitrarily wide, sharp image field, see [DAGA07].

With the first ideas about the interference approach and waves on interconnects in 1992, it was not yet certain whether the theory of interference networks would actually be applicable to nerve networks. My formulations in all publications so far have been correspondingly cautious. Only in the course of many publications and discussions did it become more and more transparent that the interference approach is not of a hypothetical but of a real, systematic nature. On the one hand, the many 'coincidental coincidences' of the discussed network structures with known research results or behavioral patterns speak for it. On the other hand, the theoretical treatise can be structured in such a way that its parts are systematic and comprehensible.

If we want to evaluate the interference approach objectively, Eccles's findings on synaptic transmission are the focus. John Eccles initially advocated a (delay-free) electrical transmission of the pulse at the synapse, but then demonstrated a slow, predominantly chemical transmission in higher organisms. Eric Kandel explored the details. The chemical transmission, in turn, can also have an integrating effect, for example at a neuromuscular endplate (see Eccles: Human Brain, Chapters II and III, p.107). The development of the excitatory or inhibitory postsynaptic potential (EPSP, IPSP) evidently shows a small, integrating effect everywhere. An EPSP/IPSP pulse seems to have a time constant about ten times longer than the pulse that triggered it. Although very important, more detailed investigations into pulse relations at the synapses are not yet known.

In all simulations of neural projections it becomes clear that the merging of a projection with its externally interferential ghost images is determined solely by the refractory period (pulse pause), see the pain simulation. The pulse pause must be more than ten times longer than the pulse, otherwise we generate 'potentials'. A long EPSP/IPSP is therefore not a problem.

The concept of the 'pulse' is to be seen in relative terms. An investigation with radioactively labeled leucine [Ochs72] is known in which a pulse wave moves with a propagation speed of 4.75 µm/s, or 410 mm per day, see [NI93], chapter 11, p. 220. Let us assume that a pulse lasts one hour; then it would have a geometric pulse width of around 410/24 mm = 17 mm. In contrast to the long duration, the geometric pulse length is extremely short! The question inevitably arises whether one can even observe such slow signals. Observations of any kind are usually only stable for a few seconds or minutes. Such a slow wave is not perceived by an observer as a wave, but wrongly as a static potential.

It also remains a problem that no reliable data on geometric pulse widths in the various parts of nerve fibers are known. Questions of weighting are not so clear either: is the individual synapse weighted, or are dendritic branches weighted at the access to the soma? The work on interference networks shows that these questions become very important! A study of wave extinction on the sciatic nerve of the frog showed a hundred years ago that a nerve segment excited at several places cannot be modeled as a threshold-value gate. If threshold logic is not an approach for modeling neural networks, then we have to ask ourselves about different modeling techniques.

Interested scientists occasionally asked for the theory of interference networks (IN) to be presented in a mathematically clearer way. Different attempts followed. Most of the time they had the same frustrating result: the general principle was sacrificed to a formula or point of view that was applicable in only one individual case. This tendency is increasingly found in recent conference contributions. It is significant insofar as even the basic approaches of neurocomputing, i.e. very common description methods such as threshold-value logics, prove tenable only in exceptional cases when considerations of an interferential nature are applied. The commentary will therefore concentrate as little as possible on mathematical details.

We will try to shed light on the hard consequences for computer science that undoubtedly result from pulse interference on delaying networks. We usually assume that the geometric wavelengths are roughly in the range of the neuronal grid under consideration.

2. Race circuits and addressing

Length-proportional delay times of the nerves automatically and invariably generate a dynamic addressing, a mapping into space, see Fig.1a. The resulting interference networks (IN) are located in time and space at the same time; maps are created in space and time, so we call them "spatio-temporal maps".

Fig 1a: On the addressing principle in delayed pulse networks. Case 1 (Fall 1) activates neuron N2, while case 2 (Fall 2) activates neuron N1, provided that the transit time difference between a and a' and b and b' is τ, and the neurons have AND-character.

In Fig.1a we consider two neurons N1 and N2 whose threshold values are set so high that they show AND behavior. The output should only be activated if both inputs of neurons N1 and N2 receive a pulse at the same time.

The finite conduction speed generates the delay times a, a', b, b' on the interconnecting wires. The delays a and a' as well as b and b' may each differ by τ with

a' - a = τ
b' - b = τ

Now we apply two pulses delayed by τ at points A and B, Fig.1a below. In case 1, let the pulse at A appear first, in case 2, let the pulse at B precede. While in case 1 only neuron N2 is excited, in case 2 only neuron N1 is excited.
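The race circuit can be sketched in a few lines of Python. The wiring is an assumption chosen for symmetry (N1 fed by A via delay a and B via b', N2 mirror-inverted via a' and b, with a = b); the exact topology of Fig.1a may differ, but the addressing effect is the same:

```python
# Coincidence addressing in a delay network (sketch of Fig.1a).
# Assumed wiring: N1 receives A via delay a and B via b';
# N2 receives A via a' and B via b, with a = b and a' - a = b' - b = tau.
def coincident(arrivals, tolerance=1e-9):
    """AND behavior: all pulses must arrive at (almost) the same time."""
    return max(arrivals) - min(arrivals) < tolerance

def excite(t_A, t_B, a=1.0, tau=1.0):
    a_prime, b, b_prime = a + tau, a, a + tau
    n1 = coincident([t_A + a, t_B + b_prime])
    n2 = coincident([t_A + a_prime, t_B + b])
    return n1, n2

print(excite(t_A=0.0, t_B=1.0))  # case 1: A leads by tau -> (False, True), N2 fires
print(excite(t_A=1.0, t_B=0.0))  # case 2: B leads by tau -> (True, False), N1 fires
```

Reversing which pulse leads reverses which neuron fires: the temporal pattern alone selects the location, exactly the addressing principle described above.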

The delays of the interconnects consequently imply that a changing temporal pattern addresses a changing location. If our two neurons had weight-learning inputs of Hebbian character, it would not be of much use to them: they could only learn not to react at all.

If we expand this addressing model by further neurons (later in Fig.3), we can see that the relativity in the progress of the pulses between A and A' determines the location of the interference, the destination of the information. Therefore, such networks were introduced by the author as Interference Networks (IN).

At the same time, we notice in Fig.3 that a mirror-inverted mapping from P to P' is created between a generating field (below) and a receiving field (above). This is unavoidably caused by the delays: the map occurs at the places that have the same transit times on all paths between the sending and receiving neurons.

The information processing therefore lies in the superimposition (interference), locally at the neuron, of pulses arriving at the same time. Conversely, the simultaneity of arrival means that, in addition to weights, the decisive role for understanding the computer science of a nerve network is played by the delay structure of the network. This is fundamentally different from most of the electronics we use! Exceptions are GPS, RADAR and SONAR.

Since the temporal structure of the network is documented both in the hard-wired delays and in the fed-in time code, every noise and every frequency, for example, will produce a different interference pattern.

Every location in the nerve network has an address via its specific delay network. It can only be addressed using a time pattern that corresponds to the network of delays.

The question of the slowness of pulses is answered using the geometric pulse length as the product of conduction speed and pulse duration. This ultimately determines the neural grid that can be mapped by a pulse, see Fig.3. For example, we will need wavelengths in the centimeter range for muscle control, whereas wavelengths in the micrometer or millimeter range are required for intracortical communication. Ultimately, known pulse durations lie in the range between microseconds and days, measurable conduction speeds between micrometers per second and meters per second.

To continue calculating our example: let N2 be the beginning of an efferent (descending) motor neuron. In order to control the muscle in question, the exact location of N2 must be excited. In Fig.1a, this is only possible with the combination of the time functions at points A and B offset in time by τ according to case 1. Let τ be one millisecond; at a conduction speed v of 1.2 m/s, there would then be a length difference ds between the paths of 1.2 millimeters: ds = v τ = 1.2 m/s * 1 ms = 1.2 mm. A very small interference range decides between function and dysfunction!

Fig 1b: MacDougall's reflex arc. Source: Sherrington, Charles: The Integrative Action of the Nervous System, 1906, Fig.56, p. 201, with reference to Ref. 262: MacDougall, W.: Brain, Part cii, p.153

If we look at the hundred-year-old sketch by Sherrington, Fig.1b, we find an interference circuit in exactly the constellation described in Fig.1a. Unfortunately, Sherrington withholds from us which geometries and conduction speeds he found. A static function, as described by Sherrington, is impossible in terms of circuitry. To achieve a function, both muscles (flexor and extensor) have to be controlled dynamically by different pulse patterns (case 1 or 2). To do this, the neurons must have AND character with a high threshold: with the logical AND function, both inputs are needed at the same time to excite the neuron.

If, on the other hand, you read Eccles, there is also another possible interpretation. Analogously, he writes that inhibitory synapses dock almost exclusively on the cell body, while excitatory synapses only rarely do so. If the two synapses docking on each neuron were of different types, the circuit would also be able to function statically. In that case, however, we would not need a cross connection for correct function. Other findings (Crick & Asanuma in PDP, Vol. 2, p. 338, 1986) state:

This would rule out a static function of the circuit.

3. Learning in delaying pulse networks

Donald Hebb, a student of Karl Lashley and a colleague of Karl Pribram, formulated a first fundamental learning hypothesis ("Hebb's Rule") that is still widespread in the ANN world today:

Hebb's Rule names synaptic weight learning. Is weight learning also applicable to interference networks in which delayed pulses interact - that is, to nerve networks?

Hebb's Rule says nothing about dynamic addressing or a delay structure. While neural network research (NN, ANN) has been deriving mapping principles via modification of weights for forty years (all of them, from perceptrons to SOMs), delaying pulse networks can only generate their mapping on the delay structure of the network. But we found: weights cannot override the addressing imposed by delays!

For the example in Fig.1a, there is no weight constellation that can reverse the assignment of the code pattern (case 1 or case 2) to the neurons. Delays are much stronger than weights. Primarily there are delay addresses in a nerve network; only these can be used for weight learning.

This means: If codes are sent to a network that do not have any delay addresses in the network, the codes fizzle out. Nothing happens.

This realization produces a general rethinking of Hebb's rule:

Ultimately, it means that a first process must be the growth of a nerve felt (of whatever kind), and a second process that of (weighted) learning - and only in those places that already have an address. Only there do synapses then arise.

Conversely, this could result in:

That would be a plausible, modified Hebb's rule adapted to pulse interference. The view coincidentally corresponds once again to findings on individual development. There, Pomerat in 1964 (Fig.2-2 in Pribram: Languages of the Brain) described a felt-like sprouting of the nerve endings (growth cones). Pomerat differentiates between stochastic, felt-like growth and synaptic generation/degeneration. Synapses are only strengthened where patterns correspond to a delay structure. If sprouts or synapses are not needed, they degenerate again.

This gives us a criterion for assessing the performance of neural learning algorithms. If the code patterns of an algorithm match delay addresses, it is potentially able to simulate nerve networks. All other algorithms have nothing to do with nerve networks; they belong in the realm of Artificial (Neural) Network (ANN) theory.

However, since the delay structure of the network is spatially fixed, this means a three-dimensional bond and a certain physicality. "Form codes behavior" wrote the author in the preface to 'Neural Interferences' in 1993:

There is a sad story about this finding. In 1990, tens of thousands of completely neglected toddlers, victims of the Ceaușescu regime, were found in Romanian children's homes; they had had hardly any contact with anybody. Children who were more than two years old at that time will suffer from chronic behavioral deficits for life. Apparently, the basic structure of our nerve network develops in the first two years.

Because the address space (as a delay structure) results from a three-dimensional physicality of delaying interconnects, it will only adapt to changing patterns to a modest extent. Changes in the conduction speed are conceivable through slight variations in the diameter or length of a fiber. If the pattern or network changes beyond these adjustment limits, what has been learned disappears forever, even if the learned weights are completely retained: knowledge, coordination or behavior can suddenly no longer be accessed.

This fact gives an indication of diseases in which the myelin sheaths of nerves degenerate and nerve fibers become drastically slower. In multiple sclerosis (MS), the delay structure of the network gets mixed up. Codes no longer reach the neurons they actually address. If, in the course of a spontaneous healing of MS, everything suddenly works again, that would mean that the weights have outlasted the disease.

Karl Lashley, at that time the superior of Donald Hebb and Karl Pribram at the Yerkes Laboratory for Primate Biology in Florida, studied learning with animal experiments. In his search for the storage locations of a learned behavior, he was able to remove different areas of the cortex of rats without destroying learned information (the path through a maze). After 30 years he came to the ironic conclusion that "what has been learned is not stored in the brain". He, of all people, was the first to speak about interference patterns. Karl Pribram writes in 'Brain and Mathematics' on page 4:

Today we know that a neuron can only become active where all partial waves of a sending neuron arrive at the same time. I wrote "delays dominate over weights" in various essays. Weight learning without delays, as the basis of neural network theory (ANN), inevitably leads to a completely different behavior compared to the nerve network.

Lashley apparently already sensed the interferential blockade of weight learning by delay addresses. He may also have suspected that wave interference can only lead to one type of interferential learning. Be that as it may: Hebb's Rule is limited to weight learning and is therefore only valid on a network with pre-existing delay addresses, or on a delay-free network. But the latter does not exist in nature. Or in the words of Karl Lashley: 'Hebb is correct in all his details but he's just oh so wrong'.

4. Nervous wave propagation

Andrew Packard discovered in 1995 that an interferential spread of excitation can be observed between the chromatophores (coloring cells) of squid. This suffices for a very simple interference model. He observed color waves of spontaneous excitation [AP1995], see Fig.2. The special thing about it: the substrate has an almost homogeneous, constant speed of propagation, so the waves resemble water waves.

Fig 2: Waves of spontaneous excitation on an octopus with the spinal cord severed

What we think we see on the octopus are waves. But what we really see are opening and closing chromatophores. So what is our wave abstraction? From physics lessons we know the one-dimensional, elementary description of a traveling time function, e.g. in the form f(x-vt) (see the site of animations). Let us imagine many such time functions flowing in the meshes of a 3-dimensional network. Its nodes may apply any linking operations to incoming time functions (addition, multiplication ...) and forward the result. (Sending back is initially excluded.)

Let us assume that our network is at rest and we excite a single node with a pulse. When we zoom out of this network, we can observe pulse propagation in the form of a ball-like wave that spreads around our node. However, it would only be observable at a homogeneous conduction speed, e.g. in acoustics or with Andrew's squids. In the case of inhomogeneous conduction speeds and an inhomogeneously designed network (cortex), the idea becomes more difficult: the visual impression of a spherical wave will quickly give way to that of a spherical chaos. Be that as it may, we notice that the one-dimensional moving time function in the n-dimensional area resembles a wave - even if we can no longer see this with the naked eye in the case of inhomogeneity. Hence the names: waves on wires, or time-function waves.
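A minimal sketch of such a wavefront on an inhomogeneous delay network, under the simplifying assumption that a pulse reaches each node along its earliest-arrival path (node names and edge delays are invented for illustration):

```python
import heapq

# Earliest-arrival pulse propagation through an inhomogeneous delay network.
# Edge weights are delays; a pulse launched at one node reaches every other
# node as a "wavefront" at its minimum cumulative delay (Dijkstra-style).
def wavefront(delays, source):
    """delays: {node: {neighbour: delay}}; returns arrival time per node."""
    arrival = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > arrival.get(node, float("inf")):
            continue  # stale queue entry
        for nb, d in delays.get(node, {}).items():
            if t + d < arrival.get(nb, float("inf")):
                arrival[nb] = t + d
                heapq.heappush(queue, (t + d, nb))
    return arrival

net = {"A": {"B": 1.0, "C": 2.5}, "B": {"C": 1.0, "D": 3.0}, "C": {"D": 1.0}}
print(wavefront(net, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 2.0, 'D': 3.0}
```

With homogeneous delays the arrival times form concentric rings (a spherical wave); with inhomogeneous delays, as here, the "wavefront" becomes the irregular surface of equal arrival times described in the text.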

Why do we care? Because in 1993 it was discovered that these interference networks have imaging properties. With the aid of interference network theory, Andrew's squid experiments suddenly fall into line with known, mirror-inverted images in the nervous system (homunculus, visual cortex etc.). The one implies the other: wherever an image can be found, wave propagation is the cause; imaging projections are to be expected wherever wave propagation is found. Incidentally, this sentence also applies to optical or acoustic images.

To explore the properties of such "interference networks" (IN), a simulator was developed from 1993 onwards (Bio-Interface, later called PSI-Tools and much later NoiseImage), whose first images were originally available for colleagues and the press to download on this page*. With PSI-Tools the first simulations of simple interference networks as well as the first acoustic images and acoustic films were achieved. A demonstrator was needed to demonstrate the properties of the IN: the acoustic camera. Between 1994 and 1996 the world's first acoustic images and films were made.

The focus of software development on acoustic imaging led to the first acoustic cameras, marketed since 2001, which were honored with the Otto von Guericke Prize in 2001, the Berlin-Brandenburg Innovation Prize in 2003 and a nomination for the German Future Award (Prize of the Federal President) in 2005.

Didactically, the simulations shown here have some special features:

- PSI-Tools was a very specific network simulator. It can only represent simulations that show the interference of time functions from a generator space via "axons" into a detector space.

- The time functions were only linked at two points: at the feed point of the axons and at the final image point in the detector space. Strictly speaking, a two-layer interference network could be implemented with PSI-Tools.

- The assumption of homogeneous conduction speeds generally does not apply to a nervous system. However, it is didactically unavoidable in order to make the consequences of interference clear.

- The information runs within the wave spaces (generator or detector) without interaction. This is an extremely rough approximation, because partial waves in the nervous system can be blocked by neurons along the way.

It is therefore assumed that waves propagate in finely meshed networks in a very similar way to waves in coarsely meshed networks, if one only considers the interrelationship between cause and effect. This abstraction is borrowed from the Huygens merging of elementary waves into a wavefront.

Fig.3: First sketch of the simplest, neural mapping (pulse projection), title page of the book 'Neuronale Interferenzen', 1993.

A sending space S interferes via two axons A and A' with a receiving space M. Only where pulses from a source neuron arrive again at the same time does an excitation arise. Excitation of the neuron at location P is thus passed on to a neuron at location P' - or an image point P is assigned to a mirror-image point P'.

We know exactly this property from the optics of lens images. And we suspect that the assignment between sender and receiver is defined by the transit-time properties of the connecting network.

This simplest neural interference circuit (Fig.3) was chosen as the cover picture for the manuscript Neuronale Interferenzen (1993) because of its optical analogy. The discovery of mirror-inverted images in "neural networks" was a sensation for those in the know in 1993, as mirror-inverted maps were known from anatomy (homunculus), but not from network research (neural networks).

5. Pulse projections

Let us consider that a single interrupted electrical conductor path in a car means that some equipment (blinkers, headlights, horn, radio ...) no longer works. Let us also consider that a nerve cell lives only about seven years, while we, on the other hand, reach seventy-five on average. Then a problem becomes apparent: at seventy-five, not a single nerve connection would still work in our body. We would neither feel the hot stove, nor could we pull our hand away.

As a result, we cannot afford a simple 'bell wire' connection between sensor/actuator and brain. Every nerve cell needs many duplicates. Maybe we lay each line several times? Or do we solder all the lines together at all the plugs? From then on, headlights, starter, indicators, window lifters and horn would all be activated when we press the wiper switch.

Or does the defective line repair itself?

In principle that would be possible - if it weren't for the learning of the necessary transverse and cross connections: some muscles should be tensed when standing up, others when sitting down, and still others should be relaxed both when standing up and when sitting down. The biological network is interconnected a million times over.

So if we have to do without individual bell wires - how do we get even a single piece of information ("Please bend!") through a chaotically interconnected, short-circuited network of neurons to the same address (right index finger) for seventy-five years?

A neuron does not have an Internet IP address. In nature there is no protocol available with which information can be sent specifically to a target. It is also generally unclear to the sender which target the data should reach if neural learning is to be possible at various points in the network.

So what could a solution look like?

In a cross-connected network (short-circuited everywhere), apparently only signal propagation times can connect the source and destination of information - via interference integrals. To do this, we have to send out every piece of information (pulse wave) in all directions. Where several pulses from one sender happen to arrive again at the same time (colloquially: interfere), a higher effective value arises - the goal has been reached [NI93].
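The "higher effective value where pulses coincide" can be illustrated with a toy coincidence count (delays and window are invented values; a real neuron would integrate EPSPs rather than count):

```python
# Delay-matched interference (sketch): one source emits a single pulse over
# several paths; at a candidate location the effective value is taken as the
# number of paths whose total delay puts the pulse inside a coincidence window.
def effective_value(path_delays, window=0.1):
    arrivals = sorted(path_delays)
    return max(sum(1 for t in arrivals if abs(t - t0) <= window)
               for t0 in arrivals)

# Hypothetical path delays from one sender to two candidate neurons:
target    = [4.0, 4.05, 3.98]   # three paths arrive almost simultaneously
elsewhere = [2.0, 4.05, 6.1]    # the same pulses, but spread out in time
print(effective_value(target), effective_value(elsewhere))  # 3 1
```

Only at the location whose path delays match does the effective value peak - that location is the "address" reached by the pulse wave; everywhere else the energy fizzles out.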

This can also be several goals at the same time, or goals staggered one after the other. In contrast to the WWW, the origin and destination address of information in nerve networks can only be defined via the geometry of the (inhomogeneous) delay space. All delay-changing units can be regarded as 'switches': glial potentials, chemical messengers, synaptic strength (it changes the time constants), inhibited or excited nerve cells as detours, stretching and compression of fibers (see the thumb experiment).

As a result, a new computer science arises: in contrast to the Internet PC, the nerve cell does not know to whom and where it is sending its data. Nor can we observe it from outside, since all information initially disappears in all directions - and only interferes positively with itself again in a few places.

Do we have the slightest idea what it means to understand such computer science? Only when this interferential computer science has been sufficiently validated should we begin to interpret consciousness or intuition. Everything else is charlatanism.

When the idea was born in 1993, see annual reports or the project directory, it was initially uncertain whether the points of view were correct. The simulation of a wave field in the head is generally beyond our imagination. Simple detection experiments had to be developed. So the idea arose to write the simplest simulator that can simulate some essential properties of interference networks (PSI-Tools at the end of this page).

We mentally took a bitmap with black pixels, pretended the bitmap were a square pond without borders and the pixels were stones thrown into the pond, and lowered three sensors (green) into it which record the wave movement of the pond surface as a function of time. This yields three time functions, see the second picture. The first question was: do these time functions actually contain the image of the bitmap? Can the generator image be read out again from the recorded time functions? And under what restrictions is this possible?

Generator field

Fig.4a: Bitmap as generator field: Imagine a garden pond.

Let stones be thrown into the pond one after the other at the locations of the black pixels. Waves propagate in a circle around the emission locations and finally reach the sensors shown in green at the borders. The time functions of the wave field may be recorded at the three sensor locations marked in green.

Time functions (3 channels)

Fig.4b: Resulting time functions of the bitmap for the three sensor locations (the time axis points to the right).

We can see different time functions (blue, green and red) that have spikes at different points in time. The pulse image should be transmitted in three channels (we think of axons).

Reconstructive interference integral

The time functions are then fed into a second (wave) pond at defined locations (black) - in the case of interference reconstruction, in principle backwards in time. However, the x, y coordinates of the feed locations were chosen slightly differently from those of the sensor field. The transit time on the three interconnects is assumed to be identical here (equal length, zero). The pulses now interfere with themselves at the original source locations (we call this self-interference), but also with predecessors and successors (called cross-interference or aliasing). The self-interference generates an interference integral in the form of the "GH". Cross-interferences create the additional emissions that are visible around the self-interference image.

The computer allows us to choose the time direction for calculating the interference integral. If we want to look back into the channels, so to speak, we choose backwards-running time or negative delays and obtain what is known as an interference reconstruction. If we are interested in a mapping that actually takes place in a nerve network, we select forward-running time and obtain what is known as an interference projection.

With otherwise identical parameters, reconstruction and projection are exactly mirror-inverted to one another, i.e. the reconstruction appears the right way round relative to the original, while the projection is mirror-inverted (see also the note below). Nature only knows the (mirror-inverted) projection; the (non-mirrored) reconstruction necessarily requires the computer.

Fig.4c: Three-channel interference integral from the channel data Fig.4b from November 14, 1994 - historically one of the first successful interference reconstructions (PSI tools). The source location at the bottom right has been moved so that the original image is reproduced in a distorted manner.

The coordinates of the right channel in Fig.4c were shifted upwards and inwards from the lower right corner to see how the interference integral reacts. We simply lacked the imagination to foresee how such an integral would react to a distortion of the source geometry.

At that time there was a huge challenge in computing time: this image calculation took about a weekend, the first movies took a week. If the wrong parameters for exposure or reconstruction were chosen, everything started all over again.

A receiving space that differs spatially from the sending space can be used to study the superposition of pulses (interference) in target spaces of deviating geometry. An integration over the excitation of each location (interference integral) records the interferences that occurred at different times. The simulation shows under which conditions a receiver can reconstruct the transmission locations (transmission addresses), e.g. sensory excitations, from channel data transmitted in parallel. In the image, the lower right electrode was pushed 10% into the receiving field (otherwise equal to the generator field) before the reconstruction was started; the interference image appears distorted towards the top left.
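The reconstruction described above can be imitated with a toy grid search. Everything here - sensor coordinates, wave speed, source pixels, the coincidence measure - is an illustrative assumption, not the PSI-Tools algorithm:

```python
# Toy three-channel interference reconstruction:
# sources emit pulses, three sensors record the arrival times,
# then each grid point of a second field integrates how well the
# three (time-reversed) wavefronts coincide there.
import math

V = 1.0                       # wave speed in both fields (arbitrary)
SENSORS = [(0, 0), (10, 0), (5, 10)]
SOURCES = [(4, 4), (6, 5)]    # 'black pixels' of the generator bitmap

def delays(p):
    """Arrival times at the three sensors for a pulse emitted at p."""
    return [math.dist(p, s) / V for s in SENSORS]

def coincidence(q, taus):
    """Backward-running time: at the original source location the
    sensor delay and the return path cancel, so the three terms
    agree and the spread becomes zero."""
    t = [math.dist(q, s) / V - tau for tau, s in zip(taus, SENSORS)]
    return 1.0 / (1.0 + max(t) - min(t))

recorded = [delays(p) for p in SOURCES]
grid = [(x, y) for x in range(11) for y in range(11)]
best = max(grid, key=lambda q: sum(coincidence(q, t) for t in recorded))
print(best)   # a point of maximal excitation: one of the source pixels
```

The point of maximal excitation falls on a source pixel although no channel ever carried an address - only delays.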

As you may guess, these questions are related to those asked for GPS (Global Positioning System), for phased arrays in SONAR or RADAR, or to the questions answered by colleagues working on the SKA (Square Kilometre Array) or on sonography devices. Here, too, we are dealing with comparably simple interference systems of a technical nature.

After irritations with the theory of the so-called Neural Nets (NN), around 1997 I delimited the research on (pulse-wave) networks as "interference networks" (IN). The cause was permanent misunderstanding during lectures and publications: neither the approach nor the statements of the IN theory could be understood, as no corresponding prior knowledge was available. To date, interference networks are taught nowhere. My attempts to establish corresponding lectures at institutes of the Humboldt University in Berlin unfortunately came to nothing; neither the motivation nor the content of the theory was understood.

However, once the neuro-community registered that the common approaches (with state machines, delay-free interconnects, etc.) are obviously unsuitable for describing nerve networks, the classic research field "neural networks" (NN) was more and more referred to as "Artificial Neural Nets" (ANN).

We owe to this development the curiosity that the networks dealing with the modeling of nervous properties have to be called interference networks, while "artificial neural networks" (ANN) without any run time - especially in the years up to 1997 - were simply called "neural networks".

Even today, this confusion is fatal for student training. Open any book on neural networks: the introduction covers the biology of nerve cells, followed by the theories of artificial neural networks - which, apart from threshold values and integrators, have nothing in common with nerve networks.

6. Wandering interference integrals - zoom

Survival in the animal kingdom is directly linked to the concept of recognition: think of paths to food or watering places, the visual distinction between poisonous plants and food, between cliffs and steps, between friend and foe, between large and small. Recognizable optical features, however, are subject to changes in distance and thus to constant changes in size and shape.

If you try to train a somehow weighted network with a face at a distance of one meter from the recording camera, this network will, under favorable circumstances, recognize the face at the same distance. Recognition becomes impossible as soon as we change the distance to the face, rotate it, move it or tilt it.

How can we convince a network of nerves to recognize a face that appears at varying distances? How could nature help itself?

We remember that the geometric wavelength λ = vT = v/f (v: conduction velocity of the nerve, T: pause duration between pulses, f = 1/T) depends on the conduction velocity. If the conduction velocity varies, the interference locations vary. But what does that mean in a mapping network? How can we picture this variation?

We want to perform an experiment similar to the one above. G-shaped pulse sources (e.g. neurons) serve as the generator. Again the waves may spread out in circles around the sources and reach the three channels at different times. Again we project via the channels into the receiving field. Once in the target field, the waves spread out again in circles. Where they meet, our screen changes from yellow to red to blue. We vary only the propagation speed of the waves in the receiving field (background speed).

As a result, we see that, depending on the selected propagation speed of the waves in the receiving room, interference integrals ("images") of different dimensions arise. Comparable to a photo lens, this effect is called "zoom".

Fig.5: Simulation of an interferential projection between two neural fields connected via three axons shows a 'zoom' effect comparable to an optical zoom. The more we zoom, the more cross-interferences come into the field, producing a hologram-like image (g).

The time functions are generated in a generator field with a normalized speed v = 50 (mm/s). The projection is calculated in a second, receiving field. The image size varies under the influence of the (normalized) master speed (v = 100, 75, 50, 20, 10) in the receiving field. a) Simulated generator field, black pixels pulse; b) resulting channel data; c) to g) interference integrals over the channel data, the parameter being the background speed v. If a speed identical to that of the transmitter a) is used, a mirror-inverted image at the same scale is created, d). If, on the other hand, the background speed is changed, the image 'zooms', see c) and e). Interference arises in the receiving field wherever the pulse waves of all three channels arrive at the same time. If the coordinates are assumed in cm, for example, the speeds are in cm per second (pulse width 2 ms, sampling rate 5 kHz). This picture was first published here: (Virex), (Bionet96). For the very first simulations, see the GFaI annual report 1994.
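The scale change can be reproduced with a minimal numerical sketch. The coordinates, speeds and the grid search below are invented assumptions; the point is only that the location of best coincidence depends on the receiving-field speed:

```python
# Zoom sketch: reconstruct a single source point while varying the
# background speed of the receiving field. With the generator speed
# the point is recovered exactly; with a different speed it moves,
# i.e. the image scale changes. All geometry is assumed.
import math

SENSORS = [(-10.0, 0.0), (10.0, 0.0), (0.0, 12.0)]
V_GEN = 1.0                    # speed in the generator field
SOURCE = (2.0, 3.0)

taus = [math.dist(SOURCE, s) / V_GEN for s in SENSORS]

def spread(q, v_rec):
    """Arrival-time spread at q when the recorded delays are played
    back (backwards in time) into a field with speed v_rec."""
    t = [math.dist(q, s) / v_rec - tau for tau, s in zip(taus, SENSORS)]
    return max(t) - min(t)

def focus(v_rec, step=0.05):
    """Grid search for the point of best coincidence."""
    pts = [(x * step, y * step)
           for x in range(-100, 201) for y in range(-100, 301)]
    return min(pts, key=lambda q: spread(q, v_rec))

same = focus(1.0)   # receiving speed = generator speed
slow = focus(0.5)   # halved background speed: the point moves
print(same, slow)
```

With the matched speed the focus lands on the source; with the halved speed it lands clearly elsewhere - the image has rescaled.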

Now you may be wondering why we have put in all this effort. To this end, we remember the (neuro-)glia as a supply substrate for the nerve cells. It is known that the glia influences the conduction velocity of nerve fibers passing through it. We can observe this via electric potentials: when the conduction velocity changes, the measurable potential of the glia changes at the same time. If we measure potentials at the cortex with the EEG (ECoG), we can assume that different potential regions have different conduction velocities - no more and no less (see also the EEG experiments page). Since the book "Neural Interferences" (1993) already pointed out that a variation of the conduction velocity in the transmitting or receiving field influences the scale of the mapping between the two, it was obvious to develop a scenario for this simulation.

What we see here is nothing more and nothing less than the effect of a potential field in the EEG: the images below begin to zoom. And the gradient at the edge could even cause an offset (see movement).

EEGs therefore have nothing to do with nervous data content. We see focusing voltages for projections between neural fields, whose technical exploitation could lead to defocusing (derailment) of the overall system. (Incidentally, this realization stopped my own ideas for technical applications of the EEG in 1997.)

But it should also be remembered that certain squids do not have the ability to zoom: they only recognize prey if it appears at the right distance in front of them. The background seems to be of a rather practical nature.

7. Holomorphism and Lashley's rat experiments

(Note: there is a further supplement to this section in German and English.)

Lashley was looking for the location of memory contents. He trained rats to find food in a maze. Then he systematically removed small parts of the brain in a grid pattern and observed the effect. Whichever part of the cortex he removed, the rats remembered what they had learned more or less well. After 30 years he resignedly confessed that he was no longer sure whether the brain was really the place of memory: "The series of experiments ... has discovered nothing directly of the real nature of the engram". And yet Lashley had discovered something crucial: apparently all information in the cortex is holographically encoded.

David Bohm and Karl Pribram have been proposing holographic organizational principles since the 1950s. Karl's student Walter Freeman even gave a fictional image of a wave field in his 1972 work "Waves, Pulses, and the Theory of Neural Masses" (Progress in Theoretical Biology, Vol. 2, 1972, New York/London).

As luck would have it, self-holographic maps became visible immediately during the first experiments with zooming images, see Fig.5g or the GFaI Annual Report 1994. They are inherent in the nature of interference networks. In Fig.5g you can see that not only the image of the original, a capital "G", appears mirror-inverted in the detector space; many incomplete G's are also mapped around it. The realization: a neural map cannot be stored in just one place! It is always stored holographically.

Holography or tutography?

When Dénes Gábor invented holography in the forties, it was conceived in the color space of light, in Fourier space. However, if we apply the Fourier transform to pulse-shaped time functions, nothing useful comes of it. It should be noted that the concept of interferential tutography is open to any kind of time function (from Dirac pulses to sine waves, code patterns or sequences of states), while holography is always associated with sinusoidal time functions, think of the well-known light spectrum.
This means that (interferential) tutography can be viewed as a generic term for holography, one that also exists in the time domain.

Let us look again at the zooming images. While only a mirror-inverted 'G' can be seen in images (c) to (e), further interferences enter the image field in images (f) and (g). Apparently the preceding and following pulse waves now interfere with each other and with the original pulse wave. The result is fascinating: we find the same interference integrals of the 'G' all around! Image (g) consequently shows a kind of hologram as a fundamental peculiarity of interferential images.

Suddenly Lashley's rat experiments can be interpreted: interference networks store holographically (better: tutographically), and Lashley provided the decisive evidence for this! His unsuccessful search thus becomes important evidence of a universal, interferential effect in the nervous system!

The neuroscientists Lashley, Pribram and Hebb had noticed Gábor's idea of holography. When Karl Pribram met Dénes Gábor years later at a UNESCO conference, he explained his holographic brain analogy to him. Pribram wrote about this in "Brain and Mathematics" (1971):

The interesting thing about it is the phrase "but not exactly Fourier". Gábor suspects, or knows clearly, that a Fourier transform of stochastic pulse patterns can bring no meaningful results. At that time all holography took place in the optical spectral space; holography in the time domain did not yet exist (an image like Fig.5g could not be produced before the 1990s). What Gábor wanted to express was something like: Fourier and holography go well with optics, but not with the nervous system. To understand Fourier better, remember that the spectrum of light - its Fourier transform - can be generated with a simple prism.

If we take a closer look at the result of the simulation, we see that pulse projections do not simply map a "G" onto a "G". Rather, many further "G"s appear around a central "G". The originally unique information "G" in the generator field is mapped several times into the detector field - but why?

To understand it, let us recall the two possible types of interference:

8. Self- and cross-interference

To see or to hear?

How is the actual distance R between the recurring 'G's defined? For this we remember that only those places in the interference integral reach a high effective value where waves meet.

Fig.6a: If a wave hits its own twin brother (colloquially itself) at a defined location, we speak of self-interference (a). If, on the other hand, it encounters a predecessor or successor pulse, we speak of cross-interference (b).

This is exactly where "hearing" and "seeing" differ fundamentally: depicting projections (pictorial ideas, imaginations) are self-interference images (a), while temporal structures (pitch, sounds, language, etc.) can only be depicted via cross-interferences (b).

But if we look at relationships within images (textures, patterns, affiliations), then we are dealing with cross-interference within the picture: we recognize an object through cross-interference! So seeing is also hearing.

However, for one place to reach a higher effective value than its neighbors, care must be taken that as many waves as possible from different directions meet exactly at that place.

But every wave has a predecessor and a successor. So if many waves meet in one place, then their predecessors and successors are also in well-defined places - shortly before or shortly after.

Fig.6b: Meeting locations of waves correspond to excitation locations in the interference integral. Consequently, the cross-interference radius can be determined from the wave field: it results from the distance between successive waves, with wavelength λ = 2R. (Sorry: a 4-channel wave field is drawn on the left, while a 3-channel interference integral is drawn on the right.)

So we have found a relationship between the average pulse pause T of the time functions at (maximum) firing rate f and the center-to-center distance of the interference locations, the cross-interference radius R.

Let the wavelength be λ = vT (v: conduction velocity, T: pulse pause). Waves running in opposite directions interfere with each other again at half the geometric wavelength. Consequently, there is a relationship between the average wavelength λ and the cross-interference radius R:

(1) R = λ/2 = vT/2

With a fire rate f = 1/T we find

(2) R = v/2f

or, if we want to determine the expected conduction velocity from a fiber density

(3) v = 2f R = 2f λ/2 = f λ.

In an interference network, the parameters conduction velocity v, cross-interference radius R, firing rate f and pulse pause T = 1/f are obviously inextricably linked. If a system has a specified cross-interference radius, for example given by somatotopy, then a well-defined velocity belongs to it. If a velocity can be measured, the size of the somatotopic area can be estimated from it.

However, it should be noted that the partial pulses generally travel on fibers of different thicknesses, and the conduction velocity varies strongly in proportion to the thickness. In this respect the above formulas are only approximations for an averaged velocity.

Calculation example

If we want to trigger a movement in our little toe, cross-interference on the way from the cortex to the toe must be excluded.

How high must the conduction velocity v be so that the places to be addressed in our body are not overlaid by cross-interference? We assume that the nerve network under consideration has a maximum firing rate f of approximately 30 Hz, and that our body is two meters tall - so we have to establish a cross-interference distance R of 2 meters:

R = 2 meter

with R = λ/2 it follows

λ = 2R = 4 meter,

so we require a velocity v of

v = f * 2R = 30Hz * 4m = 120 m/s.

If we look at known fiber conduction velocities, this corresponds to type "Aα" according to Erlanger/Gasser or type "I" according to Lloyd/Hunt. It is the fastest fiber type, and it is myelinated.

If examples on this homepage match known values from the nervous system, it may not be a coincidence.
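Equations (1) to (3) and the toe example above can be checked in a few lines (a sketch; units and values as assumed in the text):

```python
# Equations (1)-(3) as helper functions, applied to the toe example.
# Units: v in m/s, f in Hz, T in s, R in meters.

def cross_interference_radius(v, f):
    """Eq. (2): R = v / (2 f)."""
    return v / (2.0 * f)

def required_velocity(f, R):
    """Eq. (3): v = 2 f R = f * lambda."""
    return 2.0 * f * R

# Cortex-to-toe example: R = 2 m at a maximum firing rate of 30 Hz.
v = required_velocity(f=30.0, R=2.0)
print(v)          # 120.0 m/s, the fastest (myelinated) fiber class

# Consistency check against eq. (1): R = v*T/2 with T = 1/f.
T = 1.0 / 30.0
print(v * T / 2)  # recovers R of about 2 m
```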


It is important to note here that cross-interference as well as self-interference have independent tasks to perform and that they complement or exclude each other depending on the task. For more information, see [NI93] and later papers in the list of publications.

9. Nervous calculation examples

With interference integrals, the image content always stands in relation to the parameters of the time functions, mediated by the cross-interference distance or radius (see above): around a channel (ganglion) one can only project aliasing-free into an area whose radius is not larger than the average cross-interference distance (the geometric length of the pulse pause).

The following examples do not claim to be correct; they are to be understood as hypotheses about the direction in which the matter could be researched.

Example 1: Model of the retina
Let us assume that the average pulse pause between successive pulses is T = 20 ms at the maximum firing rate. The pulse width shall be negligible and the average radial conduction velocity is v = 1 mm/s (including synaptic processes; arbitrary assumptions). We calculate a cross-interference radius of R = vT/2 = (1 mm/s * 20 ms)/2 = 10 µm. This means that the distance between two ganglia in the source or sink area must not be greater than R = 10 µm in order not to lose information through cross-interference overflow.
If the ganglion density in the area of the retina (~100 mm²) is about 1,000,000 / 100 mm² = 10,000 per mm² = 100 * 100 per mm², the result is a cross-interference radius per ganglion of slightly more than R = 10 µm, see the calculation above. We find the neural grid exactly in this order of magnitude. Can it be a coincidence again?

Example 2: Model of the visual cortex
In the visual cortex (VC) a much larger area, almost 100 cm² = 10,000 mm², has to be covered by the fiber bundle of the optic nerve. As a result, a different background velocity is required here in order to prevent cross-interference overflow. The fiber density is F = 1,000,000 / 10,000 mm² = 100 per mm², so the cross-interference radius (= fiber spacing) is here approximately R = 1/sqrt(F) = 100 µm. According to eq. (1) the conduction velocity in the VC would be v = 2R/T = 2 * 0.1 mm / 20 ms = 0.01 m/s = 1 cm per second. It should be possible to measure this difference experimentally.
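A quick numerical check of examples 1 and 2 (all inputs are the arbitrary assumptions stated in the text):

```python
# Verifying the retina and visual-cortex estimates with eq. (1).

def cross_radius_m(v_m_per_s, T_s):
    """Eq. (1): R = v*T/2, result in meters."""
    return v_m_per_s * T_s / 2.0

# Example 1 (retina): v = 1 mm/s, T = 20 ms.
R_retina = cross_radius_m(1e-3, 20e-3)
print(R_retina * 1e6)   # about 10 micrometers, the ganglion spacing

# Example 2 (visual cortex): fiber spacing R = 100 um, T = 20 ms;
# solving eq. (1) for the background velocity gives v = 2R/T.
v_vc = 2 * 100e-6 / 20e-3
print(v_vc)             # about 0.01 m/s, i.e. 1 cm per second
```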

Example 3: Units coupled in the cortex
How can a connection to another part of the cortex be established with this cross-interference distance without violating the cross-interference condition? (Only 100 µm are allowed?)
If we want to achieve a cross-interference radius of 10 cm, we need a background velocity of v = 2R/T = 2Rf = 2 * 100 mm * 50 Hz = 10,000 mm/s = 10 m per second (f: maximum firing rate, arbitrarily assumed here as 50 Hz). But for this we need a myelination of the nerve tracts, visible in the sectional view as the whiter matter compared with the non-myelinated areas. Just pure coincidence again!

Example 4: Body projection to the little toe
If it is to be ensured that a skin surface is mapped unambiguously into a cortical area - avoiding cross-interference that could lead to confusion (the individual would not be able to assign sensory excitations clearly) - then the cross-interference radius R must be large enough in relation to the mapping surface. So for a cross-interference distance R = 2 m (distance cortex - toe) with an arbitrarily assumed firing pause 1/f corresponding to f = 30 Hz, we would need a conduction velocity of v = 2Rf = 2 * 2 m * 30 Hz = 120 m/s. Do we now think of the conduction velocity of peripheral, myelinated nerves? It is in fact around 120 m/s. Pure coincidence again! Of course these are only rough guidelines; in detail we know that the most varied fiber velocities occur, the nervous system is seriously inhomogeneously interconnected.

But what do these coincidences mean? It can only mean that nature found a solution to the otherwise unsolvable bell-wire problem: in order to save interconnects, not only the cortex but also the so-called peripheral nervous system is interferentially interconnected.

Example 5: Multiple sclerosis
The clinical picture of multiple sclerosis can be analysed from the point of view of interference networks (cf. NI93). In this disease, among other things, the conduction velocity of myelinated (generally peripheral) nerves decreases. Since the geometric wavelength equals the conduction velocity multiplied by the pulse pause, the geometric wavelength is reduced. Cross-interference maps move into the area of the self-interference maps, see the pain model. This means that peripheral actuators (muscles) and peripheral sensors (sense of touch, etc.) can no longer be clearly addressed/controlled/assigned, see example 4. Cross-interference creeps into areas that should actually be addressed unambiguously by self-interference.
From the theory of IN the fatal consequences can be predicted, which again coincide with the medical findings: from a sensory point of view, ambiguities in the interpretation of place assignments are to be expected. On the motor side we can expect that every intended muscle addressing produces unwanted excitations of other muscles at unwanted places. Cramps and twitching would be the result (spasticity, tremor, pain). A simulation of the process comes very close to the pain model.
As a remedy, drugs could be used that increase the conduction velocity v and/or extend the pulse interval T (the so-called refractory period), see equation (1).

Example 6: Short-term memory
We remember that different cross-interference radii are coupled to different conduction velocities, and different conduction velocities are bound to different cell types, synapses and layers.
Let us ask how a short-term memory comes about (Heraclitus: panta rhei - everything flows) and what it could be. After all, two minutes after a colleague's arrival we still want to suspect that he is standing behind us, without having to turn around again - so we have a problem.
On the one hand, it can be assumed that the completely different pieces of information to be linked here generally do not lie in closest proximity to one another in the cortex. Larger distances, however, need higher conduction velocities if the self-interference map is to be retained; take, say, v = 2Rf = 20 m/s.
But to let a pulse run through the cortex for two minutes, we need the opposite: very low conduction velocities, perhaps v = s/t = 20 cm / 100 s = 2 mm/s. That would be a factor of ten thousand less.
One solution would be to first bring the information quickly to where it can be linked, and second to couple it there into another interference network that is 10,000 times slower. However, this network would then only have a cross-interference distance of R = vT/2 = 2 mm/s * 20 ms / 2 = 20 µm.
What does that mean? Nothing more and nothing less than that the coarse network (with 20 m/s) can no longer separate any location in the fine network (2 mm/s). The location disappears during this operation; what remains is the time, only the time reference! Our short-term memory can then only be interpreted as a degenerate, too harshly coupled interference network from which the location assignment disappears!

Example 7: Hearing maps
If we want to analyse not location assignments but frequency-sensitive 'audio maps' or code-dependent behavior as an I-map (I: interference integral), we need exactly the opposite of before: we need the cross-interference (outside the self-interference radius), see above. We assume that when a tone is recognized, the self-interference integral shrinks to neuron size, so that it no longer carries any image content. So we are only interested in the cross-interference map. It would be as if we zoomed out much further from an image like (g) with v = 100, e.g. to v = 1000: the "G" then melts together into a single point and we see only the cross-interference map, of a noise for example.
To check whether the proportions correspond to nervous reality, we do the following calculation. If we take (arbitrarily) a conduction velocity in the auditory cortex of v = 10 mm/s and a frequency to be mapped of f = 1 kHz, we get a cross-interference radius of
R = v/(2f) = 10 mm/s / (2 * 1000 Hz)
R = 5 µm

In fact, this cross-interference radius is really suitable for reducing the self-interference mapping to neuron size! With this constellation only frequencies, sounds or noises are mapped. They produce an image - but only of the sound. (Have we already understood the potential of this simple calculation?)
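The hearing-map estimate, checked numerically (v and f are the arbitrary assumptions above):

```python
# Example 7 check: cross-interference radius for an auditory map.

v = 10e-3        # assumed conduction velocity: 10 mm/s, in m/s
f = 1000.0       # frequency to be mapped: 1 kHz

R = v / (2 * f)  # eq. (2)
print(R * 1e6)   # about 5 micrometers, i.e. roughly neuron size
```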

Finally, if we ask ourselves why primates hear in the frequency range between 100 Hz and 10 kHz, a new aspect suddenly comes into play: the geometry of the nervous system must match the frequency range so that the cross-interference maps fit into it. This also explains why some animals, for example dolphins or bats, hear in other frequency ranges.

In place of a conclusion
Would you have thought that elementary properties of a nervous system would be so easy to calculate? With every sample calculation I was surprised by the perfect fit of the proportions.

10. Wandering interference integrals - moving projections

Have we ever thought about why we do not see a passing car as single frames, like in a film? What is different about our nervous system in relation to the technical world around us? Why is our thinking not limited to the two dimensions of the film image? Why and how do we perceive the n-dimensional world that surrounds us?

Since we have already recognized that the destination of information in interference networks is determined not by wiring but by delays, we want to look at the influence of a single delay on the interference integral. We choose a test setup similar to the one used for zooming. G-shaped, pulsating pixels serve as the generator. The channels are again projected into receiving fields with forward-running time. We calculate images for delays of a single channel from dt = +4 to -12 ms.

Fig.7: Interferential projection between two neural fields connected via three axons. In a three-channel projection we change the delay dt of one channel: a variable delay dt is switched into the interconnect of that channel. We see that the picture in the receiving field begins to shift when the delay time of that channel varies. The center of the image shifts to the side of the higher delay.

The importance of this simulation can hardly be overestimated. It is sufficient to change conduction velocities or delays (dt) for images in the cortex to begin to wander or move. Incidentally, we can measure the changes indirectly with the EEG: changes in potential (EEG) in the glia cause changes in conduction velocity in dendrites and axons. In the EEG we probably only measure steering potentials for zooming and movement. We do not measure information content in the EEG, but control parameters that determine the paths of information! From the IT perspective, we do not visualize data in the EEG, but addresses!
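The wandering of the projection can be sketched with the same kind of toy grid search as for the zoom. Geometry, speed and the delay value are invented assumptions, and the direction and size of the shift depend on this assumed geometry:

```python
# Movement sketch: adding an extra delay dt to one channel shifts
# the point of best coincidence sideways. All geometry is assumed.
import math

SENSORS = [(-10.0, 0.0), (10.0, 0.0), (0.0, 12.0)]
V = 1.0
SOURCE = (0.0, 4.0)

taus = [math.dist(SOURCE, s) / V for s in SENSORS]

def focus(dt, step=0.05):
    """Best coincidence point when channel 0 gets extra delay dt."""
    shifted = [taus[0] + dt] + taus[1:]
    def spread(q):
        t = [math.dist(q, s) / V - tau
             for tau, s in zip(shifted, SENSORS)]
        return max(t) - min(t)
    pts = [(x * step, y * step)
           for x in range(-160, 161) for y in range(-100, 241)]
    return min(pts, key=spread)

p0 = focus(0.0)   # dt = 0: the point sits at the source location
p1 = focus(2.0)   # extra delay on the left channel: the point shifts
print(p0, p1)
```

A single delay parameter thus moves the whole coincidence point - no rewiring is needed.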

Why is a homunculus needed?

(The homunculus as the projective field for motor and sensory information within the cortex.)
When we recall that target areas of information in interference nets are given not via interconnects but via delays, and we remember Penfield's homunculus, in which many target areas have to be arranged next to one another, we begin to suspect the tremendous achievement that zooming and movement have to provide. Only if each partial map is projected exactly into its target area does the overall system work without confusion.

If we remember that in the thumb experiment it was found back in 1992 that wavefronts orient themselves according to the alignment of the thumb, we also get an inkling of what the sensory and motor homunculi are actually needed for, and why both lie in a precisely defined strip of the cortex. In principle we could assume that the homunculus would be superfluous. But what is this strange interface of the motor and sensory body projections needed for?

Let us assume that a stretching or bending of the spine - as with the thumb - causes the ascending (cortical) projection fields to give way to the sides. Then all the ascending, sensory information in the cortex would arrive in the wrong places! Conversely, all descending motor information would also arrive incorrectly. Instead of the little toe, the thigh would move. To prevent this, special interfaces are required that use control information from the wave field to correct its field alignment. The homunculus in the cortex is apparently used for this purpose. A standardized image, so to speak, with zooming and movement, is passed to it from both the cortex and the spinal cord for further processing.

The information outflow from the spinal cord is directed via an ingeniously simple, hyperbolic projection.

The above figure also shows that simple digital circuits would be able to move images (2D/3D) in space by changing a single interconnect delay. For nerve networks, this circuit provides a basic possibility of following an object parametrically to a certain extent, or of adaptively tracking its change in shape. (In the computer, image movements - as on the canvas - are realized by completely different processes that cannot be compared with nature.)

11. Permutations for channel reduction

(Abstraction for columns of the visual cortex)

Interferential projection forms a type of geometric coding when many channels are arranged close together in the source area. On the retina (eye), around 130 million receptors converge onto one million ganglion cells, i.e. one 'channel' (ganglion cell) collects from about 130 receptors. A corresponding, roughly simplified interference model shows essential interferential properties of such structures. See the following 16-channel projection of a "GH" (below) onto an opposite area (above).

Fig.8: Information reduction between two corresponding interference integrals (I).

If a template is observed at suitable points, interference integrals can be developed that can be synthesized by only very few neurons. The background is the possibility that, given a suitable spatial dimension (inhomogeneity), every mapping is decomposable and reducible to a single neuron; for derivation approaches see [NI93]. We discover Rizzolatti's mirror neurons as a possible biological equivalent.

While our original image (GH below) may consist of around 40 firing neurons, we see only three strongly activated areas (peaks) above. These represent the picture below. If we were to include additional, inhomogeneous fibers, a single point of interference could be found that represents the entire GH. In other words: an abstraction and information reduction take place here. The complex GH below converges in the fire of three neurons above. The maps below and above interfere with each other; one is the counterpart of the other. For more see [NI93] under 'Permutation'.

Fig.9: Interferential coding by permutation. Equivalence of a higher-dimensional image on the left with three lower-dimensional images on the right. According to this principle, a single neuron can reference any complex mapping or sequence of states (we find it in the homunculus, for example). Strictly speaking, however, the principle works only in one direction, from left to right, i.e. from high to low channel numbers.

Regarding the principle of permutations [NI93]: if all transit times between source and sink of the subspaces P12, P23, P34 and P1234 are identical, recoding is possible. Here three interference locations P12, P23, P34 of lower spatial dimension are bound by a location P1234 of higher dimension. We recognize a new problem: while the interconnection works from left to right, things do not work quite as well from right to left (see overdetermination). We either need a synchronization (hardly conceivable here), we have to couple in separately with the same determination (k = d + 1), or we have to delay/integrate in time. At the moment one can only guess at the various meanings of this picture for the neurosciences.
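The binding condition - identical transit times - can be illustrated with a toy coincidence detector. The sketch below uses hypothetical numbers and narrow Gaussians as stand-ins for spikes; it treats P1234 as an AND-like unit that fires only if the pulses from P12, P23 and P34 arrive simultaneously.

```python
import numpy as np

# Sketch of the permutation principle: a unit at P1234 binds three
# lower-dimensional interference locations. If the transit times from
# P12, P23, P34 to P1234 are identical, the pulses coincide there and the
# AND-like unit fires; with unequal transit times, the binding fails.
t = np.linspace(0.0, 10.0, 1001)

def spike(t0):                  # narrow pulse fired at time t0
    return np.exp(-((t - t0) / 0.05) ** 2)

def binds(transit):
    # each source fires at t = 2; arrival at P1234 after its transit time
    arrivals = [spike(2.0 + d) for d in transit]
    coincidence = np.prod(arrivals, axis=0)      # AND-like interference
    return coincidence.max() > 0.5

print(binds([1.0, 1.0, 1.0]), binds([1.0, 1.3, 1.6]))
```

With equal transit times the product reaches 1 at the common arrival time; already a few pulse widths of mismatch suppress the coincidence almost completely.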

Interpretation for cortical columns
If we choose a local neighborhood of all neurons in the receiving field, we could choose parameters so that each neighborhood is mirrored upwards around an ascending axon. The upper map would then look as if viewed through a pane of glass structured with bubbles: the lower detail is reflected in each bubble, but overall the map is reproduced unmirrored. A column organization becomes visible.
There is much evidence that this is the reading of the visual cortex.

Another reading arises with global coupling, as shown in the picture. Completely different image qualities emerge here, indicating mechanisms of abstraction.

On the one hand, the picture illustrates the inevitable formation of 'columns' around ganglia; on the other hand, the 1:130 ratio can be used to determine all parameters that contribute to the calculation of the retina - ganglion - visual cortex interference system.

Since we know that a single neuron cannot distinguish whether it is processing information from the eyes, ears, nose, speech organ or locomotor organs - it is always only pulses that it 'sees' - we can create adequate models for speaking/listening or observing/performing movement. In all cases, a more complex interference integral is mapped to a more abstract one by means of interferential permutation (for more see manuscript Neural Interferences NI93). It does not matter whether the origin of the maps comes from cross-interference (audio maps, spectral maps, behavior maps) or from self-interference (images). Only the space-time parameters between the source field, channels and sink field are essential for the calculation.

12. Overlaid interference maps

How is it possible that millions of sensors in our legs project mirror-inverted onto the sensory part of the homunculus without wiring errors, without one or the other circuit error leading us to believe that the left big toe is right and vice versa? Does nature have a code at its disposal to interweave images of thoughts or to combine ideas with one another?

To check this, we simply add the time functions of three channels in each of two generator fields. We append the channel data sets of a previously generated 'g' and 'h'.

Fig.10: Projection of two generator spaces (above) onto one detector space (below). Black-marked places pulse. The time function data sets of the generator spaces were appended before the reconstruction.

In the detector space, the images overlap in a most remarkable way. It can no longer be traced from which source image the respective excitation in the detector space originates. Here, for the first time, two images merge into one. As C. S. Peirce (1839-1914) remarked in 1902: "All thought is in signs".

What does semiotics mean in relation to our interference integrals? A heard word forms a sound map, see above. This is associated with a pictorial map, called an idea of the object. Both can then also associate with a script map via permutation - however, it gets more complicated here.

In principle, time function bundles can technically be attached to one another (appended) or added to one another (summed). (Nerve cells can only add, not append.) The difference between the two methods is that the pulse spacing becomes smaller when adding, roughly halving compared to appending, with the consequence that the cross-interferences come nearer.
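The halving of the pulse spacing can be checked in a few lines; the spike times here are random stand-ins, not recorded channel data.

```python
import numpy as np

# Sketch: two pulse trains on one channel. "Append" plays them one after
# the other; "add" (superposition, as a nerve fibre must do it) merges
# them in time, so the mean pulse spacing roughly halves and the
# cross-interferences move closer together.
rng = np.random.default_rng(0)
g = np.sort(rng.uniform(0.0, 100.0, 20))   # spike times of pattern 'g' (ms)
h = np.sort(rng.uniform(0.0, 100.0, 20))   # spike times of pattern 'h' (ms)

appended = np.sort(np.concatenate([g, h + 100.0]))   # h played after g
added = np.sort(np.concatenate([g, h]))              # superposition in time

gap_app = np.mean(np.diff(appended))
gap_add = np.mean(np.diff(added))
print(gap_app, gap_add)   # the added spacing is roughly half the appended one
```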

The more time functions are added per channel (several sensory impressions at the same time), the denser the pulse trains and the closer the cross-interference distances come into the picture. We all know this phenomenon. In the event of an accident, our thoughts tumble over one another and we can no longer think clearly, because the cross-interferences occupy the field of vision completely and confuse us. Because of field overflow, under such circumstances the nervous system is completely blocked; it is really not possible to do or to memorize anything, see details in the pain simulation.

13. Topological inseparability

What actually happens if we move the source locations of the channels around in a detector field at will? Since we cannot imagine such interference integrals, we have to simulate them. To do this, we again use the channel data set of the conjugate mapping from Fig.10 and change the source locations in the detector space.

Fig.11: Topological projections in different detector spaces. Variation of the channel arrangement causes partly zooming and moving effects, partly image delimitation, image distortion or multiple appearance. The topological cohesion of the projections cannot be resolved. The images 'g' and 'h' merge inseparably.

The resulting interference locations look as if they are held together by a rubber net. It can be seen that neighborhoods do not tear apart. The local neighborhood is always preserved; it cannot be separated.

From the chapter on movement we learned that, unlike in a doorbell system, in interference networks a wire does not determine the destination of the information flow. It is delays, and the simultaneity of the arrival of several impulses, that define the destination.

In interference networks, excitations arise only at interferentially defined locations. Consequently, the transit times, the source and sink, as well as the (temporal) channel geometry of the transmission lines determine the mapping of a generator map onto a detector map. Any number of branches can be switched into the transmission lines. In the case of interferential transmission, the address of the data depends only on the location of the interference, never on the geometry or the fanning out of the pathways (nerve ramifications).

Note the problem that arises with a fiber bundle that transports images. Parametric fluctuations in the conduction speed can cause a movement of the interference integral (I) in the bundle, which means that the desired mapping can arise far outside the fiber axis. However, if the image physically leaves the neural space or the interference field, it disappears.

It is to be feared that this problem is the cause of many nerve diseases. Long before a nerve dies, its parameters change: and for our images (I) to slip, a tiny change in the conduction speed or in the pulse pause is sufficient.

But if images slip, they slip either out of the network into nowhere, or within the network into a neighboring (partial) map: we may call the first case, by analogy, forgetting; the second, confusion. So maybe there is still hope for Alzheimer's and Parkinson's? Do both diseases initially have the same cause? If large areas of nerve tracts are affected, it is to be expected that images will initially become blurred as a result of the change in the delay structure.

By the way, what does "out of the network" mean? Please remember that every n-channel interference location is defined by an (at least) n-digit mask with n delays. If a channel breaks due to a synaptic failure, the information would no longer be available (forgotten). That is why many more channels have to be involved. But be careful: reliability in the interference network is bought at the price of overdetermination (number of channels n > dimension d + 1)! And overdetermination limits the possibilities for wide zooming and movement.

What does this mean, for example, for a boxer whose interference network is permanently damaged? He has to train strongly overdetermined images. (We do not yet know how he does it.) If we consider that a high degree of overdetermination blocks cross-interference and costs flexibility in zooming and movement, the boxer is in a hopeless situation: either his nerve network becomes inflexible (overdetermined) due to the blows received - he becomes the "taker type". Or he deals out blows, receives few, and remains mentally flexible (intelligent). If we think of the biographies of today's great boxers (Cassius Clay alias Muhammad Ali), we suspect that there "could be something to it".

Another problem faced by a fiber bundle also becomes clear. In order to transmit images (I) efficiently (think of the retina - the visual pathway - the visual cortex), one million fibers must have exactly matched transit times or conduction speeds. If not, a map moves into an adjacent map and there is confusion, see above. It is an interesting task for neurobiologists to investigate which mechanisms actually ensure this adjustment.

Multiple sclerosis is an example of what happens when conduction velocities change. Synapses or nerve fibers may also die here. However, as a result of slow myelin dissolution (the insulation of the fibers), the conduction speed changes measurably: it decreases; fibers that are no longer myelinated become slower by about a power of ten. If this process does not take place uniformly in a fiber bundle, interference locations wander or zoom away, or the images disappear. This is called paralysis. Quite apart from the fact that the cross-interference radius then becomes smaller if the firing rate remains the same - the system can then no longer clearly identify or control locations. Medicines that reduce the firing rate would help. Incidentally, from the point of view of interference networks, these are painkillers - see the pain simulation for more.

Aha, you might say now: apparently muscles are also controlled by interference and not by bell wires! This would not be surprising; after all, nerve cells live for only about seven years. And only interference networks are fail-safe. You probably know the consequences if a bell wire fails in the house: the postman can press the bell button, but the bell stays silent. That cannot happen in the nervous system.

It is also possible to narrow the interference location by adding further interconnects - this leads to the questions of overdetermined pulse projections, which can only be resolved by folded n-dimensionality (for more see NI93).

14. Spatio-Temporal Maps

(Sounds, processes, codes: behavior)

How can tones or temporal behavioral patterns be stored in interference networks? Of course, also as interference integrals (I). Since a nerve cell does not see where a time function comes from - whether from the nose, the leg, the eye or the ear - it always does the same thing: it integrates excitation.

Perhaps we remember Young's double-slit experiment. It produces interference lines whose spacing encodes the corresponding frequency.
We can try to simulate this with PSI-Tools. To make it easy, we test what the interference patterns of a channel that interferes with itself (multiple times) look like.

At the maximum (self-interference), waves i interfere with themselves (shown as i*i). But they also interfere with their predecessor and successor (cross-interference), shown as i*(i-1) or i*(i+1), etc. In the case of phased-array antennas or microphone arrays, the cross-interference locations are called 'side lobes'. There is an essential difference between the two: while images emphasize self-interference, frequency maps in the nervous system, for example, need only cross-interference.

(Note: the simulation results shown here were created exclusively using time functions, it appears as a non-materialistic field theory).


Fig.11a, 11b: If the geometric size of the interference field is greater than the wavelength (conduction speed times pulse period), cross-interferences between pulses with originally different time references become visible. It becomes clear that this could be the way in which biology stores or evaluates frequencies, frequency-coded sensor amplitudes or serial codes by means of their location assignments. For the two-channel case, we get the well-known double-slit pattern, but here linked to the presence of interconnecting wires. A virtual AND gate at a detector node would pulse only at locations with high interference values; at locations with low interference it would remain silent.

Fig.12: Interference maps of a periodic pulse train that is branched from several source locations and directed into a detector field. With suitable dimensioning, pulses interfere with predecessors and successors, creating an interference pattern with a maximum and side lobes. The maximum characterizes the self-interference (interference of wave i with i); the maxima around the central self-interference location characterize cross-interferences. Depending on the number and arrangement of the channels, different images are created.

In the case of non-periodic pulse trains, the interference pattern characterizes the non-periodic code.
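The double-slit pattern of Fig.11a/b can be imitated numerically. The sketch below uses assumed units and an invented geometry: it feeds one periodic pulse train through two channels and computes an AND-like interference integral along a line of the detector field. Maxima appear where the path difference is a whole pulse period (self- and cross-interference), minima where it is half a period.

```python
import numpy as np

# Two channel end points ("slits") emit the same periodic pulse train;
# the interference integral is the time average of the AND-like product
# of both delayed trains at every field location.
period = 1.0                        # pulse period (ms)
v = 1.0                             # conduction speed (mm/ms)
s1, s2 = (-2.0, 0.0), (2.0, 0.0)    # the two source nodes (mm)

def pulse(tt):                      # periodic pulse of width 0.1 ms
    return (np.mod(tt, period) < 0.1).astype(float)

t = np.linspace(0.0, 20.0, 4000)    # integration time
xs = np.linspace(-10.0, 10.0, 2001) # line y = 5 across the detector field
y = 5.0
integral = []
for x in xs:
    d1 = np.hypot(x - s1[0], y - s1[1]) / v
    d2 = np.hypot(x - s2[0], y - s2[1]) / v
    integral.append(np.mean(pulse(t - d1) * pulse(t - d2)))
integral = np.asarray(integral)
print(integral.max(), integral.min())
```

The central maximum (equal paths) is self-interference; the repeated maxima at path differences of one, two, ... periods are the cross-interference side lobes whose spacing encodes the pulse frequency.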

15. On the role of overdetermination

and something about n-dimensional perception

Just as a four-legged table or chair wobbles on a plane - three legs already determine it - interference locations are given by a defined number of waves from different channels in relation to the spatial dimension.

If we go further inductively, we get a d-dimensional mapping with

(4) k = d+1 (dimension theorem, source: NI93)

with k the channel number and d the dimension of the space.
In words: the dimension of the delay space (especially in the case of highly inhomogeneous delay spaces: the nervous system) must increase with the number of channels in order to avoid overdetermined (in optics we would say: blurred) images.
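A small numerical illustration of the dimension theorem in the plane (d = 2), with an invented geometry: matching the delay differences of two channels still leaves a whole hyperbola of candidate locations, while the third channel (k = d+1 = 3) reduces the set to a few grid points around the true target.

```python
import numpy as np

# Three channel origins and one target; 'arrival' are the true running
# times. We then count grid locations whose delay differences match the
# observed ones for two channels, and for all three.
v = 1.0
sources = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0]])
target = np.array([0.4, 1.3])
arrival = np.hypot(*(target - sources).T) / v     # true running times

xs = np.linspace(-3.0, 3.0, 601)
ys = np.linspace(-3.0, 3.0, 601)
X, Y = np.meshgrid(xs, ys)
d = np.stack([np.hypot(X - s[0], Y - s[1]) / v for s in sources])

tol = 0.01
# candidate locations consistent with the observed delay differences
two = np.abs((d[0] - d[1]) - (arrival[0] - arrival[1])) < tol
three = two & (np.abs((d[0] - d[2]) - (arrival[0] - arrival[2])) < tol)
print(two.sum(), three.sum())   # a whole curve vs. a few points
```

With only two channels the 'interference location' is a curve (the wiping image); the third channel pins it down, and every further channel overdetermines it.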

To put it another way, network structures have to be formed which, due to their inhomogeneity, offer the possibility of accommodating higher numbers of channels. Now, a closely meshed network has the property of forming Huygens' elementary waves. But then the inhomogeneity is smoothed out by the forming wavefronts. So that this does not happen, evolution designed layers, fiber bundles and furrows. It creates the greatest possible disturbances to prevent wavefront formation. It genetically pre-determines the cortex in such a way that, if possible, no feedback arises in too densely networked structures.
Oh yes: and with neighborhood inhibition it ensures that two closely adjacent neurons cannot excite each other in the first place! What an ingeniously devised mechanism our nervous system is! The reward for all this effort is certain: the system is not tied to two dimensions; the perception of three or four dimensions (within the narrow geometric and temporal limits of the network) causes no extra effort. There could even be a seventh sense - theoretically, even one with eight channels could not be ruled out.

If we use fewer than k = d+1 channels, the result is wiping images (oh yes, we also need them: their trajectories can be used to detect movements, see the virtual experiments or one of the more recent papers).

If we use more than k = d+1 channels, then overdetermined images arise. We say the image is blurred at the edges (as in optical images of all kinds). An overdetermined image can also no longer be moved so easily, see the chapter on movement.

But this applies only to projections (e.g. in the nervous system or in optics). If we use negative delays for a reconstruction of I, as in the Acoustic Camera, then even images with very high channel numbers can be mapped to two or three dimensions, since overdetermination is completely eliminated by the time-compensating approach with negative delays.

In the nervous system, however, nature has to use a trick: As in optics, overdetermined images (k > d+1) can in principle only be shown sharply in limited zones due to the inhomogeneity of the delay space geometry, see [NI93].

Conclusion: A physically three-dimensionally structured interference network can be represented as an n-dimensional network through convolution and inhomogeneity. With only five channels, an interference network can, for example, display four-dimensional images (3-dimensional space plus time).

It becomes clear why in the course of our individual genesis we first have to be made aware by teachers that a living world consists of past, present and future. An interference network requires five channels to map four-dimensionally.

The difference between long-term and short-term memory is also documented here. It is sufficient to recognize that the physical limits of the interference network (short-term memory) are exceeded somewhere. Everything then has to be stored differently, for example through conceptual associations (long-term memory).

Or: As long as neurocomputing deals with homogeneous networks, efficiency in information reduction can hardly be expected. Only the temporal (delays) and spatial inhomogeneity of networks through cross-connections opens up the fascinating possibilities of the nervous system.

A proof of the representation of higher dimensions (overdetermination) could be provided, for example, if it were possible to detect a signal of the same origin on a number k of different fibers, where k should be greater than 4.

16. Time reversal and image mirroring

and the role played by overdetermination

For Fig.13, a four-channel projection and a four-channel reconstruction (by reversing the time functions of the channels) were calculated using PSI-Tools. While the reconstruction (above) is sharp everywhere, the projection (below) shows the off-axis blurring known from optical images. As known from an optical lens system, it also appears as a mirror image.

The same set of channel data (time functions) was used to simulate both images. The only difference between reconstruction and projection (the pictures above and below) is a time reversal of the four time functions. In the upper part we look backwards into the channel data and see the laterally correct GH of the generating field. In the lower part we let the time function waves ripple forward in time across the image field and thus see, as interference integral, a GH that is sharp only near the axis and mirror-inverted. Since we are projecting onto a 2-dimensional image with 4 channels, the overdetermination condition is fulfilled here.

During the reconstruction, the overdetermination condition is apparently also fulfilled, but the time inversion (through inverse delays or through time reversal) still leads to a perfect image - that was one of the basic ideas that led me to the development of Acoustic Cameras with a high number of channels. The second essential basic idea was that a clean, non-mirrored image can only be created through time inversion.

Fig.13: Projection and reconstruction of the same time function set. Above: time-inverse interference reconstruction (by time reversal of the time functions); below: forward-running interference projection (mirror image with off-axis blurring)

If more than three channels are used, ambiguities generally arise at the interference location. The projection (below) shows that the highest image quality is achieved in the area around the central axis of symmetry, thanks to a higher correspondence of the delays on all paths. The figure was generated from a reconstruction (above) and a projection (below) of the same channel data onto an identical detector space. It clarifies the nature of neural mapping: while the (technical) reconstruction appears undisturbed and laterally correct, the (natural) projection appears distorted (overdetermined) and reversed. We remember that optical projections, too, appear sharp only in the vicinity of the axis.
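The effect of negative delays can be reproduced in a few lines. The sketch below assumes a hypothetical four-microphone line array and a single firing source; it is not the Acoustic Camera code. Compensating each recorded channel with its negative running time (delay-and-sum) focuses the field exactly onto the emitting location.

```python
import numpy as np

# Interference reconstruction sketch: one source fires a narrow pulse,
# four microphones record it; sampling each channel at t = t_fire + d(x,y)
# compensates the positive delays with negative ones, so all channels
# coincide only at the true source location.
v = 1.0
mics = np.array([[-3.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
src = np.array([0.8, 2.0])                        # true (unknown) source

t = np.linspace(0.0, 20.0, 4001)
def spike(t0):                                    # narrow pulse fired at t0
    return np.exp(-((t - t0) / 0.05) ** 2)

d_src = np.hypot(src[0] - mics[:, 0], src[1] - mics[:, 1]) / v
channels = [spike(5.0 + d) for d in d_src]        # what each mic records

xs = np.linspace(-4.0, 4.0, 81)
ys = np.linspace(0.5, 4.0, 71)
img = np.zeros((len(ys), len(xs)))
for i, yv in enumerate(ys):
    for j, xv in enumerate(xs):
        d = np.hypot(xv - mics[:, 0], yv - mics[:, 1]) / v
        # negative-delay compensation and summation over all channels
        img[i, j] = sum(np.interp(5.0 + dk, t, ch)
                        for dk, ch in zip(d, channels))
peak = np.unravel_index(np.argmax(img), img.shape)
print(xs[peak[1]], ys[peak[0]])
```

The maximum of the reconstruction lands on the source; a forward projection of the same data would instead build its maximum on the mirrored side of the array.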

Another peculiarity of pulse interference becomes visible from this observation: a mapping with k = d+1, which is neither under- nor overdetermined, allows only specific time functions with long pauses. Since images cannot be arbitrarily overdetermined in the case of forward time (projections) in the nerve network, nature was forced to discover the Dirac-like time function type for image-like information processing (I).

The images from the Acoustic Camera (almost sinusoidal time functions), on the other hand, required a different trick: here negative delays (-T) had to be used to compensate for the positive delays (+T, the distances between source and microphone). In 1993 I still called the corresponding algorithm the mask algorithm, later interference reconstruction. See the PPT animations for details.

17. Software environment

At the beginning, in 1993, there was the hope of being able to make the first "pictures of thoughts" from spike-like time functions of the nervous system. Since nobody can calculate interference integrals in his head, software had to be developed. It was consequently called "Bio-Interface". It had to generate time functions from bitmaps, calculate interference integrals from time functions, and record time functions. In addition, channel data had to be displayed and time functions had to be inverted in order to be able to calculate in a mirroring/non-mirroring manner (projection/reconstruction). This minimal range of functions was required to be able to test the tool on itself.

In contrast to ANN simulators, (unhindered) wave propagation was calculated over two fields (generator and detector). So it was not a neuro-simulator. The interference integral of all channels is calculated in each pixel of the detector field. But there was no way to include refractory cancellation.

When it became clear that there was no way of obtaining qualitatively suitable, high-channel spike recordings from nerve fibers, while on the other hand the "Bio-Interface" had created the first acoustic images, the name was neutralized in early 1996 and changed to "PSI-Tools" (Parallel and Serial Interference Tools). The era of acoustic images and films began. The work on neural networks slowly faded.

From 1998, PSI-Tools was developed further and focused on acoustic images only; from around 2000 it was called "NoiseImage". It received a USB camera connection for the automatic superimposition of the acoustic image on the optical image, but other options, such as the time reversal of the channel data or the various calculation algorithms, were omitted.

Fig.14: Original tool with which the first interference integrals were calculated and which made the first acoustic images and films (Bio-Interface, Sabine Höfs and Gerd Heinz).

The interference transformation can be carried out in a detector space with an arbitrarily selectable channel arrangement. PSI-Tools could only calculate the laterally correct interference reconstruction; the channels could be time-inverted to calculate mirrored interference projections.

For test purposes and for general simulation tasks, a channel data synthesis based on a generator field available as a bitmap (see Fig.4a) was also implemented. Black pixels act as firing neurons with predefined and loadable time-function output.
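Such a channel-data synthesis can be sketched as follows; the bitmap, pick-up points and pulse shape are invented for illustration. Every firing (black) pixel contributes one pulse to each channel, delayed by its running time to that channel's pick-up point.

```python
import numpy as np

# Channel-data synthesis sketch: black pixels of a generator bitmap act
# as firing neurons; each channel records the sum of their pulses,
# delayed by the pixel-to-pickup running times.
bitmap = np.zeros((8, 8))
bitmap[2, 2] = bitmap[5, 4] = 1                  # two firing pixels
pickups = [(0.0, -2.0), (4.0, -2.0), (8.0, -2.0)]  # channel origins
v = 1.0
t = np.linspace(0.0, 30.0, 3001)

def pulse(t0):                                   # predefined output pulse
    return np.exp(-((t - t0) / 0.2) ** 2)

channels = []
for px, py in pickups:
    ch = np.zeros_like(t)
    for iy, ix in zip(*np.nonzero(bitmap)):
        # one delayed pulse per firing pixel
        ch += pulse(np.hypot(ix - px, iy - py) / v)
    channels.append(ch)
print(len(channels))
```

Feeding such synthesized channel data back into the reconstruction closes the loop that allowed the tool to be tested on itself.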

With the hardware data recorder connected, it was possible to record high-channel data streams. Hardware functions were digitally adjustable and storable. The gain could be varied over five powers of ten, starting at 500 µV full scale. Hardware high-pass and low-pass filters allowed recordings in a selectable range from 0.05 Hz to 50 kHz. The channel amplifiers were noise-optimized for high-resistance sources (15 kOhm for EEG, 2 kOhm for electret microphones).

Since 1994, PSI-Tools worked with a 16-channel data recorder UEI-DAC WIN30-DS from UEI (distributed by National Instruments, ISA board). Preamplifiers were developed in-house, see pictures of the old hardware.

Attempts to build a GUI with LabWindows were canceled; LabWindows was too slow. Sabine Höfs continued to develop Bio-Interface/PSI-Tools under Borland C (Windows 3.11). Around 1995, the system switched to Microsoft MS-C under Windows 95. See also a description with pictures and the old help files of the software, as well as the functions with a download option for PSI-Tools. Various verifications were made using the software.


Thanks to everyone who worked on PSI-Tools and contributed a lot of initiative to develop the first interference simulator. Special thanks to our hard-working 'bee', Sabine Höfs (née Schwanitz), who programmed PSI-Tools and thus enabled the basics of "interferential neuroinformatics" as well as "acoustic photo and cinematography" in the first place. Thanks to Dirk Döbler, who integrated the USB camera into NoiseImage and thus enabled the commercial use of the new technology. Last but not least, thanks to Carsten Busch and Sven Tilgner, who devotedly looked after the respective hardware developments.


* On this page interesting pictures were originally placed for the press or colleagues. This may justify the sparse commentary and the sometimes spartan appearance. But because the site offers a brief, partly prosaic overview, it should stay that way.

** The original intention was to translate the Greek word 'holos' for 'whole' into the Latin 'totos' for 'whole'. Apparently this failed because of the dictionary used. But a child needs a name. Now it's just the protected images by interference.

Please send comments, corrections or notes to

File created Sept. 1, 1995, redesign 25.3.2013
HTML redesign and some additions October 20, 2020
English translation using June 12, 2021

Access counter since August 26, 1996