At their most abstract level, the logic gates that make up a digital computer are machines for destroying information. That might not be immediately apparent, but take a look at this (image from Wikipedia):
Two inputs; one output. Four possible input states collapse into two possible output states, so at every operation at least one bit of information is lost forever in an irreversible process.
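To put a number on that, here's a quick back-of-the-envelope check in Python (my own illustration; I'm taking NAND as the example gate, since the particular gate pictured isn't essential to the argument). With uniformly random inputs, the gate's inputs carry 2 bits of entropy while its output carries only about 0.81, so on average more than a full bit is wiped out per operation:

```python
from math import log2

# All four input pairs (a, b), assumed equally likely.
inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
outputs = [1 - (a & b) for a, b in inputs]  # NAND truth table: 1, 1, 1, 0

def entropy(samples):
    """Shannon entropy (in bits) of a list of equally likely samples."""
    probs = [samples.count(v) / len(samples) for v in set(samples)]
    return -sum(p * log2(p) for p in probs)

print(entropy(inputs))                      # 2.0 bits in
print(entropy(outputs))                     # ~0.811 bits out
print(entropy(inputs) - entropy(outputs))   # ~1.189 bits destroyed per operation
```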
The bigger and faster a computer is, the more information it destroys every second. Each operation takes time and uses energy, and that places limits on the capabilities of our present-day computers.
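Those limits can be made concrete. Landauer's principle says that irreversibly erasing one bit must dissipate at least k·T·ln 2 of energy (where k is Boltzmann's constant and T is temperature), roughly 3×10⁻²¹ joules at room temperature. Here's a rough sketch of the thermodynamic floor this implies; the gate count and clock rate below are made-up round numbers, purely for scale:

```python
from math import log

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
T_ROOM = 300.0              # room temperature, kelvin

# Landauer's principle: minimum energy to erase one bit of information.
landauer_joules = K_BOLTZMANN * T_ROOM * log(2)
print(f"Landauer limit: {landauer_joules:.2e} J/bit")  # ~2.87e-21 J

# Hypothetical chip (illustrative numbers only): a billion two-input gates,
# each destroying about one bit per cycle at 3 GHz.
gates, clock_hz = 1e9, 3e9
print(f"Thermodynamic floor: {gates * clock_hz * landauer_joules:.2e} W")  # ~8.6e-3 W
```

Real chips dissipate many orders of magnitude more than this floor, but the point stands: every bit destroyed carries an unavoidable energy price.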
Meanwhile, the neurons that make up the human brain look like this (image from Wikipedia):
Thousands of inputs (dendrites); one output (axon).
Some people have argued that the brain is much like a computer; others have focused on the ways they differ. Those arguments don't concern me here. At this level of abstraction, it doesn't matter what happens inside a neuron; what matters is that there are many inputs and only one output.
The neuron, like a logic gate, destroys information. And with thousands of inputs instead of just two, it destroys information on a vastly larger scale: where a two-input gate discards at least one bit per operation, a thousand-input neuron collapses 2^1000 possible input states into a single output bit, discarding at least 999 bits.
So my naive question for today is – could we build integrated circuits with neuron-like logic gates that have many inputs for each output?
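To picture what such a gate might compute, here's a minimal sketch of a threshold gate, the classic many-input, one-output abstraction in the spirit of a McCulloch–Pitts neuron (the input count, weights, and threshold below are arbitrary illustrative choices, not a proposed design):

```python
def threshold_gate(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A 1000-input majority gate: fires when more than half its inputs are active.
n = 1000
inputs = [1] * 600 + [0] * 400
print(threshold_gate(inputs, [1] * n, n / 2))  # 1: 600 of 1000 inputs active
```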
How might we arrange these gates into useful structures? And would this enable a new generation of computers with orders of magnitude more processing capacity than the current paradigm, which is built on very simple, highly inefficient logic gates?
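As a hint that the arrangement question has answers, consider the simplest neuron-like gate of all: a 3-input majority gate. Fix one input to 0 and it computes AND; fix it to 1 and it computes OR; together with NOT, that's enough to build any Boolean circuit. A quick sanity check of those two identities:

```python
def maj3(a, b, c):
    """3-input majority gate: output whichever value at least two inputs hold."""
    return 1 if a + b + c >= 2 else 0

for a in (0, 1):
    for b in (0, 1):
        assert maj3(a, b, 0) == (a & b)  # one input tied low  -> AND
        assert maj3(a, b, 1) == (a | b)  # one input tied high -> OR
print("MAJ3 reproduces AND and OR")
```

As it happens, logic built from majority gates is an active research direction for some emerging device technologies, so the idea isn't as exotic as it might sound.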