Logic gates and neurons

At their most abstract level, the logic gates that make up a digital computer are machines for destroying information. That might not be immediately apparent, but take a look at this (image from Wikipedia):

The standard symbol for an AND gate: two input lines entering a D-shaped outline on the left, and a single output line leaving on the right.

Two inputs; one output. At every operation, one bit of information is lost forever in an irreversible process.
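
To see why, here is a rough sketch (Python, purely for illustration) of the four possible inputs collapsing onto two outputs:

```python
# Illustrative only: a two-input AND gate maps four input states onto two
# output states, so the inputs cannot be recovered from the output alone.
from itertools import product

for a, b in product([0, 1], repeat=2):
    print(f"{a} AND {b} -> {a & b}")

# 0 AND 0 -> 0
# 0 AND 1 -> 0
# 1 AND 0 -> 0
# 1 AND 1 -> 1
#
# Given an output of 0, three different input pairs are possible;
# the distinction between them is the bit that has been destroyed.
```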

The bigger and faster a computer is, the more information it destroys every second. Each operation takes time and uses energy, and that places limits on the capabilities of our present-day computers.

Meanwhile, the neurons that make up the human brain look like this (image from Wikipedia):

At one end of an elongated structure is a branching mass. At the centre of this mass is the nucleus, and the branches are dendrites. A thick axon trails away from the mass, ending in further branches labeled as axon terminals. Along the axon are a number of protuberances labeled as myelin sheaths.

Thousands of inputs (dendrites); one output (axon).

Some people have argued that the brain is much like a computer; others have focused on reasons why they are different. These arguments don’t concern me here. At an abstract level, it doesn’t matter what happens inside a neuron. What matters here is that there are many inputs and only one output.

The neuron, like a logic gate, destroys information. And with thousands of inputs instead of just two, it’s massively more efficient at destroying information.

So my naive question for today is – could we build integrated circuits with neuron-like logic gates that have many inputs for each output?

How might we arrange these logic gates into useful structures? And would this enable a new generation of computers capable of orders of magnitude more processing capacity than the current paradigm, which is based on very simple, highly inefficient logic gates?

30 responses to “Logic gates and neurons”

  1. (As an aside, we believe information, like energy, is conserved and never truly lost. OTOH, getting it back can be decidedly non-trivial. 😀 )

    A neuron is a very sophisticated processing device with a complex output signal that may encode a great deal of the information from its inputs. They also “remember” and integrate information about their history.

    I think their only real application would be in neural nets. It’s hard to imagine any other application for them. The more sophisticated and application-specific a component is, the less utility it has. (There are car mechanic tools that apply to a single purpose on a single model.)

    I’ve seen eight-input AND gates, and they’re a lot less useful than one might think. They’re best for signalling when a byte is all-ones… and not much else.

    If you need a six-input AND, you have to tie off two inputs, or four if you want a four-input AND. Alternatively, you can build any N-input AND by cascading two-input ANDs.
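
    Very roughly, the cascade looks like this (Python standing in for pseudocode; and2 here is just a placeholder for a single two-input gate):

    ```python
    # Sketch only: an N-input AND built by chaining two-input AND gates,
    # i.e. ((a AND b) AND c) AND d ...
    from functools import reduce

    def and2(a: int, b: int) -> int:
        return a & b

    def and_n(*inputs: int) -> int:
        return reduce(and2, inputs)

    print(and_n(1, 1, 1, 1, 1, 1))  # six-input AND -> 1
    print(and_n(1, 1, 0, 1))        # four-input AND -> 0
    ```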

    It also turns out that, in digital logic, you don’t have many occasions where many signals combine to one in any simple way. There are circuits that effectively combine many inputs into one, but there are many, many such circuits, so you can’t design any one-chip device to meet all these needs.

    Or, rather, you can. It’s called a CPU (or even just a PLD). 🙂

    Which, in many regards, is the best analogue for a neuron.

    The reason logic components are simple is that, like alphabet letters, they are the basic building blocks for more sophisticated circuits. A flip-flop contains a number of gates, a binary counter even more. But even those are basic building blocks.

    CPUs can contain billions of transistors, most of which are configured into NAND or other gates. So there are devices that effectively contain millions of AND gates.

  2. I think we can and will, eventually. We haven’t had a whole lot of incentive to go more complicated than traditional logic gates because Moore’s Law kept improvements coming year after year. But Moore’s Law appears to be running out of steam, at least in the sense of increasing the number of transistors on a silicon chip. This will mean that we’ll need to start exploring alternate architectures. The next decade ought to be interesting in computer design.

    BTW, a neuron actually has thousands of inputs and thousands of outputs (the axon splits off into thousands of axon-terminals). Each input comes from another neuron across a synapse. Each output also goes to another neuron, also across a synapse. There are several types of synapses and they exist in varying strengths, all of which influence what the neuron does. The reason I note this is because a neuron is almost like its own mini-processing core. (Although it would be wrong to get too carried away with this comparison.)

    The equivalent of logic gates in neurons would be at the synapse level. Two adjacent synapses, neither of which is strong enough to start an action potential (a nerve impulse) but strong enough together, are effectively an AND gate. Two adjacent synapses, both strong enough individually to start an action potential, are effectively an OR gate. An inhibitory synapse is usually compared to a NOT gate (although the comparison here is not completely accurate).
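
    To make that concrete, here’s a toy threshold-unit sketch (Python; the strengths and threshold are made-up illustrative values, not biological ones):

    ```python
    def fires(strengths, inputs, threshold=1.0):
        # Fire only if the summed strength of the active synapses reaches the threshold.
        # An inhibitory synapse would simply contribute a negative strength.
        return sum(s for s, active in zip(strengths, inputs) if active) >= threshold

    # Two weak synapses: only both together reach threshold -> behaves like AND.
    print(fires([0.6, 0.6], [1, 0]))  # False
    print(fires([0.6, 0.6], [1, 1]))  # True

    # Two strong synapses: either one alone reaches threshold -> behaves like OR.
    print(fires([1.2, 1.2], [0, 1]))  # True
    ```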

    Taken together, with thousands of input synapses and thousands of output synapses to downstream neurons, a lot of complex computation happens in a single neuron. It also shows that there remains a lot of room for growth in engineered processors.

    • “Two adjacent synapses, neither of which is strong enough to start an action potential (a nerve impulse) but strong enough together, is effectively an AND gate.”

      Fascinating! I’ve never heard about this, and Google doesn’t return anything helpful for [adjacent synapses]. Do you have a link or something? I’d like to know more!

      • I got the concept from the neuroscience books I read. It was a model developed by Warren S. McCulloch and Walter Pitts in the 1940s. If you google “McCulloch Pitts neurons” you should see info on the theoretical models. Here’s one that came up for me: http://www.i-programmer.info/babbages-bag/325-mcculloch-pitts-neural-networks.html

        The term “adjacent synapses” was my own improvised description (which is why you didn’t get anything when you searched for it). I used it because, at least in the case of an AND gate, it seems like the two synapses would have to be near each other on the neural membrane for their signals to trigger an action potential together that they couldn’t trigger individually.

        • Oh, okay, theoretical models! (MCP neurons even turn out to be mathematical models!)

          Judging by your link, an AND gate would have to be a two-input neuron. I didn’t see anything suggesting a sub-set of adjacent synapses on one neuron implemented logic on their own — all the logic seems to be in the neuron.

          Which would match biology. The neuron state integrates all the dendrite synapses.

          It’s an interesting idea, though. Neurons implementing one level of logic with groups of synapses capable of implementing some kind of low-level sub-logic! Complexity on top of complexity! 🙂

        • “I didn’t see anything suggesting a sub-set of adjacent synapses on one neuron implemented logic on their own”

          True, for that model. It was constructed in 1943 as an exercise to understand how information might be processed by the nervous system. It was oversimplified, even back then. Biological neurons are obviously far more complicated.

          For what follows, a neuron’s action potential, its nerve impulse, is best thought of as a chain reaction that propagates along the membrane of the neuron. (Hank Green has an excellent YouTube primer on this which I’ll link to below.)

          Consider an excitatory synapse at the tip of a dendritic branch. It receives a signal. If the synapse isn’t strong enough to trigger the action potential, nothing happens and the rest of the neuron isn’t disturbed. But if it is strong enough on its own, or if another nearby excitatory synapse signals around the same time and together they are strong enough, the action potential is triggered and starts propagating along the dendritic tree toward the axon.

          But the action potential could be stopped if it encounters a region of the tree where a sufficiently strong inhibitory synapse (or group of synapses) has just fired, making the local membrane not conducive to propagating the action potential. If so, the action potential ends well before it reaches the axon.

          Of course, this means that synapses more proximal (closer) to the axon have a much better chance of their signal reaching the axon than ones more distal (farther) from the axon.

          Given the thousands of synapses a typical neuron has along its dendritic branches, it’s not hard to see these branches as logic circuits in their own right. Only if the action potential reaches the axon do we say that the “neuron has fired”, but just getting to that point involves a lot of logical processing.

          The point for this discussion is that, conceptually, portions of these branches could be considered logic gates. Of course, biology isn’t going to be as tidy about it as we are, and so they almost certainly won’t always divide cleanly into AND, OR, or NOT gates, but a lot of messier hybrids. But a typical inter-neuron could be considered to have hundreds of logic gates.

          (Or we could reject this view and insist that the entire neuron is one hideously complicated logic gate. It really comes down to what view is productive for us at the time.)
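
          In cartoon form, one such branch-level “gate” might look like this (illustrative Python only; the function and numbers are invented, not a biophysical model):

          ```python
          def branch_reaches_axon(excitatory_sum, inhibitory_fired, threshold=1.0):
              # Excitatory synapses must jointly reach threshold to start a local
              # spike (an AND/OR-like condition), and a downstream inhibitory
              # synapse can veto it before it reaches the axon (a NOT-like veto).
              started = excitatory_sum >= threshold
              return started and not inhibitory_fired

          print(branch_reaches_axon(1.2, inhibitory_fired=False))  # True: signal gets through
          print(branch_reaches_axon(1.2, inhibitory_fired=True))   # False: vetoed en route
          ```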

          Here’s the Hank Green video on action potentials. Warning: it’s a bit information dense.

        • I love information dense! 🙂

          Okay, I see what you’re saying. A strong enough firing of inhibitory synapses can suppress excitatory synapses in the region if their action potentials aren’t as strong. There is a local effect that pre-stages the overall response of the neuron. This is a form of signal integration not easily modeled by logic gates. As you said, biology isn’t tidy! 🙂

          That video you linked is about signals along the axon once the neuron has fired. The adjacency it talks about isn’t related to what we’re talking about here, but the next video in the series is about synapses and he does touch on the local effect at about 6:30.

          (There’s also a nice bit in there about how drugs affect synapses — something discussed elsewhere in the comments.)

        • “This is a form of signal integration not easily modeled by logic gates”

          Definitely not by traditional digital ones. Setting aside that the number of inputs can vary wildly, we don’t usually have to account for the kind of morphing that happens as synapses gain or lose strength, transforming what we might consider an AND gate into an OR gate or vice versa.
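
          In terms of a toy threshold unit, that morphing is just the synapse strengths drifting across the firing threshold (illustrative Python with invented numbers):

          ```python
          weights = [0.6, 0.6]   # neither input alone reaches threshold: AND-like
          threshold = 1.0

          def gate(inputs):
              return sum(w for w, x in zip(weights, inputs) if x) >= threshold

          print(gate([1, 0]))    # False -- behaves like AND

          weights = [1.1, 1.1]   # the synapses strengthen with use...
          print(gate([1, 0]))    # True  -- ...and the same unit now behaves like OR
          ```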

          The chief benefit of the comparison with logic gates is the insight it may give us into what is going on. But the brain definitely can’t be shoehorned into our technological paradigms. It has to be studied on its own terms. (Which of course is giving us new technological paradigms.)

          On the videos, you’re right. Sorry. It’s been a while since I watched them, and I don’t always recall correctly which one covers what. Now I’m wondering how different (if there is any difference) propagation of the action potential down the axon is from propagation through the dendrites (that is, aside from myelin sheath insulation and the nodes of Ranvier). Something new to research.

        • Heh. In a word: Very!

          It’s a whole new level of “information dense” compared to Hank’s excellent science communication videos, but the Wiki article on dendrites would get your feet wet. (You have the background for it; many wouldn’t.)

          There is some serious signal processing going on there, and it doesn’t appear to work like axons do. I thought it was interesting that the dendritic spines are thought to be crucial to learning.

          Until this conversation I hadn’t realized how much actually does go on in the dendrites themselves as opposed to the soma. So that’s kinda neat; thanks for bringing it up!

          Maybe I can return the favor. I learned another new fact that, despite 40 years of reading about physics, I’d never heard expressed before.

          You’ve probably seen that chart with the curves showing how the EM, weak, and strong forces merge at a single value at extremely high energy (around 10^16 GeV). Those curves are what make us think those forces can be unified as a single force at high energy in a GUT. In particular, the way they all merge at the same value at the same energy.

          What I learned recently is that the chart depends on supersymmetry being true. And so far there’s no evidence at all that it is (in fact, the possible sectors for it have been narrowing — fewer and fewer places to look).

          This may have very interesting consequences for any GUT, but apparently there are enough problems with a GUT that they’ve fallen out of fashion and no one is really working on them anymore.

          But I had never realized that the “GUT” chart depended on supersymmetry. Without it, the curves still merge, but at different places.

        • Electro-weak theory is well established, both theoretically and experimentally, without recourse to supersymmetry. So GUTs merge the electroweak and strong forces, but there is no accepted GUT at present.

          I think that the predicted merging of electroweak and strong forces at a single energy is a motivation for assuming supersymmetry, rather than the other way around. It seems like too much of a coincidence for supersymmetry not to be true.

        • It’s been pretty elusive so far. There’s been no hint of it. Many of the simplest models of supersymmetry have been ruled out now, forcing supporters to find more and more complex versions.

          There is the hope that the 750 GeV diphoton “bump” found during the upgraded run at CERN turns into something real (and a lot of bated breath and crossed fingers about the coming runs). But the impression I have is that it is not thought to be a supersymmetry partner particle. (The joke is that there have been 750 papers speculating about what it might be… if it’s real at all. 🙂 )

        • Thanks for calling my attention to the Wiki article. The book I consulted was maddeningly vague.

          The main point I was interested in was whether the signal was passive (and fading) or active (as in the axon). Based on the Electrical properties section of the article, it sounds like it was once thought to be passive but is now understood to be active, using ion channels, and the description there sounds superficially similar to the axon to me. (Although I’d like to confirm that somewhere else. Not that Wikipedia isn’t a great resource, but this understanding doesn’t seem consistent even with the introduction of the article, which describes it as passive.)

          Another thing to consider about dendrites is that the post-synaptic part of the synapse is contained in their membranes, just as the pre-synaptic part is contained in the axon terminals. Everything you read about the sophistication of synapses is part of the sophistication of dendrites and axon terminals (and hence of neurons overall).

          Interesting on the forces merging. Thanks! In truth, other than a vague understanding that they did in fact merge at high energies, I didn’t know much about it. The experimental failures of supersymmetry seem like they’re going to have repercussions all over theoretical physics. Hopefully it will quiet some theoretical physicists who want to use mathematical elegance as a judge of theories instead of empirical results.

        • “Everything you read about the sophistication of synapses is part of the sophistication of dendrites and axon terminals (and hence of neurons overall).”

          Yeah, they’re sub-systems of neurons, but they are complicated and distinct little mechanisms in their own right, especially with regard to the operation of neurotransmitters. I’ve read a number of biologists refer to synapses as the most complicated biological mechanism of all.

    • I don’t mean to derail the discussion, but a light bulb went off when you started talking about “action potential.” Quick question: Do those “evoked potential” tests measure a synapse’s strength and duration? Or the “logic gate” in neurons? Or both? To put it another way, what are those tests measuring, exactly? I tend to hear, “They measure such and such pathway.” But if the “pathway” is this complicated, how do they measure it? Are they measuring thousands of synapses at once?

      Sorry about the randomness. The meaning of “evoked potential” has eluded me and I’ve only now thought to look into it. Google searches aren’t giving me the answers (unless I want to start reading medical papers, and I don’t think I could…)

      • No worries Tina. This strikes me as relevant and interesting. (Unless Steve has an issue with it, since it is his blog after all 🙂 )

        On measuring action potentials, particularly dendrite potentials, it sounds like it is profoundly difficult, which is why dendrite action potentials weren’t discovered until 2013. After my discussion with Wyrd, I found this article (which I kept meaning to leave here but forgetting, so glad you’re spurring me to do so): http://scitechdaily.com/neuroscientists-discover-dendritic-spikes-enhance-brains-computing-power/
        If you’re really feeling masochistic, it links to the actual Nature paper that describes it in detail.

        I think the main thing to bear in mind is that, once an excitatory synapse triggers the action potential, the potential ripples through and is fueled by the ion gates in the neural membrane, unless it hits a region where an inhibitory synapse has made conditions unsuitable to the ripple continuing. If the potential reaches the axon, from what I’ve read, the axon fires it forward faster and with more energy until it reaches the axon terminals and the next neuron.

  3. So if I understand correctly (and I am not a computer scientist, nor a neuro-scientist, as should be obvious), a neuron is analogous to a CPU.

    To use Wyrd’s comment on the balance of utility and application-specificity, it would seem that nature has evolved the neuron as the perfectly balanced device, at least for the kind of processing that goes on in the brain. And as Mike says, there is a lot of room for growth in engineered processors.

    I know that IBM is working on what it calls neurosynaptic chips, built from neurosynaptic cores, and that these consume orders of magnitude less energy than traditional chips. From IBM: “Cores operate—without a clock—in an event-driven fashion. Cores integrate memory, computation, and communication.”

    This sounds rather like the idea I was trying to get across.

    • On neurons and CPUs (I used “mini-core” above, although I’ve also seen them compared to full computers), just to repeat, I think we have to be extremely careful not to make too much of that comparison. Neurons are a tangle of execution and storage (something modern computers generally keep segregated), and a neuron in isolation can’t really accomplish anything. Its connections to other neurons, its synapses, are an integral part of its functionality.

      Indeed, memories are encoded in the strength of the synapses, which grow stronger with use but atrophy with non-use. (Though their strength also has short term variances as a result of recent activity, drugs, and overall state of fatigue.) A synapse can be excitatory, making an action potential firing more likely, or inhibitory, making the action potential less likely. As you noted in the post, whether the neuron fires is the result of the cumulative logic processing of all its input synapses that are receiving right then.

      Perusing Wyrd’s comment, I wasn’t aware of eight input AND gates and the like, or of the IBM research you mentioned. Interesting all.

      As I noted above, you could interpret adjacent input synapses along branches of the dendritic tree as logic gates similar to the ones in modern processors (albeit not as uniform), and so still view nervous systems as made up of relatively simple logic gates. Under this view, the entire neuron is a complex logic circuit. (When a logic circuit becomes complex enough to call it a “processor” seems like a matter of interpretation.)

      • Thanks Mike. I just want to stress that my intention was not to compare a neuron with a CPU or logic gate, but to suggest a way to improve on the current computing paradigm. If nature finds it useful to construct complicated nodes with many inputs and with storage included, I wonder why we don’t do the same?

        I was thinking out loud and wondering what sort of useful circuits could be constructed from such nodes. It seems like IBM is already working out how to make them and what can be done with them.

        • Oh, that’s what my comment was aimed at. (Although admittedly my interest in neuroscience led me to blather too much about how neurons and synapses work. Sorry!)

          My overall point was similar to yours, although with a caveat at the end that you could say that nature herself was sticking with the logic gate building blocks. Either way, I think studying how neurons work could give us insight into future designs.

        • “If nature finds it useful to construct complicated nodes with many inputs and with storage included, I wonder why we don’t do the same?”

          We always have, but they’re generally large systems. A library, for example, fits the basic description.

          But you’re talking about something specific, neurons, and our technology just isn’t quite to that point, yet. We’re only recently able to get a really clear picture of what neurons are all about. And synapses have been called the most complicated biological machine there is.

          There’s also the fact that binary makes the engineering infinitely simpler, so we went there first. (And per information theory, it’s all binary anyway. 🙂 )

    • “[I]t would seem that nature has evolved the neuron as the perfectly balanced device, at least for the kind of processing that goes on in the brain.”

      While they’re very good at doing what they do, I’m not sure that reflects a balance between utility and specificity. Neurons are extremely specific! 🙂

      They’re even specific with regard to function; there are different types of neurons! And as Mike mentioned, there are different types of synapses, and how they function varies considerably (they use different neuro-chemicals).

      They do have potentially thousands of connections to other neurons through the axon, but the neuron does reduce its inputs to a single output signal, so your point about information reduction does apply.

      (As an aside, the average number of connections is ~7,000, but some neurons have as few as two, while others can have tens of thousands. A human brain can have as many as 500 trillion synapses in all! That’s a lot of connections!)

      Something that’s important to understand here is that neurons and synapses are not binary gates, they’re analog signal processing devices. The output of a neuron is a signal, a pulse train. The pulse width and frequency carry (analog) information.
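
      A crude way to picture that (purely illustrative Python; real spike trains carry far more structure than this):

      ```python
      def pulse_train(input_level, duration=10):
          # Cartoon rate coding: a stronger (analog) input gives a denser pulse train.
          # input_level is an invented value between 0.0 and 1.0.
          period = max(1, round(1 / max(input_level, 0.1)))
          return "".join("|" if t % period == 0 else "." for t in range(duration))

      print(pulse_train(0.2))   # |....|....  (sparse firing)
      print(pulse_train(0.9))   # ||||||||||  (rapid firing)
      ```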

      “I know that IBM is working on what it calls neurosynaptic chips…”

      Has been for quite a few years. The project began with DARPA; check out the SyNAPSE program.

      Now that you bring it up, I remember reading about this. A hardware approach to brain simulation. (I’m of the opinion that a hardware approach is the most likely to succeed in replicating apparent consciousness.)

      These are specific devices in a sense, but you can use them to build intelligent systems for an infinite number of tasks. Exactly like the human brain is very specific in a sense, yet capable of an infinite number of tasks.

      It’s the aggregate that matters. “Neuron” chips are only good at being neurons (or neuron-like), and their individual application is very specific, but what you build with them can be general purpose. (An analogy might be that a transistor is a very specific device — just a switch — but a collection of them can be a general purpose tool.)

      I saw a good article on the IBM site: Introducing a Brain-inspired Computer.

      Sounds very much like what you had in mind!

  4. Tangent, I think from Aldous Huxley. Our everyday consciousness is a mechanism for destroying information. Of the massive chaos of sensory inputs we are exposed to every second, consciousness destroys the superfluous information and gives us the output we need in our quest to survive and flourish. Huxley’s point, if I recall, was that hallucinogens (mescaline, LSD) remove some of the filters and bring us back to a more primitive and expanded sense of awareness.

    • You’re right about consciousness as a tool for filtering out irrelevant information. Our brains are seemingly designed to draw out the most abstract understanding of complex situations. We might stand on a hillside at night, gazing at a complex array of glittering lights before us, and think “city”.

      My understanding is that conditions such as autism can disrupt the brain’s capacity to abstract information in this way. So an autistic savant has an enormous appreciation of detail, but a difficulty in drawing out a simple inference. Some have the ability to memorise and draw photo-realistic faces, but cannot guess what the person in the picture is feeling.

      As for drugs, my understanding is that they disrupt neural pathways. This might remove some of the filters, and lead to a more literal view of the world, with some meaning and higher-level interpretation blocked.

      • What some drugs do is suppress the action of inhibitory neuro-chemicals so the brain runs in a less constrained fashion. It no longer suppresses details or information that are normally damped out.

        An example is “trailing.” If you wave your hand rapidly in front of you, you see a blur of your hand and, if you concentrate, you can see that the blur extends behind the movement for some distance. There’s an after-image that persists briefly.

        Normally, because evolution has trained us about the physics of rapidly moving objects (i.e. they don’t actually blur), our brain is tuned to mostly suppress this. Hallucinogenic drugs remove this suppression and you see trails following moving objects.

        Hence the canonical question to those who’ve taken LSD, mushrooms (psilocybin), or peyote cactus buttons (mescaline): “Are you trailing, yet?”

  5. What I find interesting is how we might combine such complicated nodes into useful networks. Is it possible for us to program them, or do we have to let them “evolve” like neural networks?

    • The problem is that the information is spread out holistically and holographically, so it’s hard to point to any one part and say, “This is the X bit that does Y.” It’s a bit like trying to create a hologram pixel by pixel through a calculation (something I’m not sure we know how to do).

      We can copy holograms and trained neural networks, so once trained they can be replicated.

      • So maybe training followed by replication is the way to go. The human brain comes with certain pre-trained areas for doing standard jobs, and the combination of “standard” parts plus “adaptable” parts seems to work well. I could imagine a processor designed to recognise and identify objects, for example, being part of some robotic system.

  6. Yes, I am listening out for an announcement from CERN!
