I wrote this essay in November 1994. It describes (as the name doesn't indicate) the various facets of learning, memory and intelligence, particularly as they relate to my research, and my journey across various fields revolving around them.
I had worked with just about every aspect of computer science that I could get my hands on, including image processing, algorithms, graph theory and the like (my resumé has more details). I had also explored the fields of robotics and AI (I worked with a company in New Delhi, India -- once again, see my resumé for more details) and had really enjoyed it. Among other things, we often discussed concepts of intelligence in machines, but it made more sense to me to first find out how REAL intelligence happened. However, biology was a completely different realm, and no one around me would touch it with a bargepole.
But when I heard a great talk on the immense opportunities that lay in the study of the Mind, that changed it all, and I started seriously thinking about it. When I came to UGA to pursue graduate study, I finally made the decision. I would study the biological basis of learning, memory, and (hence) intelligence.
However, six months of courses and lots of literature review always do oodles of good to anyone, and they have had their beneficial effects on me too. I am now fairly conversant with what goes on, and can actually say "Membrane depolarisation and Calcium induce c-fos transcription via phosphorylation of transcription factor CREB" without batting an eyelid, and actually understand what it means. Some of my old friends (who still move in computer-related networks) are impressed when I do that. Some others believe that I have gone bonkers, studying biology in this, the age of the information revolution, instead of computers. I like to tell them what I sincerely believe myself: there is no machine greater than the brain, and there is no research greater than studying it. Moreover, computer science (and all of science, for that matter) has a LOT to learn from biology (which is one of the many discoveries I've made in the past six months). And with the fast advance of computers and their strong foray into AI, it is becoming ever more necessary to elaborate the kinds of processes that learning (and eventually, intelligence) entails. All in all, I am particularly thrilled to be part of the field.
So much for that. Now let's get down to brass tacks. What is really interesting about research on the brain (on any facet of learning) is that it is pulled in many directions. To start with, there is the force of science itself, which is the desire to know how we learn, for its own sake, not that there is anything "special" about the brain. Then, there are those who like to believe that the brain is "special" in some way, and that it's more important to know how it works. And then, there are people from AI, and others like them, who believe that the brain is REALLY "special", and that it needs to be studied before all else. They're all keen to impart these principles of intelligence to machines and other such devices, envisioning robots that actually understand what you meant when you said "Oh! That's a beautiful rainbow!"
With so many different pulls, the field is bound to be interesting, and indeed it is. At one end of the spectrum are people like John Searle, who suffer from the conviction that the whole process of higher-level cognition (and ultimately 'consciousness') is so "special" that it will defy any attempt to study it, and that it is therefore useless to even try. To be fair to them, however, they do think that the biology of the brain can be studied: they just believe that consciousness and higher-level cognition, though a direct result of the biology, cannot be understood by understanding the biology. At the other end of the spectrum are people like Douglas Hofstadter -- whom Searle calls 'Strong AI people' -- who believe that higher-level cognition can, indeed, be studied, even if only as a process separate from the biology of the brain.
And then there are others. Roger Penrose presents arguments for why AI (as it is currently being studied) can never work, and then goes on to propose his own schema for how 'learning' really happens. This schema, in which microtubules within neurons are proposed as the real carriers of memory, has been widely discussed (mostly with careful scepticism), so I won't go into it here. For some competent reviews of the psychological implications of his claim (that problems in quantum physics are linked to problems in consciousness), the reader is referred to Psyche.
Then, there are several very accomplished scientists who don't argue at this (relatively grandiose) level, but are content doing their bit for the cause of science. Perhaps it is needless to argue at that level. Perhaps the 'let's just investigate and see what happens' approach is the best way out. After all, any research on the brain does tell us 'something' new. Of course, whether or not that 'something' helps us get a better understanding of 'learning', or 'consciousness', or anything like that, is debatable.
It is especially interesting that several different kinds of learning seem to have their roots in changing thresholds. For example, a simple neurobiological model for associative learning, based on a temporally specific threshold in the neurons involved, has been proposed. The same article also looks at simulations of simple higher-order features of classical conditioning, as well as of operant conditioning.
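To make the idea of a temporally specific threshold concrete, here is a minimal sketch in Python. It is emphatically NOT the cited model itself -- every name and number below is invented for illustration: a neuron's threshold drops transiently after a conditioned stimulus (CS), so an unconditioned stimulus (US) arriving inside that window fires the neuron and strengthens the CS synapse.

```python
# Hypothetical threshold-based associative learning: all parameters
# (threshold, window, weights) are made up for this illustration.

class ThresholdNeuron:
    def __init__(self, base_threshold=1.0, window=5, drop=0.5):
        self.base_threshold = base_threshold  # resting firing threshold
        self.window = window                  # how long the drop lasts (time steps)
        self.drop = drop                      # transient reduction after a CS
        self.cs_weight = 0.2                  # weak synapse for the CS
        self.us_weight = 0.8                  # strong synapse for the US
        self.last_cs = None                   # time of the most recent CS

    def threshold_at(self, t):
        """Threshold is transiently lowered inside the post-CS window."""
        if self.last_cs is not None and t - self.last_cs <= self.window:
            return self.base_threshold - self.drop
        return self.base_threshold

    def step(self, t, cs=False, us=False):
        drive = self.cs_weight * cs + self.us_weight * us
        fired = drive >= self.threshold_at(t)
        if cs:
            self.last_cs = t
        # Firing inside the CS window strengthens the CS synapse,
        # associating the two stimuli.
        if fired and self.last_cs is not None and t - self.last_cs <= self.window:
            self.cs_weight += 0.1
        return fired

neuron = ThresholdNeuron()
for trial in range(10):
    t0 = trial * 20
    neuron.step(t0, cs=True)       # CS alone: subthreshold at first
    neuron.step(t0 + 2, us=True)   # US inside the window: the neuron fires
print(neuron.step(300, cs=True))   # CS alone now fires -> prints True
```

The temporal specificity does the associative work: a US outside the post-CS window fires the neuron but leaves the CS synapse untouched.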
It is fast emerging that second messengers -- like Cyclic AMP -- play a key role in neuronal plasticity, and hence in learning and memory. To figure out the exact nature of such mechanisms, and their contributions to higher cognitive processes, such as consciousness, is a task for the future. However, in the short term, it is very possible to elucidate the relationships that exist in the metabolic pathways with(in) which second messengers interact. Such is the aim of our current research.
Interestingly, second messengers may also play a role in changing thresholds. To consider an example, slow excitatory synaptic potentials may summate with conventional fast excitatory synaptic potentials to cause a previously subthreshold input to trigger an action potential. According to this scheme, the duration of the slow synaptic potential would correspond to the duration of the memory. Moreover, increased cAMP levels in cells seem to provide a biochemical mechanism for encoding information about the temporal association of separate inputs to these cells. This information may be provided by the proximate and sequential interaction of Ca ions and serotonin (5-HT) -- or related neuromodulators -- with the adenylate cyclase complex (which converts ATP to cAMP). Evidence for such interaction comes from various studies, such as those by Ocorr et al. and Eliot et al.

Recently, there have also been (in my opinion, laudable) attempts at modeling neurons, such as the GENESIS simulation system. It is most gratifying to see that engineers seem to be waking up to biology. GENESIS, for example, goes into elaborate detail regarding the many functional features of real neurons, and takes into account almost all of the parameters that a biologist would care to feed into it. However, it seems to lack the (somewhat essential, I would argue) feature of neuronal plasticity. This is probably because considerable mystery still shrouds the concept, and (to put it mildly) all is not yet clear in that realm.
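The slow-plus-fast summation scheme lends itself to a toy calculation. Below is a minimal numerical sketch in Python; every number (threshold, amplitudes, time constant) is invented for illustration and carries no physiological authority.

```python
import math

# Hypothetical values, chosen only so the fast EPSP is subthreshold
# alone but suprathreshold when it summates with the slow potential.
THRESHOLD = 10.0   # firing threshold, in mV above rest (assumed)
FAST_EPSP = 6.0    # peak of the fast EPSP alone: subthreshold by itself
SLOW_PEAK = 7.0    # peak of the modulator-induced slow potential
SLOW_TAU = 50.0    # decay time constant of the slow potential, in ms

def slow_epsp(t, onset=0.0):
    """Slow synaptic potential: instantaneous rise, exponential decay."""
    if t < onset:
        return 0.0
    return SLOW_PEAK * math.exp(-(t - onset) / SLOW_TAU)

def fires(t_fast, t_slow_onset=0.0):
    """Does a fast EPSP arriving at t_fast summate past threshold?"""
    return FAST_EPSP + slow_epsp(t_fast, t_slow_onset) >= THRESHOLD

print(fires(10))    # soon after the slow potential starts -> True
print(fires(200))   # slow potential has decayed away -> False
```

With these particular numbers the summed input stays suprathreshold for SLOW_TAU * ln(SLOW_PEAK / (THRESHOLD - FAST_EPSP)) = 50 ln(7/4), roughly 28 ms after onset; in the scheme above, that window would be the lifetime of the 'memory'.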
Second messengers (and particularly cAMP) play a crucial role in learning and memory. To establish the exact basis of learning and memory, therefore, it is necessary first to explain the processes involving second messengers. To model such processes at all, it is necessary to begin establishing quantitative relationships between the metabolites, the inputs (both molecular and real-world) and the second messengers themselves. In the long term, such work is expected to inform the development of adaptive neural network architectures that capture the richness of behavioral patterns seen in biological systems. That is precisely what we intend to do.
Computationally, Adleman's DNA-based approach accomplishes nothing new, since all its solutions are essentially of the 'try all possibilities' kind. The conveniently small size and easy maneuverability of DNA strands allow a massively parallel 'DNA computer' to use brute force: try out all possible combinations and (literally!) report the solution. Practically, though, it is a neat way of solving computationally hard problems that would take conventional computers years.
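As a concrete picture of what 'try all possibilities' means here, the sketch below does serially in Python what the DNA strands do chemically in parallel: generate every candidate path through a small directed graph and keep only the valid ones. The graph is a made-up illustration, not the instance Adleman actually used.

```python
from itertools import permutations

def hamiltonian_path(n, edges, start, end):
    """Brute-force search for a directed Hamiltonian path from start to end."""
    edge_set = set(edges)
    middles = [v for v in range(n) if v not in (start, end)]
    # Every ordering of the middle vertices is a candidate path --
    # exactly the 'generate everything, then filter' strategy.
    for middle in permutations(middles):
        path = (start,) + middle + (end,)
        # Keep the path only if every consecutive pair is an edge.
        if all((a, b) in edge_set for a, b in zip(path, path[1:])):
            return path
    return None

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 4), (0, 2)]
print(hamiltonian_path(5, edges, 0, 4))   # -> (0, 1, 2, 3, 4)
```

The serial version visits up to (n-2)! candidates one by one, which is exactly why it takes conventional computers years on large instances; the DNA version forms all the candidates at once in the test tube.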
It has the classical symptoms of a great discovery: it's obvious, and you wonder why no one ever thought of it before. But, as Adleman says, it is too early to be optimistic or pessimistic about it. Whether this will lead to great things, time (and several researchers) will tell.