Life, Research, and Everything

I wrote this essay in November 1994. It describes (as the title doesn't quite indicate) the various facets of learning, memory, and intelligence, particularly as they relate to my research, and my journey across the various fields revolving around them.

First of all, a journey...

Not so long ago, I used to be a computer science person, proud of what my rapidly expanding field had achieved in a relatively short lifetime. All in all, I was happy with life, looking at a career in some computer-related industry. And then it all changed.

I had worked with just about every aspect of computer science that I could get my hands on, including image processing, algorithms, graph theory and the like (my resumé has more details). I had also explored the fields of robotics and AI (I worked with a company in New Delhi, India -- once again, see my resumé for more details) and had really enjoyed it. Among other things, we often discussed concepts of intelligence in machines, but it made more sense to me to first find out how REAL intelligence happened. However, biology was a completely different realm, and no one around me would touch it with a bargepole.

But when I heard a great talk on the immense opportunities that lay in the study of the Mind, that changed it all, and I started seriously thinking about it. When I came to UGA to pursue graduate study, I finally made the decision. I would study the biological basis of learning, memory, and (hence) intelligence.

First Steps

Biology (not unlike computer science) has a marked tendency to swamp anyone who tries to enter the field at any level other than kindergarten. It did the same to me. As soon as I attempted to understand the various issues involved, I found myself adrift in a sea of information, looking blankly at words that meant nothing (to me) but sounded really serious and meaningful (I hold the same view of Latin, Greek, and various other languages).

However, six months of courses and lots of literature review always do oodles of good to anyone, and they have had their beneficial effects on me too. I am now fairly conversant with what goes on, and can actually say "Membrane depolarisation and calcium induce c-fos transcription via phosphorylation of the transcription factor CREB" without batting an eyelid, and actually understand what it means. Some of my old friends (who still move in computer-related networks) are impressed when I do that. Some others believe that I have gone bonkers, studying biology in this, the age of the information revolution, instead of computers. I like to tell them what I sincerely believe myself: there is no machine greater than the brain, and there is no research greater than to study it. Moreover, computer science (and all of science, for that matter) has a LOT to learn from biology (which is one of the many discoveries I have made in the past six months). And with the fast advance of computers and their strong foray into AI, it is becoming even more necessary to elucidate the kinds of processes that learning (and eventually, intelligence) entails. All in all, I am particularly thrilled to be part of the field.

The Field

How the brain codes, stores, and retrieves memories is, of course, among the most important and baffling questions in science. It is believed that the uniqueness of each human being is due largely to the memory store -- the biological residue of memory from a lifetime of experience. The cellular basis of this ability to learn can be traced to simple organisms. In recent times, understanding of the biological basis of learning and memory has undergone a revolution. It is clear that various forms and aspects of learning and memory involve particular systems, networks, and circuits in the brain, and it now appears possible to identify these circuits, localize the sites of memory storage, and analyze the cellular and molecular mechanisms of memory and learning [16].

So much, well said. Now let's get down to brass tacks. What is really interesting about research on the brain (on any facet of learning) is that it is driven from many directions. To start with, there is the force of science itself: the desire to know how we learn, for its own sake, not because there is anything "special" about the brain. Then there are those who like to believe that the brain is "special" in some way, and that it is therefore more important to know how it works. And then there are the people from AI, and others like them, who believe that the brain is REALLY "special" and needs to be studied before all else. They are all keen to impart these principles of intelligence to machines and other such devices, envisioning robots that actually understand what you meant when you said "Oh! That's a beautiful rainbow!"

With so many different pulls, the field is bound to be interesting, and indeed it is. At one end of the spectrum are people like John Searle [15], who suffer from the conviction that the whole process of higher level cognition (and ultimately 'consciousness') is so "special" that it will defy any attempt to study it, and is, therefore, useless to even try. To be fair to them, however, they do think that the biology of the brain can be studied: they just believe that consciousness and higher level cognition, though a direct result of the biology, cannot be understood by understanding the biology. At the other end of the spectrum are people like Douglas Hofstadter [7] -- whom Searle calls 'Strong AI people' -- who believe that higher level cognition can, indeed, be studied, even if only as a separate process from the biology of the brain.

And then there are others. Roger Penrose [13] presents arguments for why AI (as it is currently being studied) can never work, and then in [14] goes ahead and proposes his own schema for how 'learning' really happens. This schema, in which the microtubules in neurons are proposed as the real carriers of memory, has been widely discussed (mostly with careful scepticism), so I won't go into it here. For some competent reviews of the psychological implications of his claim (that problems in quantum physics are linked to problems in consciousness), the reader is referred to Psyche.

Then, there are several very accomplished scientists who don't argue at this (relatively grandiose) level, but are content doing their bit for the cause of science. Perhaps it is needless to argue at that level. Perhaps the 'let's just investigate and see what happens' approach is the best way out. After all, any research on the brain does tell us 'something' new. Of course, whether or not that 'something' helps us get a better understanding of 'learning', or 'consciousness', or anything like that, is debatable.

From Biological to Artificial Neural Networks

Artificial neural networks (ANNs) came into existence after the first mathematical neuron model (the M-P model) was proposed and applied to construct neural nets based on simple logic calculus in 1943 [11], and they have played a very important role in promoting studies in information science, automation, and computer systems theory. Basically, such a network is a directed graph with a "neuron" at each node. Each of these neurons is simply a sigmoidal summer, whose output changes sharply when the sum of its inputs exceeds a certain threshold. Simple as they are, such networks can be "taught" to model complex systems fairly accurately. As a result, ANNs find wide-ranging applications, from agriculture to astrology.
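The "sigmoidal summer" idea is small enough to fit in a few lines. The following is a minimal sketch of my own (the function name, weights, and threshold values are illustrative, not taken from any particular model): the neuron sums its weighted inputs and passes the result, offset by a threshold, through a sigmoid.

```python
import math

def sigmoid_neuron(inputs, weights, threshold):
    """One 'sigmoidal summer': output swings from near 0 to near 1
    as the weighted input sum crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-(total - threshold)))

# With a threshold of 1.0, a weak input stays near 0, a strong one near 1.
weak = sigmoid_neuron([0.1, 0.1], [1.0, 1.0], threshold=1.0)
strong = sigmoid_neuron([2.0, 2.0], [1.0, 1.0], threshold=1.0)
print(weak, strong)
```

Wiring many such nodes into a directed graph, and adjusting the weights from examples, is all that "teaching" such a network amounts to.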

The role of changing threshold

Non-linear properties, such as threshold, play a key role in the information processing of biological neural networks. However, most ANNs have used the M-P model, in which only a time-invariant threshold is considered. It has been demonstrated that a dynamic threshold in the neuron model used in an ANN improves its performance [6, 8, 18]. Various models for dynamic thresholds have been proposed in the past [6, 8, 17, 18], but it has never been made clear how the threshold variation was formulated, or how it relates to threshold variation in real (biological) neurons.
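To make the idea concrete, here is one simple formulation of a dynamic threshold -- my own illustration, not the specific models of the papers cited above. The threshold jumps after each spike and then relaxes back toward its resting value, which gives the neuron a crude refractory period.

```python
def simulate_dynamic_threshold(inputs, base_theta=1.0, bump=0.8, decay=0.5):
    """Binary neuron whose threshold rises after each spike and
    decays exponentially back toward its resting value."""
    theta = base_theta
    spikes = []
    for x in inputs:
        fired = x > theta
        spikes.append(fired)
        # Relax toward the resting threshold, then add a bump if we fired.
        theta = base_theta + (theta - base_theta) * decay + (bump if fired else 0.0)
    return spikes

# A constant input just above the resting threshold cannot fire the
# neuron on every step, because each spike raises its own threshold.
print(simulate_dynamic_threshold([1.2] * 6))
```

A fixed-threshold M-P neuron would fire on all six steps here; the history-dependent threshold is what spaces the spikes out.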

It is especially interesting that several different kinds of learning seem to have their roots in changing thresholds. For example, a simple neurobiological model for associative learning based on a temporally specific threshold (in the involved neurons) has been proposed in [3]. The same article also looks at simulations of simple higher-order features of classical conditioning as well as operant conditioning.

The cellular basis of memory

Many researchers have looked at the cellular basis of memory. Eric Kandel proposed the use of Aplysia as a model system, which has since gained widespread popularity. Several behavioral patterns in Aplysia have been examined (such as the defensive siphon and tail withdrawal reflexes, the inking response, and the like). Several forms of learning have now been identified in this species, and most of these have begun to be investigated. Byrne et al. [4] provide a good review of the role of second messengers in various forms of learning in Aplysia, and in [3] they present a comprehensive examination of the issues involved in associative learning.

It is fast emerging that second messengers -- like Cyclic AMP -- play a key role in neuronal plasticity, and hence in learning and memory. To figure out the exact nature of such mechanisms, and their contributions to higher cognitive processes, such as consciousness, is a task for the future. However, in the short term, it is very possible to elucidate the relationships that exist in the metabolic pathways with(in) which second messengers interact. Such is the aim of our current research.

Interestingly, second messengers may also play a role in changing thresholds. To consider an example, slow excitatory synaptic potentials may summate with conventional fast excitatory synaptic potentials to cause a previously subthreshold input to trigger an action potential. According to this scheme, the duration of the slow synaptic potential would correspond to the duration of the memory. Moreover, increased cAMP levels in cells seem to provide a biochemical mechanism for encoding information about the temporal association of separate inputs to these cells [4]. This information may be provided by the proximate and sequential interaction of Ca ions and serotonin (5-HT) -- or related neuromodulators -- with the adenylate cyclase complex (which converts ATP to cAMP). Evidence for such interaction comes from various studies, such as those by Ocorr et al. [12] and Eliot et al. [5].

Recently, there have been (in my opinion, laudable) attempts at modeling neurons, such as the GENESIS simulation system [2]. It is most gratifying to see that engineers seem to be waking up to biology. GENESIS, for example, goes into elaborate detail regarding the many functional features of real neurons, and takes into account almost all of the parameters that a biologist would care to feed into it. However, it seems to lack the (somewhat essential, I would argue) feature of neuronal plasticity. This is probably because there is still considerable mystery shrouding the concept, and (to put it mildly), all is not yet clear in that realm.
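The summation scheme described above can be caricatured in a couple of lines. This is only a toy (the millivolt figures and the function name are made up for illustration): a fast EPSP that is subthreshold on its own crosses threshold while riding on a slow potential.

```python
def crosses_threshold(fast_epsp, slow_epsp, threshold=10.0):
    """Toy summation: the cell spikes only if the fast EPSP plus
    the lingering slow potential exceeds the threshold (in mV)."""
    return (fast_epsp + slow_epsp) > threshold

# An 8 mV fast EPSP is subthreshold alone, but the same input fires
# the cell while a 4 mV slow synaptic potential is still decaying.
print(crosses_threshold(8.0, 0.0))  # no spike
print(crosses_threshold(8.0, 4.0))  # spike
```

In this picture, the lifetime of the slow potential is exactly the window during which the "memory" changes the cell's behavior.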

A Note for the Future

Several things are clear. The long-term goal is to explain all of learning and memory, but that is still a distant dream. In the shorter term, some of the processes taking place in the brain have begun to be investigated, but much more needs to be done. It is a well-established fact that dynamic analysis of biochemical pathways has several advantages [9]. Primarily, it allows simulation models to be created from such data; these models can not only be used to test hypotheses themselves, but can also lead to a general improvement in the response of neural networks into which they might be incorporated. (Models like GENESIS [2] will be prime beneficiaries of such work.) Moreover, such knowledge bases provide a generic framework into which the work of many different teams can be integrated with natural ease.

Second messengers (and particularly cAMP) play a crucial role in learning and memory. To establish the exact basis of learning and memory, therefore, it is necessary to first explain the processes related to second messengers. To model such processes at all, it is necessary to begin establishing quantitative relationships between the metabolites, the inputs (both molecular and real-world inputs) and the second messengers themselves. In the long term, such work is expected to impact the development of adaptive neural network architectures that capture the richness of behavior patterns seen in biological systems. We intend to start doing precisely that.


DNA Computation

Recently, there has been a flurry of activity in a new-found field called DNA computation, in which strands of DNA are used to solve computationally hard problems. The approach was pioneered by Len Adleman [1], and is now being pursued by several researchers, most noticeable among them Richard Lipton [10]. The approach takes advantage of the various DNA endonucleases, which can recognize specific sequences once the various combinations of DNA have been formed in a test tube. So far, several NP-hard problems have been attacked. Adleman showed a solution for a small instance of the directed Hamiltonian path problem (a close relative of the traveling-salesman problem), and Lipton has extended the technique to solve the satisfiability problem, among others. Several other problems are fast being tackled, and a 'universal' DNA computer was proposed at a recently held conference at Princeton.

Computationally, nothing new has been accomplished, since all the solutions are essentially of the 'try all possibilities' kind. The conveniently small size and easy maneuverability of DNA strands allow a massively parallel 'DNA computer' to use brute force: try out all possible combinations (literally!) and report the solution. Practically, though, it is a neat way of solving computationally hard problems that would take conventional computers years to solve.
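In software, the 'try all possibilities' strategy that the test tube performs in parallel looks like the following sketch (my own toy example on a 4-vertex graph, not Adleman's published 7-vertex instance): enumerate every ordering of the vertices and keep the first one whose consecutive pairs are all edges.

```python
from itertools import permutations

def hamiltonian_path(n, edges, start, end):
    """Brute-force search for a directed path visiting every vertex
    exactly once -- the software analogue of testing every DNA strand."""
    edge_set = set(edges)
    for middle in permutations(set(range(n)) - {start, end}):
        path = (start,) + middle + (end,)
        if all((a, b) in edge_set for a, b in zip(path, path[1:])):
            return path
    return None

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
print(hamiltonian_path(4, edges, 0, 3))  # → (0, 1, 2, 3)
```

The loop body is trivial; the cost is the factorial number of orderings, which is precisely what the molecular approach absorbs by forming all candidate strands at once.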

It has the classical symptoms of a great discovery: it's obvious, and you wonder why no one ever thought of it before. But, as Adleman says, it is too early to be optimistic or pessimistic about it. Whether this will lead to great things, time (and several researchers) will tell.


  1. Leonard M. Adleman. 1994. Molecular Computation of Solutions to Combinatorial Problems, Science, 266:1021-1024.
  2. James M. Bower and David Beeman. 1995. The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, Springer Verlag, New York.
  3. John H. Byrne, D. A. Baxter, D. V. Buonomano and J. L. Raymond. 1990. Neuronal and Network Determinants of Simple and Higher Order Features of Associative Learning: Experimental and Modeling Approaches, Cold Spring Harbor Symposia on Quantitative Biology, 55:175-186.
  4. John H. Byrne, Raymond Zwartjes, Ramin Homayouni, Stuart D. Critz and Arnold Eskin. 1993. Roles of Second Messenger Pathways in Neuronal Plasticity and in Learning and Memory: Insights Gained from Aplysia. In Advances in Second Messenger and Phosphoprotein Research, Eds. Shirish Shenolikar and Angus C. Nairn, 47-108, Raven Press Ltd., New York.
  5. L. S. Eliot, Y. Dudai, Eric R. Kandel and T. W. Abrams. 1989. Ca/Calmodulin Sensitivity May Be Common to All Forms of Neural Adenylate Cyclase, Proceedings of the National Academy of Sciences, USA, 86:9564-9568.
  6. Q. Gan and Y. Wei. 1992. Neural Modeling with Dynamically Adjustable Threshold and Refractory Period, Biosystems, 27:137-144.
  7. Douglas R. Hofstadter. 1979. Gödel, Escher, Bach. Basic Books, New York.
  8. D. Horn and M. Usher. 1989. Neural Networks with Dynamic Thresholds, Physical Review A, 40:1036-1044.
  9. Peter D. Karp and Michael L. Mavrovouniotis. 1994. Representing, Analyzing, and Synthesizing Biochemical Pathways, IEEE Expert, 9(2):11-21.
  10. Richard J. Lipton. 1995. DNA Solution of Hard Computational Problems, Science, 268:542-545.
  11. W. S. McCulloch and W. Pitts. 1943. A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics, 5:115-133.
  12. Karen A. Ocorr, E. T. Walters and John H. Byrne. 1985. Associative Conditioning Analog Selectively Increases cAMP Levels of Tail Sensory Neurons in Aplysia, Proceedings of the National Academy of Sciences, USA, 82:2548-2552.
  13. Roger Penrose. 1990. The Emperor's New Mind. Oxford University Press, New York.
  14. Roger Penrose. 1994. Shadows of the Mind. Oxford University Press, New York.
  15. John R. Searle. 1992. The Rediscovery of the Mind, MIT Press, Cambridge, MA.
  16. Richard F. Thompson and Nelson H. Donegan. 1989. Learning and Memory, Neural Mechanisms. In Learning and Memory, Birkhäuser Boston, Inc., 5-7.
  17. L. Wang and J. Ross. 1991. Variable Threshold as a Model for Selective Attention, (De)sensitization, and Anesthesia in Associative Neural Networks, Biological Cybernetics, 64:231-241.
  18. H. Yanai and Y. Sawada. 1990. Associative Memory Network Composed of Neurons with Hysteretic Property, Neural Networks, 3:223-228.