Darwinian civilization


"Nature is red in tooth and claw", Charles Darwin is often quoted as writing (even though this turn of phrase actually stems from a poem by Alfred Tennyson, mourning the loss of his friend Arthur Hallam, a poem that appeared before the publication of Darwin's "Origin of Species"). Be that as it may, the phrase is designed to make us appreciate nature's cruelty, that thousands have to die for the rare variant to ascend to greater fitness, that progress must be purchased at the expense of unfathomable suffering. Evolution, we learn to understand,  is a bleak process, devoid of compassion, in a winner-take-all world. 

What does this say about us, the product of this process? Our genes were shaped for eons by teeth and claws; our purpose in life is to reproduce faster than the other guy (and prevent him from doing the same), is it not? Should we not capitulate to these tendencies bred into us via the eternal survival of the fittest, and realize that the meek and the poor are very unlikely to inherit the earth?

My view is that this is an altogether unevolved view of humanity. What is it about us humans that is remarkable, that is worth a pause? Is it our ability to think, plan, use tools, to create art? I'm afraid that if you think these are purely our domain, you should think again: animals can do all of this too; they can even paint portraits.

This is not what makes us special. What makes us special is precisely our ability to resist the Darwinian drive. People rose above animals the moment they created civilizations. You may want to debate what I mean by civilization, but my meaning is really quite pedestrian: a civilization is any organization or group of humans that engages in a division of labor, and protects its members from outside groups. From this point of view, a civilization is more than an extension of the "empathic circle" beyond the close-knit kin group. Civilization also encompasses cooperation (division of labor is a form of cooperation) and in particular protection. One of the distinguishing features of a civilization is, in my view, its anti-Darwinian tendency: to protect from elimination those that cannot protect themselves. I believe that if there is any nobility in humankind, it is this: empathy for fellow humans we are only distantly related to; caring for people without asking for a return, simply because it is the human thing to do.

I also understand that not everybody shares my views concerning the value of civilization. I realize that there are fellow humans that think we ought to return to a more Darwinian society where the strong rule the weak, and where the meek (by virtue of having "chosen" to be meek) should reap the genetic consequences of defeat. In this view of life, there is no place for losers.

But, may I offer a counterargument to this, shall I call it: "The Tea Party reads Darwin for the First Time" view?  

What follows may become a bit technical, so I'm afraid I may lose some of my Tea Party readers. But I do encourage you all to hang on.

"Nature, red in tooth and claw" emphasizes the strength and brutality of selection, but by itself selection cannot create progress. Progress, defined here as "increasing the fit" of an organism to its environment, requires variation. You immediately see that this is true: if all organism are identical (genetically and phenotypically, that is, in their appearance), then no amount of selection will help you if the environment changes, for example. We would all be doomed (identically so) if the new world is inhospitable to us. Because we would all be screwed in the same manner. Progress, in the light of a changing environment, can only come about if there is diversity. What if you and I are different enough so that I cannot cope with the changed environment, but you—as it turns out—can? Then you will found the lineage that will inherit the Earth. Because you were different. And I was not.

That this diversity (or variation, as it is more properly called within the field of population genetics, the mathematical formulation of Darwinism) is important for adaptation is old hat (so to speak), commonly going by the moniker of "Fisher's fundamental theorem" of natural selection. What is less well known is how populations go about maintaining the variation necessary to assure that they can adapt when times are a-changin'. Because here it is: if what makes us who we are is determined largely by our genes, then maintaining diversity requires maintaining a diversity of genes. But what if an individual possesses a gene that is just one change away from fantastic, but in the absence of that change is rather dull, or worse, inferior? This individual, one change away from greatness (and from founding the other lineage that will inherit the Earth), is vulnerable. It is meek. It is unprotected. The tooth and the claw will likely eliminate it, so that its (potential) greatness will never be revealed. Darwinian dynamics is cruel for sure, but also sometimes shortsighted. Couldn't we do better? Is it possible that by protecting the weak we actually foster the kind of "valley-crossing" events that Darwinian evolution has a hard time effecting, but relies on for the occasional fundamental change?
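To make the valley-crossing intuition concrete, here is a toy sketch of my own (a deterministic three-genotype model, not taken from the population-genetics literature; all parameter values are arbitrary). A wildtype can mutate into a low-fitness "valley" genotype, which in turn can mutate into a high-fitness "peak" genotype, and "protecting the meek" amounts to removing the selective penalty on the valley genotype:

```python
def evolve(s_valley, mu=1e-3, s_peak=0.1, generations=60):
    """Deterministic frequency dynamics on a wildtype -> valley -> peak
    genotype line. s_valley is the selective cost of the valley genotype;
    setting it to zero "protects the meek"."""
    w = [1.0, 1.0 - s_valley, 1.0 + s_peak]   # fitnesses
    f = [1.0, 0.0, 0.0]                       # genotype frequencies
    for _ in range(generations):
        # selection: reweight by fitness and renormalize
        g = [fi * wi for fi, wi in zip(f, w)]
        total = sum(g)
        f = [gi / total for gi in g]
        # one-way mutation along the line, at rate mu per generation
        flow01, flow12 = mu * f[0], mu * f[1]
        f = [f[0] - flow01, f[1] + flow01 - flow12, f[2] + flow12]
    return f

darwinian = evolve(s_valley=0.1)    # the valley genotype is penalized
protected = evolve(s_valley=0.0)    # the valley genotype is sheltered
assert protected[2] > darwinian[2]  # the peak is reached faster under protection
```

In this sketch, sheltering the intermediate genotype raises the frequency of the double mutant, because the valley population from which the peak is seeded is no longer held down at the mutation-selection balance.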

Perhaps the answer is "Yes, we can". But what is the cost of keeping around the meek? There is a cost, for sure. The meek are plentiful, because there are more ways to diminish genes than there are to improve them. The potential, however, is boundless. Within the field of evolutionary biology, we practitioners spend countless hours trying to understand the mechanisms that molecular biology uses to increase and maintain variation: recombination, negative frequency-dependent selection, increased mutation rates, linkage, and much more. But we compassionate humans can transcend molecular mechanisms: we can maintain diversity simply because we believe in giving people a chance, because every person on Earth has the right to attempt to realize their dreams and passions, to "live a healthy and productive life". And just perhaps, keeping around the "tired, the poor, the huddled masses yearning to breathe free" may be the thing that ensures the survival of the truly evolved species, one that understands that the "wretched refuse of your teeming shore" may very well constitute the genetic key to survival in tomorrow's changed world.

So, could it be that a compassionate and altruistic civilization could actually transcend Darwinian dynamics by outwitting the demon of selection, allowing an unheard-of level of valley crossing? It would be fitting, wouldn't it, given that the segment of the (U.S.) population that argues most loudly for promoting the survival of only the fittest in human civilization is precisely the one that has the most problems with the science of evolution.

Disclaimer: This blog post may or may not have been influenced by the Adami Lab's emphasis on understanding the features of fitness landscapes that make valley crossings a fundamental feature of Darwinian adaptation. 



Oh these rascally black holes! (Part 3)


Fortunately, at that point I already know how to calculate the capacity of quantum channels, because I was involved in this endeavor during what is now known as the "heyday" of quantum information theory. I know that in order to calculate the capacity of the black hole to transmit classical information, I have to calculate the shared entropy between a "preparer" and the radiation observed at future infinity. The preparer creates the physical quantum states that are to be used as signals (our particles and anti-particles) according to a list of symbols. So, if you want to send "0010011", the preparer sends "ppappaa", where "p" stands for particle and "a" for anti-particle. The entropy of the preparer is just the entropy of the symbols she sends. Fairly quickly, I realize that the shared entropy has just the form that appears in Holevo's theorem. At that point, I see that it is all over, because the capacity of the black hole channel is just the Holevo capacity, as it should be. And it is also clear that if there is no stimulated emission, then the capacity is exactly zero, and we have to look for someone or something to sue.

But now (back in 2004) we hear that Hawking is going to give a talk in Ireland where he will announce that he has solved the black hole information paradox. He will announce (we hear) that he was wrong all along, that information is preserved in black holes. Greg and I are dumbfounded. Has he figured this out at the same time as we did? I start writing up our results like there is no tomorrow, but can't finish until a day after Hawking gives his talk. And we read the reports, and exhale. His "solution" has nothing to do with ours, and many physicists are very skeptical whether it is a solution at all.

At almost exactly this point, Charles Seife from Science Magazine calls me to comment on Hawking's "discovery", and I explain my thoughts to him, but can't hold back my excitement about what we found. That's the story behind my comments in the article he wrote here.

So what happens now? Now comes a period where we submit our paper to Physical Review Letters, and fight with referees for two years. 

But we also realize that what the black hole is doing by stimulating the emission of radiation is acting like a quantum cloning machine, and that we should calculate the cloning fidelity. This we do, and the results are incredible. First, we notice that the mathematics of cloning is exactly like that at work in stimulated emission in quantum optics, and that just as in the case of quantum optics, the fidelity of cloning is nearly optimal! Well, it is if the black hole can reflect a little bit of radiation. If it absorbs everything (and not everyone knows that black holes don't necessarily absorb all radiation; it depends on the angular momentum of the incoming particle), then the fidelity of cloning is equal to the best you can do classically, namely classical state estimation. (In this case, classical state estimation comes down to Laplace's "rule of succession".) What this means is that if your initial quantum state has N particles in it, then you can reconstruct the initial state (using the particles that are emitted via stimulated emission, but without error correction!) with probability (N+1)/(N+2). This is also the estimated probability that the sun will rise tomorrow, given that you have observed it rise N times in the past! Go figure.
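Laplace's rule of succession drops out of a one-line Bayesian calculation: with a uniform prior over the unknown success probability, observing N successes in N trials gives a posterior predictive probability of (N+1)/(N+2) for the next success. A quick sketch of that calculation (my own illustration, using exact rational arithmetic):

```python
from fractions import Fraction
from math import factorial

def rule_of_succession(successes, trials):
    """Posterior predictive probability of one more success, assuming a
    uniform prior over the unknown rate p. Uses the Beta integral
    integral_0^1 p^a (1-p)^b dp = a! b! / (a+b+1)! to evaluate the ratio
    exactly; the closed form is (successes + 1) / (trials + 2)."""
    s, t = successes, trials
    num = Fraction(factorial(s + 1) * factorial(t - s), factorial(t + 2))
    den = Fraction(factorial(s) * factorial(t - s), factorial(t + 1))
    return num / den

# The sun has risen N times out of N: probability it rises tomorrow
for N in (1, 10, 100):
    assert rule_of_succession(N, N) == Fraction(N + 1, N + 2)
```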

Then we submit that paper to PRL. And then things go from bad to worse. After another endless series of reviews (which admittedly are difficult, because how many quantum gravity experts are also experts in quantum information theory and quantum cloning?) we finally give up, after receiving an (unsigned) Divisional Associate Editor (DAE) report that is, well, how can I put it... angry? The report is 11 pages long, and I'm pretty sure who wrote it. But I'm determined to remain a gentleman.

I decide to lay low for a while, in particular because I have other papers to write. For about six years I lay low, give or take. But my interest is renewed when I see a paper by Kamil Bradler on the capacity of the Unruh channel. The Unruh channel, as you can imagine, is kind of like the "Hawking channel", where the noise is not Hawking radiation but rather Unruh radiation. You guessed it. And Bradler flat-out calculates this capacity while acknowledging that we had derived the same exact result for black holes earlier! (You can get his paper from arxiv here). And he also notices that these are all cloning channels!

So I decide to take my four-page article and turn it into a long article for Physical Review D, acknowledging that it is simply too much for PRL. Well, and then I register for the APS March Meeting to talk about the paper.

My talk at the meeting was, well, eventful: the device that switches one laptop to another to display slides froze my computer. I start to talk while the session chair tries to reboot my computer and log back in; then the computer quits and shows a black screen. I decide to give the talk entirely without slides, which is just as well: the point I was trying to make can be conveyed with flailing arms only.

After the talk, I am asked whether stimulated emission by any chance sheds light on the AMPS controversy. This discussion, also known as the "firewall controversy", is about another paradox engendered by black holes. Without being too technical, the paradox involves the impossibility of being maximally entangled with two different systems (see John Preskill's description of the paradox).
(Illustration: Courtesy of John Preskill)

Now, it turns out that when you read papers about the original suggestion by Almheiri, Marolf, Polchinski, and Sully (hence "AMPS"), and there has been a deluge of such papers since the AMPS paper first appeared, there is very little calculation going on. Shouldn't we try to calculate what the entanglement of Alice (who is thrown into a black hole) actually looks like? Well, if you neglect stimulated emission, you can't really do this calculation. In particular, nobody (from what I can tell) is trying to write down the joint wavefunction of the black hole, the stuff that formed it (or at least a piece of that stuff), as well as Alice being thrown into the mix at late times and becoming entangled with it. But it turns out this is precisely what we have calculated in our paper here. If you skip to section 4 of that paper, that's where we discuss late-time particles accreting onto the already formed black hole. I don't write down the joint wavefunction there, but I have it in my notes, of course. There is no problem with entanglement monogamy, mainly because one of the central assumptions of AMPS, namely that the "Hawking radiation is pure", is incorrect. Hawking radiation is not pure at all. It is entangled with the black hole all right, but because of the presence of stimulated radiation, it is classically uncorrelated. This is not surprising because measuring Hawking radiation, after all, tells you nothing about the black hole. All the information is contained in the stimulated radiation.

I am reminded of the logical inference that if you start out with a statement that is false, you can derive any number of falsehoods from it. In the same manner, if you begin with a paradox (neglecting stimulated emission of radiation) you can generate an infinite number of other paradoxes from it.

I'm perfectly aware that I may be wrong. But let us first agree that:

1.) we should do calculations
2.) we should start with the right physics

Then let the chips fall where they may.






Oh these rascally black holes! (Part 2)

This is the 2nd part of the "Rascally Black Holes" Series. Part 1 is found here.

Now I have written the word "information". For the first time in this blog, actually. People working in the field of quantum gravity use this word a lot, but not always precisely. It has a precise meaning both in classical and in quantum physics. Let me convince you that serious problems may already exist with classical information when paired with black holes, so that I can talk about quantum information in another blog post.

Classical information is the shared entropy between two systems. It has never been anything other than that, and never will be. If you are talking about a set of states and their probability distribution, you are talking about entropy. If you think you have information but you don't know what it predicts, you don't have information, you have entropy. In particular, imagine I have 4 bits of information (which allow me to reduce the entropy of system X, say, by 4 bits). Suppose I encode these 4 bits in a string 4 million bits long, and the channel scrambles 2 million, say, of these bits. If the receiver of this string can reconstruct the 4 bits of information (via decoding), no information was lost. She can also reduce the entropy of X by 4 bits, and thus make exactly the same prediction that I, the sender, was able to make. The stuff that was lost was entropy, not information. The 3,999,996 bits that were used to encode the signal aren't predicting anything. Information is about prediction, after all, nothing else.
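Since I keep insisting that information is shared entropy, here is a minimal sketch (my own toy example, not from any paper) of how you would estimate it from samples, using I(X;Y) = H(X) + H(Y) - H(X,Y):

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values() if c)

def shared_entropy(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from (x, y) samples."""
    xs = Counter(x for x, _ in pairs)
    ys = Counter(y for _, y in pairs)
    return entropy(xs) + entropy(ys) - entropy(Counter(pairs))

# A faithful channel: the received symbol predicts the sent one exactly,
# so sender and receiver share one full bit.
faithful = [(0, 0), (1, 1)] * 50
# A useless channel: the receiver has one bit of entropy, but it predicts
# nothing about the sender, so zero bits of information.
useless = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25

assert abs(shared_entropy(faithful) - 1.0) < 1e-9
assert abs(shared_entropy(useless)) < 1e-9
```

The receiver in the second case still "has" plenty of entropy; it just predicts nothing, which is exactly the distinction drawn above.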

But I'll assume you know all this already, and if not you can read a little bit about it in a review I wrote.

Before I go on, there is one last thing you have to know about black holes (those that have no charge, and don't spin on their axis crazily). They can only be distinguished by their mass. That's it. No color, no smell, no weird shape. With this in mind, you may already carry out devilish thought experiments in your head. What if I throw two different things of equal mass into the black hole? After they are swallowed, can you tell me which one I threw in? The answer appears to be no, and we would have to seriously think about accusing the black hole of treacherous villainy. But first things first: let's burn some books.

OK, burning books is generally frowned upon (particularly by me), but let's keep the analogy for a moment. Let's imagine I have two different books. They weigh exactly the same. But one is, say, Shakespeare's "Hamlet", and the other, oh, Darwin's "Origin". (Two books I like a lot, by the way). 



Let's say I throw one or the other into the fire. They have exactly the same mass, and the cover and pages are exactly the same, except for the text (unlike in the pics above). And let's imagine that after they have burned up, the ashes are just ashes. Was information lost? In practice, yes. In principle, no. In terms of classical communication theory, we are dealing with a noisy channel. The receiver cannot access the book, but only watch the flames. And you may think that the flames cannot possibly tell us about the identity of the book I just incinerated. But indeed they could, in principle. When "Hamlet" burns up, the flames and smoke are just a tiny bit different from what happens when "Origin" burns up. It may be imperceptible to your eye because it is lost in the natural variability of fire, but it is there. It must be there, otherwise the laws of physics would be violated. You can imagine an ultra-sensitive measurement device that can distinguish the two, or you can take a page from the book written by Shannon, and make your life a ton easier. You see, it is really quite normal for noise in a channel to overwhelm the signal. But if there is any signal at all, then it can be protected from the noise via a process known as "encoding". This process makes the signal state identifiable, and you can imagine doing it by coating the books with some sort of phosphorescent substance before you throw them into the fire: red for "Hamlet", green for "Origin". Now you just sit back and watch the color of the flame, and then you know which book was just burned.

The thing you have to understand here is that coating the books in this manner is not cheating, because information was never lost in principle, only in practice. We can make things more practical using coding, and this way we will be able to recover information with arbitrary accuracy. 
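The phosphorescent coating is, in effect, a repetition code. A minimal sketch (my own illustration; the code length and noise level are arbitrary) of how a hopelessly noisy "flame" can still reveal which book burned:

```python
import random

def encode(bit, n=1001):
    """'Coat' the book: repeat its identifying bit n times."""
    return [bit] * n

def burn(codeword, flip_prob=0.3, rng=random):
    """A very noisy channel: each carrier bit flips with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in codeword]

def watch_the_flames(received):
    """Decode by majority vote: the overall 'color' of the flame."""
    return int(sum(received) > len(received) / 2)

rng = random.Random(42)
for book_bit in (0, 1):   # 0 = "Hamlet", 1 = "Origin"
    ashes = burn(encode(book_bit), rng=rng)
    assert watch_the_flames(ashes) == book_bit
```

Each individual carrier bit is badly scrambled, but the majority vote recovers the single encoded bit with overwhelming probability: entropy is lost, information is not.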

Now let's throw the books into a black hole. You may think: "Oh, the Hawking radiation is just like the fire, we can encode the information in some way and just watch the 'color' of the Hawking radiation". Only this does not work at all. The Hawking radiation is not burning the books. The stuff that is emitted has absolutely nothing to do with what falls in: the radiation emerging now is created out of vacuum fluctuations, while the books may have been thrown in eons ago. There is no causal connection whatsoever between the books and the vacuum fluctuations. In fact, Hawking himself acknowledged this right away: the radiation is completely and utterly thermal, which means that it depends on absolutely nothing except the temperature. And the temperature of the black hole is set precisely by the mass, and the mass of each book is the same. I don't know about you, but I find such a situation absolutely untenable, because if this were all true, we would have broken the law of "you can reverse anything". When I first read about this, I decided that it could not possibly be right, and embarked on figuring out why.

First, I replace the two books by just two particles, identifiable in some way. You can think of a particle and its anti-particle (of equal mass, of course), or of a photon with one or the other polarization. Then I mentally throw them into the black hole. And nothing coming out, and the black hole just sitting there, almost makes me physically sick, so I realize that just before the particle disappears behind the horizon, it must emit something, it simply must. Then I start reading. It's 2003, so I can't Google around. And I quickly happen upon the literature of the quantum theory of radiation, which describes how a black body responds to radiation. And I read a superb article by Einstein from 1917, where he describes how he derived Planck's radiation law using only what now look like common-sense assumptions, but which at the time must have looked like pure magic. In this ground-breaking paper, Einstein shows that when radiation is incident on a black body, three things happen: absorption, the spontaneous emission of radiation, and the stimulated emission of radiation. Stimulated emission is what gives you a laser: a particle comes in, two (identical ones) come out. Put a mirror on one side, and two particles re-enter, and four come out. Put a mirror on the other side... and you get my drift. (To make a real laser, you have to make one of the mirrors a little permeable, so that the beam can finally get out.)

Credit: inventors.about.com
Now, Hawking radiation has precisely Planck's form, but in Hawking's paper you only read about spontaneous emission. What happened to the stimulated part? In fact, I then realize that emitting stimulated particles is precisely what I need to get rid of that queasy feeling in my stomach! So I read Hawking's paper again and again, and there is no stimulated emission. Zilch, nada.

So then I sit down and redo Hawking's calculation, but I take care not to throw out the bath water when, umm, there's still something in it. The calculation usually goes like this: you write down the vacuum in flat spacetime (far away from the black hole), and then you transform it into another basis, namely the one in the far future, in the presence of a black hole. This transformation is called a "Bogoliubov transformation", and it creates the future vacuum, in which there are particles, from a past vacuum where there are none. Except that if any particles are actually forming the black hole, there should be some particles in the past too! So I just take the past vacuum with a single particle present and evolve it into the future; then I take the vacuum with a single anti-particle, and evolve it into the future. And lo and behold, everything changes! Suddenly, the radiation outside of the black hole at future infinity depends on what I threw in! Of course it has to, because the particle stimulated the emission of another particle before it went down the rabbit hole. Stimulated emission is just like making xerox copies. It's as if physics strips the information off the particle (which is still falling into the hole) to make sure that the laws of physics are upheld. And I don't feel so terrible anymore.
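In symbols (my own sketch of the standard two-mode Bogoliubov algebra, not the notation of our paper): write the late-time annihilation operator as a combination of the early-time operators, and compare the particle number at future infinity for an initial vacuum versus an initial one-particle state.

```latex
% Two-mode Bogoliubov transformation (out-operators in terms of in-operators):
%   b_k = \alpha\, a_k + \beta^{*} a_{-k}^{\dagger}, \qquad |\alpha|^2 - |\beta|^2 = 1 .
%
% Particle number at future infinity:
\langle 0 |\, b_k^{\dagger} b_k \,| 0 \rangle = |\beta|^2
   \quad \text{(spontaneous emission: Hawking's thermal term)}
\qquad
\langle 1_k |\, b_k^{\dagger} b_k \,| 1_k \rangle = 1 + 2\,|\beta|^2
   \quad \text{(the extra } |\beta|^2 \text{ is stimulated by the infalling particle)}
```

The second expectation value depends on what was thrown in, which is the whole point: the outgoing radiation is no longer insensitive to the initial state.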

Then I read some more, and I find that I'm not the only one who has noticed this. In fact, Jacob Bekenstein (working with his student Meisels) wrote a beautiful paper just a year after Hawking wrote his, where he essentially writes "Hold your horses, Mr. Hawking, you... kinda... forgot something". Using just statistical arguments of the form Einstein used in his really swell 1917 paper (I'm still looking for the right adjective), Bekenstein shows that if you have absorption, reflection, and spontaneous emission of radiation, then you must have stimulated emission. If not, you might get some, umm, paradoxes. Then my student Greg ver Steeg (who is helping me derive all the known results, and deriving our new ones in parallel with me) and I discover that Panangaden and Wald had derived Bekenstein's result in quantum field theory less than a year later. But both expressions look very different from the result that we derived. First we worry that we have nothing new; then we worry that our calculation, which uses methods completely different from those of Bekenstein and Meisels, as well as Panangaden and Wald, may be wrong. They look utterly different. The first thing we notice is that Bekenstein's result can be simplified enormously using some of the things we discovered. Then Greg codes both expressions into Mathematica and evaluates them numerically. And they agree exactly!

To make a long story slightly shorter, it took us another year to actually prove that the two expressions (ours and that of Bekenstein & Meisels, which was the same as that of Panangaden & Wald) can be turned into each other analytically, but there it was. Now all we had to do was prove that including stimulated emission leads to a non-vanishing capacity of the information transmission channel. Because if you can do that, then black holes are exonerated, proven innocent, free to go! Well, perhaps we still have to show that the whole initial state of the black hole can be reconstructed from the final state in principle, but one step at a time! Let's first convince ourselves that the most basic laws aren't fractured. So that we can sleep again, and not tiptoe downstairs in the middle of the night to check a calculation that is too hard to do in one's head. Really!

Part 3 to appear in due time. Stay tuned!

Oh these rascally black holes! (Part 1)


People are fascinated by black holes. You can't see them directly, they can be supermassive, and they are mysterious. Kind of like dinosaurs, which explains the attraction black holes hold for (some) kids. Among physics people, however, black holes seem to create more heated arguments than child-like wonder, more than any other topic. Black holes appear to violate some of our most sacred laws, and people cannot agree on whether those laws are truly violated, whether we should just go on with our merry lives in the light of such larceny, or what the universe is really doing to prevent this deplorable malfeasance.

So what evil thing are black holes accused of? One of the laws they are purportedly breaking is the law that all dynamics must be time-reversible (barring, perhaps, CP-violating processes). One way in which time-reversal invariance can be broken is by processes that lead to a coalescence of trajectories (in phase space, to be precise). If trajectories coalesce (two or more turn into one), then I cannot run time backward unambiguously ("Which branch should I take?"). The coalescence of phase-space trajectories implies that knowing the future does not allow us to predict the past. It is truly an abomination, and we have to insist that black holes stop it (if in fact they are guilty).
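The coalescence argument can be made concrete with a toy discrete "phase space" (entirely my own illustration): a dynamics in which two states map onto one is perfectly well defined forward, yet cannot be run backward without guessing.

```python
# A three-state toy dynamics in which the trajectories of A and B coalesce on C.
dynamics = {'A': 'C', 'B': 'C', 'C': 'C'}

def forward(state, steps=1):
    """Forward time evolution: always unambiguous."""
    for _ in range(steps):
        state = dynamics[state]
    return state

def preimages(state):
    """All states that could have preceded `state` one step earlier."""
    return sorted(s for s, t in dynamics.items() if t == state)

assert forward('A') == 'C' and forward('B') == 'C'
# Knowing the future does not determine the past: which branch to take?
assert preimages('C') == ['A', 'B', 'C']
assert preimages('A') == []
```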

Another law that we believe in is that wave functions evolve forward in time in a unitary manner, which implies that the entropy of a known state is and remains zero for all time. The latter implies that the (quantum) state is and remains predictable at all times. There is a direct relationship with our law of time-reversal invariance, as you see immediately: if all quantum trajectories can be reversed uniquely, then this means they never coalesce. Two trajectories that have coalesced cannot be time-reversed unambiguously; hence the relationship between predictability and time reversal. Such a coalescence of trajectories has many outrageous consequences: for example, the vanishing entropy of the initial state would turn, upon evaporation of the black hole (something I will explain below), into the non-vanishing entropy of the radiation field left behind. Unitarity would be lost, and with it our conviction that the universe is and remains pure. It is like the loss of innocence.

I will try to convince you here that it is the accused that is innocent, that black holes are just ordinary participants within cosmology. They are quantum, and they are heavy, they are black bodies, but they are not evil and they certainly do not violate any laws.

First, what is the evidence for this violation? This evidence goes back to a 1975 paper by Stephen Hawking, which introduced the world to his eponymous radiation. The paper is not an easy read, but I still encourage everyone who wants to enter the field to read it, and to replicate the calculation as much as he or she can. In my view, nothing replaces actually doing a calculation and re-deriving results. However, there are now much more succinct ways to derive the same result (I think I can do it on a single page), and I'll sketch those here (without equations, though). My simplification relies on ignoring the redshift (the lengthening of wavelengths as light moves within a gravitational field). This may appear problematic, but we can restore the redshift at the end of the calculation and consider its effect separately. The redshift does not change any of the arguments I give here. It's something practitioners do frequently when only the informational aspect is of concern.

The central result of Hawking comes from understanding what a vacuum is. In ordinary language, a vacuum is the absence of anything, but not in quantum field theory. In quantum field theory, the vacuum teems with fluctuations: particles and their respective anti-particles are constantly created in pairs, only to annihilate again (lest they violate our most sacred of laws: energy conservation). In fact, any time a pair is produced, it must borrow a little bit of energy (from the infinite bank of the universe), which may allow the two to travel apart from each other for a little bit. But of course, this attempt at separation between the twin particles must be fleeting, because energy bills must be paid. The pair annihilates in a flash, returning the borrowed good to the bank. Now suppose a system is accelerated, like, a lot. The pairs are still being produced. Now imagine that one pair borrows a lot of energy, and manages to move apart appreciably. Because the system is so strongly accelerated, it can happen that the pair can never be reunited (unless one travels faster than light). One of the particles has disappeared behind a "causal wall", and if you are part of the accelerated system, you will see only one of the two particles, which is now all alone. Now, this looks like radiation. There are physical particles in a system where there were none when the system was at rest. This curious fact was discovered independently by Stephen Fulling, Paul Davies, and William Unruh, but the effect is usually just abbreviated as the "Unruh effect". If you think about it, Unruh radiation makes a lot of sense.

Now let's imagine that the pairs are formed (and de-formed) not in an accelerated system, but instead near the horizon of a large black hole. If you paid attention in whatever class taught you general relativity (or whatever book or blog you read to replace said class), you know that Einstein's path to understanding gravity began precisely with seeing the analogy between accelerated systems and gravitational fields. At the edge of the horizon, the same thing can happen to the twin pair as happened to the accelerated twin pair: one may venture towards the horizon, and one may move the opposite way. But if the daring one goes beyond the point of no return, there will be no happy reunion: the twin moving away from the horizon looks like a particle: he is Hawking radiation. So: Hawking radiation is just like Unruh radiation, only near black hole horizons. Fine, but so what?

Credit: Science Magazine (2004)

Well, there is more. I told you somebody had to pay the energy bill. In this case it is the black hole that has to pay: there is nobody else around. (In the case of the accelerated observer, it is that observer/detector who loses mass.) If this process happens often enough, the black hole will lose all of its mass: it is said to have evaporated. So what? Well, it is a big deal, as I'll now demonstrate.
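To put numbers to the bill (again standard results, stated without derivation): the temperature of the radiation emitted by a black hole of mass M, and the time it takes to evaporate completely, go as

```latex
T_{\mathrm{H}} = \frac{\hbar c^3}{8\pi G k_B M}, \qquad
t_{\mathrm{evap}} \sim \frac{5120\,\pi G^2 M^3}{\hbar c^4}
```

For a solar-mass black hole, T_H is about 60 nanokelvin and the evaporation time is on the order of 10^67 years--vastly longer than the age of the universe, which is why complete evaporation matters mostly for very small black holes.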

The stuff that made the black hole isn't just stuff: it's particles and radiation. So yes, particles and radiation turn into particles and radiation, but the stuff that made the star can be seen as special: quantum mechanically, we can say that it is completely known. But after evaporation, nothing of that knowledge remains. After all, according to this picture, all there is to the black hole is mass: the details of how this mass was formed are completely gone. Trajectories have merged in a most heinous way. Information is lost. Or is it?

Part 2 is here.

Part 3 is here.

Your Conscious You

You! Yes, you, reading this… blog. A blog essentially unknown no less, with a name vaguely reminiscent of shapes and sounds. Let me ask you this: 

You are conscious, aren't you? You take this for granted, this feeling, because it is familiar to you, this feeling of "being there". But have you ever wondered what it is that makes this feeling possible? How it is possible that you can have feelings at all? What feelings even are?

As scientists, we ask many questions about many difficult subjects. We wonder about the universe, the origin of life, and how to stop disease. Once in a while, scientists stop and wonder about their capacity to wonder. Yes, we are intelligent, in the sense that we can solve logical puzzles and perform computations in our head. (Yes, I mean you, and, well... some of us.) But solving puzzles and being able to compute is not a set of skills reserved solely for the primus of primates. Computers do this very well too. Sometimes better. They play chess and drive cars, make money on Wall Street, and predict the weather days in advance. But we give these machines nary a second thought. They are machines, and we built them. We do not care about them. Why would we? They do not care about us. Of course they don't: they do not have feelings. We care about animals, in part because we think they have feelings. They may not compute the way we do (we believe), but feelings are important to us. Having feelings makes all the difference. Yet we hardly ever ask where they come from.

This is a bit strange because, between you and me, you are a machine too. No matter how much you think that you are special because you are conscious, the difference between you with your feelings, and this machine in front of you, with a computer screen accurately relaying to you the words that I typed into another such machine not that long ago (but possibly very far away from where you read this) is precisely consciousness: the ability to feel something, to experience anything. Yet you never wonder why you have this ability (and your cat and your dog), but not that machine. 

Can we even ask this question as scientists? Is consciousness, its origin, its construction, its evolutionary utility, fair game as an object of inquiry? Quite frankly, it hasn't been for the longest of times. As you can imagine, for those longest of times consciousness was synonymous with "soul", and any investigation of the origin and substance of the soul was the territory of that "other book", that also deals with origins but insists it has all the answers within 38,000 words, give or take. It took a Nobel laureate and his neuroscientist sidekick to make consciousness an object of serious scientific inquiry. Crick and Koch (yes, the famous Crick and the infamous Koch) famously asked: "If there is something in the brain that makes us conscious, we ought to be able to find which part it is. Show me the neural correlate of consciousness, and the mysteries surrounding it will be lifted".

But as the search for this neural correlate continues, it is slowly dawning on us that even if we find a particular place in our brain that turns consciousness on and off (should this place exist), we may still not understand how it works. For us to understand something, anything, don't we need to know how it works? Shouldn't we be able, for example, to make this "it" if we are to say that we understand it?

Can we make consciousness? Can we build conscious machines?


"Hold your horses!", I can hear you through the digital divide, "how can you propose to make something that, er …,  you do not, er,…  know  how to make"?  "Easy, my friend", I reply. "We do this every day, we do this all the time!. We have at our disposal a process, an algorithm you may say, that can make things, even though we have absolutely not the faintest idea how they (the things we make) work." 

"You've got to be kidding me!" I hear half of you say (while the other half goes: "I see where you're going with this!"). Yes, Darwinian evolution is a process that creates complex "things" without having to ask for blueprints, a business plan, a timeline to completion, and an estimate of market penetration within the next quarter. Evolution makes things that work, without theory, without understanding. (If you define by "evolution" things that ended up on the line of descent, rather than the myriad of failed attempts that ended up, well, not on the line of descent.) Evolution, most assuredly, makes things we don't understand. (Ask any biologist: if we understood all the things it produces we would not be blogging about biology. Or, reading blogs about biology. Or writing research papers. About biology.) 

"All right, fair enough", you exclaim," biological evolution can do it, in fact, DID it, I  stipulate. But can you?"

That is indeed question number one. (Question number two will follow in due time.) Now you're asking me, because I have a reputation for "evolving things". Yes, I have evolved things, even complex things. Have I evolved things that no human can design? That's a difficult question to answer unless you have a competition of sorts. Let me, at this point, simply say that this point may or may not be settled soon, and instead point to the bigger question: can you use artificial evolution (by which I mean evolution within a computer) to evolve consciousness?

If you have been paying attention (and I have zero doubt that you have, otherwise THIS word would not have been read by you), there is a question burning in the back of your mind. "Granted everything", you commence, "granted that you are the grand voodoo of digital evolution, and that you can 'evolve complex things', as you ineloquently put it--this helps you squat, because whatever you evolve, it will just be… something. How will you prove to me and others that it is consciousness that you evolved, unless the 'thing' walks out of the computer and says 'Cogito ergo sum'? (And speaks Latin for no apparent reason.)"

That's a fair point. How can we claim that we evolved something that we don't know how to make? Don't we have to have a measure of that thing, so that we can say: "On the scale of the Consciousness-O-meter we achieved 0.8, so bite me and go away. We'll be at 0.9 next quarter!"? Yes, if we had this, things would be so much easier. If only we had a way to measure that thing that we know not how to make. But clearly that's impossible. Or is it?

Can we measure something that we don't understand? (This is question number two, in case you are keeping track). 

Once you turn this question over in your mind, you realize that it is a somewhat subtle one. For example, we were able to measure gravitational acceleration, say, before we understood gravity. (I shouldn't really say "we" in the same sense as I've used "we" here before. "Galileo was able to measure gravity before he understood gravity." There, that's better.) The latter is certainly true. But Galileo was lucky in a way. Gravity is so pervasive that its effect on our measurement devices is almost impossible to miss. But what is the effect of consciousness on a measurement device? And being pervasive, for that matter, is not by itself what makes a phenomenon easy to measure, because consciousness, to us, is perhaps even more pervasive than gravity. Gravity makes things fall. Consciousness makes us experience things. But this experience is private: it occurs within the boundaries of our personality, and no measurement device can measure its strength unless it reaches deep within the entrails of our thoughts and dreams. The particular hue of our experience, the tint of our perception, those cannot be characterized, we are sure. Or can they?

Enter Giulio Tononi, a neuroscientist and sleep researcher at the University of Wisconsin. Giulio is the kind of guy who, as a kid, writes a letter to Karl Popper (the eminent philosopher who wrote extensively about the brain with John Eccles) asking whether he should devote his life to studying consciousness. Popper, incidentally, has been a hero of mine for a bunch of things, not least for dissing Niels Bohr for his outrageous views on quantum mechanics. So Tononi gets back a nice letter from Popper, and then figures he had better get an M.D. along with a Ph.D., so he can figure out how this thing we call consciousness comes and goes when we sleep. But as if this was not enough, he decides he needs to understand what it is, mathematically speaking, the brain does when it does its thing. Now, an M.D. and even a Ph.D. in neurobiology don't even come close to qualifying you to push the envelope in the mathematics of how the brain processes information. But Tononi, unfazed, developed the theory of integrated information processing that is now at the heart of the most ambitious attempt at capturing the peculiarity of the computer that is creating every word that I write. As opposed to the one that I type this into.

What is this theory? Well, if I could explain it in a sentence, it wouldn't be much of a theory. As far as theories go, it is pretty tame: it does not introduce new particles or forces, nor does it represent a radical departure from our thinking about computation, for example. What it does, instead, is to focus our attention on the different ways that computation can be achieved. Yes, Tononi fully acknowledges that our brain is a computing device: it takes in signals from the outside, manipulates them inside the computer we call our brain, and then comes up with decisions. This is, in essence, no different from what every single computing device that we humans have designed does. What Tononi says is that the brain does it a little bit differently.

You see, when you or I design a computing device (suppose we are in that profession), we make sure that we know what every element of that computer does, and that we know when it will send what result on to the next module. Because this is the way we make sure that the device does precisely what we want it to do. We call this design. A designer makes sure that the thing works. We do this by isolating parts, and connecting these parts in such a way that we have complete control. What we don't realize when we design this way is that we give up just about everything else in return for this predictability.

When you think about any object that you experience, do you experience its shape, color, smell, and feel separately? Do you remember how to use it--and the dreams you have had about it--independently from the shape, color, smell, and touch? No, you don't: they are all one, they are the experience of the object. But if you design your computer to process every aspect of an object independently (and at different times), you rob this computer of experiencing this object. Realizing this, Tononi set out to develop the mathematics of information integration, to fashion a mathematical measure that distinguishes processes that integrate from processes that don't. His quintessential non-integrating device is the digital camera's photoreceptor, which faithfully reflects the world in its millions of pixels, but integrates nothing. At the other extreme is our brain, which takes the input from our own photoreceptors, but then integrates the hell out of it and merges it with our other senses and memories, to create sensations. To create, ultimately, you.
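To get a feel for what a measure that distinguishes integrating from non-integrating processes might look like, here is a toy sketch (my illustration, not Tononi's actual Phi, which is considerably more involved): the "total correlation" of a system, i.e., how much the entropies of the parts exceed the entropy of the whole. For a camera-like system of independent pixels it vanishes; for units whose states are bound together, it is positive.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus the joint entropy.

    joint: dict mapping (x, y) outcome pairs to probabilities.
    Zero for independent parts; positive when the parts share information.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# "Camera": two independent fair pixels -- no integration at all.
camera = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}

# Toy "brain": two units whose states are perfectly bound together.
brain = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(camera))  # 0.0
print(total_correlation(brain))   # 1.0
```

Real Phi is far subtler than this (it asks how much information the whole generates above and beyond its *minimally* interdependent parts, over all ways of partitioning the system), but the toy captures the camera-versus-brain contrast Tononi starts from.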

After all this mathematics is done, we are left with a number that Tononi calls "Phi" (also the title of his most recent book), which characterizes the capacity of any computing machinery to integrate information. What does this construction do for you? Right now, nothing of course. But imagine Tononi could record the activity patterns of your brain as you read this... shall we call it a blog? Phi could tell you whether you are dreaming or in dreamless sleep, because your brain integrates information only during dreaming sleep. In the dreamless sort, you are unconscious: you have no experience at all. And as it so happens, Tononi is in a position to make precisely these types of measurements, as the director of a sleep laboratory. But this is not where the usefulness of Phi ends. If Phi can measure whether or not you are conscious, then shouldn't it be able to determine whether people in a comatose state are in a vegetative state without any consciousness at all, or else in a so-called locked-in state, fully conscious but unable to communicate this state to the outside world (think "The Diving Bell and the Butterfly")? There is now some evidence that Phi can distinguish between vegetative and conscious states, but definitive proof will only be available when we can record from brains in more sophisticated ways than we can today.

"Fine" you say, "I grant you that there may be a way to measure whether a computing machine integrates, or just reproduces, information". "Unless I'm caught in a coma, why should I care?"

And I tell you in return: "You have forgotten the first half of this blog post, haven't you--the half where I insisted that evolution, the process that knows not what it designs, can do things that people can't do." Yes, that's right: evolution can create computational systems that integrate information at high levels, because evolution is not concerned with beautiful, predictable design. Evolution is messy, opportunistic, and unpredictable. Evolution takes what works and runs with it, whether it is neat or not, whether it adheres to design standard ISO 9000 or not. More importantly, evolution creates designs that waste as little as possible, reusing the same components over and over, and doing all these things at the same time. As a result, evolved computing systems integrate information. At a massive scale.

So, what makes natural computing machines such as brains interesting is information integration, and from what we know today, this level of integration cannot be achieved by design. What should we learn from this? Well, I think you beat me to it, this time. We should use evolution as the process that leads us toward the undesignable, toward the computer that integrates (for reasons of expediency only) to such an extent that the objects it perceives don't just evoke a reaction, they evoke an experience. There is a good reason we should expect experiences to be selected for: they allow the recipient of the experience to make better predictions about the future, and further the survival of the species that has these experiences.

So, where do we go from here? I think that we need to put together a team that understands evolution, as well as information integration, and works on one singular goal: to harness the power of evolution to create complex systems that defy engineering. Such an endeavor must be supplemented by empirical research wherever possible: research into brain anatomy, functional imaging, connectome mapping, adaptation, learning, and more. The reason for this is that theory and computation ought not to proceed in a vacuum: after all, we have the thing we want to make right here in front of us. We would be foolish if we did not try to learn from it as much as possible. 

In my lab at Michigan State University, we are trying to push this envelope by evolving brains that are increasingly complex, by designing fitness landscapes that are increasingly complex. We do a lot of this work in collaboration with Giulio Tononi at the University of Wisconsin and Christof Koch at the Allen Institute for Brain Science. And we are trying to create a much larger team here at MSU, so that we can integrate empirical approaches, behavioral biology, neuroscience, cognitive science, as well as psychology and decision-making into this effort. Just a month ago, we had a Symposium here at MSU where some of the "players" in this endeavor made an appearance. There I was reunited with Jeff Hawkins, who graciously accepted my invitation to speak, and updated us on the progress in the realization of his own dream. (Readers of this blog remember our meeting at the Google Fest, for the simple reason that my recollection of this meeting was the first post to the blog.)

Other invited keynote speakers at the Symposium were Giulio Tononi, Daniel Wagenaar from Caltech, Ken Stanley from the University of Central Florida, and Mike Hawrylycz from the Allen Institute for Brain Science. Local speakers were Dave Knoester and Arend Hintze from my lab, as well as Natalie Phillips, who discussed reading Jane Austen inside an fMRI magnet, and Karim Oweiss, who showed amazing videos of monkey brains interfacing with computers.

So this brings us full circle. You. Yes, you! Do you think we should give this a shot? Do you think we can do this? Do you want to answer with a 2008 presidential campaign slogan?

I leave you to reflect on the photograph of the upper left corner of the blackboard in Richard Feynman's office at Caltech, as it was after he died (that's what you saw about 5 minutes ago on this blog, give or take depending on your speed of reading). I think I do not have to explain the quote's relevance to what we endeavor to do. If we cannot understand what we cannot build, then building it is our only path to understanding the brain, and ultimately what makes you you, and me me. It is not an easy path. It may be the hardest path of all. But as far as I can tell, it is the only path.